diff --git "a/deduped/dedup_0510.jsonl" "b/deduped/dedup_0510.jsonl"
new file mode 100644
--- /dev/null
+++ "b/deduped/dedup_0510.jsonl"
@@ -0,0 +1,38 @@
+{"text": "The gasoline internal combustion engine has more than 100 years of intense development behind it. But now three researchers from the Massachusetts Institute of Technology (MIT) have modified it in a way that elevates efficiency by a remarkable 25%, an advance that could greatly mitigate greenhouse gas emissions and offers compelling advantages over hybrids and diesels. \u201cThis has real potential,\u201d says David Cole, chairman of the Center for Automotive Research, a nonprofit organization of the University of Michigan. The design logic is simple. One can alter an engine to create greater compression of the fuel/air mixture within each cylinder, raising thermodynamic efficiency. One can also add a turbocharger, which force-feeds more fuel/air mixture into the cylinders. This makes it possible to get more power out of an engine, or to downsize an engine without losing power, making it still more efficient. The problem: boosting compression also boosts temperature, and too much heat can ignite the fuel/air mixture prematurely, causing potentially damaging engine \u201cknock.\u201d But Daniel Cohn and Leslie Bromberg of MIT\u2019s Plasma Science and Fusion Center, and John Heywood of MIT\u2019s Sloan Automotive Lab figured out that a little squirt of ethanol into the cylinder from a separate tank could cool it in the same way that rubbing alcohol cools the skin\u2014by vaporizing, then absorbing excess heat. The researchers have formed a company, Ethanol Boosting Systems (EBS), and have drawn several prominent figures on board, including Neil Ressler, former chief technology officer of Ford Motor Company. According to Calculations of Knock Suppression in Highly Turbocharged Gasoline/Ethanol Engines Using Direct Ethanol Injection, a 2006 MIT report, and bench engine tests by Ford, the knock limit can be vastly alleviated, and unpublished results indicate that a 25% increase in efficiency should therefore be attainable. That would reduce carbon dioxide emissions by about 20%, says Cohn. The engine\u2019s alcohol consumption would be minimal, because the extra cooling is unnecessary under light loads, such as steady driving at low to moderate speeds. Although not quite as efficient as the best full hybrid systems, the EBS is far simpler, because it needs no electric motor, extra batteries, or complex software. Cohn says those factors would shave $2,000\u20134,500 off the cost relative to a full hybrid. The EBS and full hybrid systems would have similar emissions profiles. The two engines would produce roughly the same amount of greenhouse gas emissions, but the EBS would otherwise be cleaner, emitting fewer nitrogen oxides (NOx) than the diesel engine, and less particulate matter. Many U.S. cities have nonattainment zones for NOx, which contributes to ground-level ozone and can damage lung tissue and vegetation. An EBS engine would also be a couple of thousand dollars cheaper than a diesel engine.
In a column in the July 2007 issue of Car and Driver, editor-in-chief and engineer Csaba Csere praises the EBS technology and says that if some seemingly manageable problems are solved\u2014for example, maintaining fuel economy under real-world conditions of elevated temperatures and substandard fuel\u2014EBS engines could be powering cars early in the next decade."}
+{"text": "CellML is an XML based language for representing mathematical models in a machine-independent form which is suitable for their exchange between different authors, and for archival in a model repository. Allowing for the exchange and archival of models in a computer readable form is a key strategic goal in bioinformatics, because of the associated improvements in scientific record accuracy, the faster iterative process of scientific development, and the ability to combine models into large integrative models. However, for CellML models to be useful, tools which can process them correctly are needed. Due to some of the more complex features present in CellML models, such as imports, developing code ab initio to correctly process models can be an onerous task. For this reason, there is a clear and pressing need for an application programming interface (API), and a good implementation of that API, upon which tools can base their support for CellML. We developed an API which allows the information in CellML models to be retrieved and/or modified. We also developed a series of optional extension APIs, for tasks such as simplifying the handling of connections between variables, dealing with physical units, validating models, and translating models into different procedural languages. We have also provided a Free/Open Source implementation of this application programming interface, optimised to achieve good performance. Tools have been developed using the API which are mature enough for widespread use. The API has the potential to accelerate the development of additional tools capable of processing CellML, and ultimately lead to an increased level of sharing of mathematical model descriptions. Systems of differential algebraic equations (DAEs) are one common formalism for expressing such models; they can be written in the general form F(x', x, t) = 0, where F is a function, t is the independent variable, x is the vector of state variables, and x' is the vector of derivatives of the state variables. DAE systems are often broken up into individual equations, each of which holds true. Systems of DAEs are used to model a wide variety of biological processes, across a diversity of scales. For example, at one extreme there are models describing the action of ion channels, and at the other extreme, models of processes at much larger scales. Historically, DAE models have been exchanged and archived by publishing equations, constant values, initial conditions, specific protocols, and other associated information in a scientific paper. Someone wanting to independently compute results from the published model then needs to convert that model back into a computer program. This process is both time-consuming and error prone.
Reviewers are unlikely to check that the published model accurately corresponds to numerical results presented in the paper. Likewise, it becomes prohibitively expensive to do integrative biology, as building large integrative models requires combining many separately published models. CellML is an XML-based format intended to address these problems. However, for the scientific advantages of using formats such as CellML and SBML for mathematical model exchange to be fully realised, it is important that software used by modellers is able to read and write models in these formats. It is also important that the scientific community has the ability to easily develop software which relies on the existing databases of models in these formats. Using an Application Programming Interface (API) simplifies the task of processing an XML language, and thus APIs are important to the exchange of information. Supporting CellML correctly can be a difficult task, due to some of the more complex features in the CellML language. It is therefore important that software developers do not need to re-invent the same functionality every time they develop a new tool. We thus present both an API for working with CellML models, and an efficient implementation of that API. SBML is another XML-based language for exchanging mathematical models, aimed primarily at biochemical reaction networks. The CellML API is a platform and programming language independent description of interfaces, with attributes and operations on the interfaces. These attributes and operations are used to retrieve information about the model, or alternatively to manipulate the model in memory. The overall architecture of the API consists of a core API, along with a series of extension APIs. Interfaces representing elements in a CellML model inherit from the CellMLElement interface. This interface provides functionality which is useful on all elements. For example, it provides the ability to insert or remove any of the child elements of the element concerned, and to set temporary user data annotations, identified by a unique key, against the elements. These annotations do not form part of the in-memory DOM representation, and so do not, for example, appear in the generated XML when the model is serialised. The interfaces for CellML elements which have a mandatory name attribute all inherit from the NamedCellMLElement interface. This interface provides a name attribute (which can be fetched or set), and inherits from the CellMLElement interface. For example, CellMLComponent inherits from NamedCellMLElement, because the CellML specifications require that all component elements have a name attribute. For each type of child CellML element allowed by the CellML specification, the interface for the parent element has a read-only attribute for retrieving all the CellMLElements of that type. The returned set implements an interface specific to the type of element expected. For example, component elements can contain variable elements, so the CellMLComponent interface has an attribute called variables, of type CellMLVariableSet. These specific types of set follow an inheritance hierarchy parallel to those of the element objects in the set. Each set interface has a corresponding iterator interface, which allows each object to be fetched in sequence. Because the iterator interface is specific to the object being fetched, the required interface is returned, avoiding the need to call QueryInterface (see the section on the object model and memory management for information on QueryInterface). However, it is also possible to use the less specific (ancestors in the interface inheritance hierarchy) set interfaces to retrieve a less specific (ancestor) iterator object. Iterators derived from NamedCellMLElementIterator also provide interfaces for fetching elements by name.
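As a rough, hypothetical illustration of the set/iterator pattern just described: the following C++ sketch uses invented type and method names, modelled loosely on the description above, and is not the actual CellML API.

```cpp
// Hypothetical sketch only: the type and method names below are invented to
// illustrate the set/iterator pattern described in the text; they are not the
// actual CellML API signatures.
#include <iostream>
#include <string>
#include <vector>

// Toy stand-in for a named element (cf. NamedCellMLElement).
struct Component {
    std::string name;
};

// Toy stand-in for a typed element set with its own iterator
// (cf. a component set and its component iterator).
class ComponentSet {
public:
    void insert(Component c) { items_.push_back(std::move(c)); }

    class Iterator {
    public:
        explicit Iterator(const std::vector<Component>& v) : v_(v) {}
        // Returns the next element, or nullptr when the set is exhausted,
        // mirroring the "fetch next" style of the API's iterators.
        const Component* next() { return pos_ < v_.size() ? &v_[pos_++] : nullptr; }
    private:
        const std::vector<Component>& v_;
        std::size_t pos_ = 0;
    };

    Iterator iterate() const { return Iterator(items_); }

    // Fetch-by-name, as provided by iterators derived from the
    // named-element iterator interface.
    const Component* getByName(const std::string& name) const {
        for (const auto& c : items_)
            if (c.name == name) return &c;
        return nullptr;
    }

private:
    std::vector<Component> items_;
};

int main() {
    ComponentSet components;
    components.insert({"membrane"});
    components.insert({"sodium_channel"});

    auto it = components.iterate();
    while (const Component* c = it.next())
        std::cout << c->name << "\n";

    if (const Component* c = components.getByName("membrane"))
        std::cout << "found: " << c->name << "\n";
}
```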
All descendant iterator interfaces provide more specific fetch by name operations. Set interfaces also provide facilities for modifying the relevant sets by inserting CellML elements. Because order is not important to the meaning of the model, the iteration and insertion facilities provide no control over the actual order of the elements in the model. CellML makes heavy use of the namespace facilities in XML; CellML elements and attributes are placed in the CellML namespace. In addition, CellML models commonly contain metadata encoded in RDF. Our implementation of the CellML API also provides an interface allowing access to and modification of the RDF triples found in a model. CellML 1.1 provides for components and physical units to be imported into models from other models. An import identifies another model and the components or units to be drawn from it. The result of supporting CellML 1.1 is that processing one mathematical model can require that more than one CellML file be examined. To deal with this issue, the API introduces the following two concepts: an imported model is said to be instantiated once it has been loaded. When all imports required for a mathematical model have been loaded (including models which are imported by an imported model), the model is said to be fully instantiated. The API has an operation for selectively instantiating particular imports, as well as an operation for fully instantiating the model. For imports that are instantiated, the model element is also accessible, as well as the components they import. We have included three separate attributes for sets of components in the model, with three corresponding sets of units:\u2022 local component set - contains only the components in the particular CellML file (excluding imported components);\u2022 model set - contains all components which are in the local set, and also the import component elements in the same file; and\u2022 full set - contains all components in the model, across all files making up the model. Where the model containing a component is uninstantiated, the import component element is provided by iterators. When a model is instantiated, the components in the imported model are returned by the iterators, and in addition, these models are examined to identify further imported models to search for components, as appropriate. The three corresponding sets of units follow the exact same semantics as the sets of components, except over units rather than components. The interfaces defined in the API all use the inheritance capabilities of OMG IDL to derive from a base interface, called IObject. IObject is modelled after the similarly named interface in the XPCOM object model. The IObject interface is used to provide interfaces for basic common facilities relating to the object underlying the interface, such as maintaining the reference count (as discussed later), and providing a unique identifier for each object. This unique identifier is useful for determining if two interface references describe the same object, and for building data structures which require that objects can be compared. API implementations use reference counting to determine when objects are no longer needed and can be destroyed. It is worth noting that while IObject provides facilities for reference counting, many programming languages perform automatic garbage collection. When using a direct bridge to these languages, the wrapper code will automatically call add_ref and release_ref on behalf of the user, and so the need for explicit memory management is avoided.
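As a rough, generic illustration of the reference-counting contract described above (and of how wrapper code can manage it automatically), the following C++ sketch uses a simplified, invented stand-in rather than the actual CellML API interfaces.

```cpp
// Hypothetical sketch only: a generic, simplified picture of the
// add_ref/release_ref contract described in the text, not the real API.
#include <cstdio>

// Minimal stand-in for an IObject-style reference-counted interface.
class RefCounted {
public:
    void add_ref() { ++count_; }
    void release_ref() {
        if (--count_ == 0) delete this;   // destroy when the last reference is gone
    }
protected:
    virtual ~RefCounted() = default;
private:
    unsigned count_ = 1;                  // the creator holds the initial reference
};

class Model : public RefCounted {
public:
    void describe() const { std::puts("a model object"); }
};

// RAII guard: the kind of thing a language bridge or C++ wrapper might do on
// the user's behalf, so explicit add_ref/release_ref calls are not needed.
template <typename T>
class Ref {
public:
    explicit Ref(T* raw) : ptr_(raw) {}                                // adopt the initial reference
    Ref(const Ref& other) : ptr_(other.ptr_) { if (ptr_) ptr_->add_ref(); }
    Ref& operator=(const Ref&) = delete;                               // kept minimal for the sketch
    ~Ref() { if (ptr_) ptr_->release_ref(); }
    T* operator->() const { return ptr_; }
private:
    T* ptr_;
};

int main() {
    Ref<Model> model(new Model());   // reference released automatically at scope exit
    model->describe();
}
```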
For example, the Java bridge makes use of the finalisation facilities in Java, combined with the memory management facilities provided by the Java Native Interface, so that Java users do not need to explicitly modify reference counts. In addition to the reference counting scheme, the IObject interface also provides a QueryInterface operation. This operation is used to ask an object if it supports a particular interface, and if it does, to provide an interface representation. As discussed earlier, the API is often accessed through wrapper code, and so users of the API should always perform QueryInterface operations on API interfaces, rather than directly using the language-specific casting mechanisms. Objects which are created by API implementations exist purely in memory (whether a model is created ab initio, or loaded from a file). Modifications can be made to the model in memory. The original file will only be updated if the application uses the API to serialise the CellML model back to XML, and then writes that XML to disk, replacing the original file. Likewise, if the same model is loaded twice, there will be two separate, and independent, instances of the model in memory. Modifying one instance will not automatically change the other instance. Where a model is imported, a separate instance of the imported model exists for each import element, and for each instance of the importing model. However, all elements, sets, and iterators in the core CellML API are 'live', in the sense that making any change to an in-memory instance of a model through the API will immediately affect responses from the API, even if the element, set or iterator was retrieved prior to when the change was made. For example, if an iterator is created, and has iterated through all units elements but one in a model, and that remaining units element is deleted, and the next element is retrieved from the iterator, it will return a null value, signifying that there are no remaining units elements to iterate. Our implementation makes consistent use of exception safety techniques. The language independent IDL based interfaces do not provide a solution to the 'bootstrap' problem of how an interface is initially obtained, for example, the interface for creating a new model. The solution to this problem is language dependent. In each language, we provide functionality to retrieve a pointer to a bootstrap interface. For example, in C++, this is obtained by a method defined in a header. The bootstrap interface is defined in IDL, and therefore standardised across all language bindings. Each extension API has a separate bootstrap interface. Our implementation of the API is not designed to allow for two writes to occur concurrently, or for a read to occur concurrently with a write, on the same model. Applications accessing the same model on multiple threads need to either protect all access to the API with a mutex, or more efficiently, use a read-write lock to ensure that there is no activity concurrent with a write. In addition to the core API, we have also produced APIs to provide services which are beyond the scope of the core API. The core API does not depend upon the extensions, and so individual API implementations can choose not to support all extension APIs.
However, all extensions depend upon the core API, and some extensions also depend on other extensions. The Annotation Tools (AnnoTools) API provides the ability to allocate and release a set of annotations, without needing to worry about interfering with other annotations being placed by independent calls to the same code, or about needing to individually remove all annotations left on objects. The IDL specification for the AnnoTools API can be found in the file interfaces/AnnoTools.idl, in the CellML API source tree. AnnoTools implementations generate a unique prefix for each AnnotationSet, and allow the user to set annotations with that prefix. They keep an internal list of all annotations which were added, and clear all annotations in the AnnotationSet when the AnnotationSet is destroyed. The AnnoTools API also includes facilities for more easily setting and retrieving string, integer, and floating point annotations. The CellML Variable Association Service (CeVAS) API facilitates the treatment of interconnected CellML variables as the same mathematical variable. These variables may come from different components, some of which may be imported from different models. The IDL specification for the CeVAS API can be found in the file interfaces/CeVAS.idl, in the CellML API source tree. CellML 1.0 and 1.1 require that variables which are connected to variables in other components have a public or private interface value of 'in' or 'out'. Whether the public or private interface applies depends on the encapsulation relationship between the components. In CellML, all 'in' interfaces must be connected to an 'out' interface, encapsulation is always acyclic, and valid CellML models have a finite number of variable elements. This means that, in a complete and valid model, there is always a variable in each connected network of variables that has no 'in' interfaces. This variable is called the source variable, and is used by CeVAS as a representative of all variables connected (directly or indirectly) to it. The interface allows users to supply a CellML Model interface, and pre-compute which variables are connected. All variables connected to a particular variable can be iterated, and the source variable can be retrieved. This is implemented using an efficient disjoint sets algorithm, which allows for inverse Ackermann amortised time merges of sets. Initially each variable is placed in its own set, and sets are merged as connections are processed; the pre-computation for a model with n components and m connections therefore runs in O((n + m) \u03b1(n + m)) time, where \u03b1 is the inverse Ackermann function. Note that the inverse Ackermann function grows very slowly, so in practice the cost is essentially linear. The CellML Units Simplification and Expansion Service (CUSES) API provides facilities for processing physical units in a CellML model. The IDL specification for the CUSES API can be found in the file interfaces/CUSES.idl, in the CellML API source tree. CellML has a set of built-in units. These units are defined in terms of the SI base units. In addition, the modeller can define their own derived units (for example, g.m2.s-1), or a new base unit. However, when processing models, it is important to know what the relationship between connected variables is, so the appropriate conversions can be performed, if necessary. For example, when a variable in metres is connected to a variable in millimetres, tools are expected to insert an implicit conversion factor, so the same variable is compatible across the two components. CUSES allows tools to implement this more simply. All units are firstly expanded to be expressions in terms of the base units. SI prefixes are converted to multipliers.
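To make this expansion step concrete, here is a minimal, hypothetical C++ sketch of representing an expanded unit as a multiplier times a product of powers of base units; it is illustrative only and is not the CUSES implementation.

```cpp
// Hypothetical sketch only: a simplified picture of representing a unit as a
// multiplier times a product of powers of base units, as described above.
#include <iostream>
#include <map>
#include <string>

struct CanonicalUnit {
    double multiplier = 1.0;                  // SI prefixes folded in here
    std::map<std::string, double> exponents;  // base unit name -> exponent
};

// Two units are dimensionally equivalent if their base-unit exponents match.
bool dimensionallyEquivalent(const CanonicalUnit& a, const CanonicalUnit& b) {
    return a.exponents == b.exponents;
}

// Conversion factor from 'from' to 'to', assuming dimensional equivalence and
// no offset (offset units such as celsius would need extra handling).
double conversionFactor(const CanonicalUnit& from, const CanonicalUnit& to) {
    return from.multiplier / to.multiplier;
}

int main() {
    CanonicalUnit metre{1.0, {{"metre", 1.0}}};
    CanonicalUnit millimetre{1e-3, {{"metre", 1.0}}};

    std::cout << std::boolalpha
              << dimensionallyEquivalent(metre, millimetre) << "\n";  // true
    std::cout << conversionFactor(metre, millimetre) << "\n";         // 1000
}
```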
All units are converted to a canonical form, consisting of the product of powers of base units, each base unit occurring at most once, possibly with a single multiplier and/or offset. The base units, and their corresponding exponents, are exposed to users of the API in an enumerable list of base unit instances. Facilities are provided to enquire whether two units are dimensionally equivalent. This is useful for determining if a connection is valid. The necessary offset and multiplier needed to perform a conversion can also be retrieved. The Validation Against CellML Specification Service (VACSS) API accepts files which are putatively CellML files, and identifies whether or not the CellML is valid; where the file is not valid, it attempts to build a list of the problems. The IDL specification for the VACSS API can be found in the file interfaces/VACSS.idl, in the CellML API source tree. Errors that can be detected fall into two types:\u2022 representational - errors relating to the encoding of CellML in XML, such as essential elements or attributes which are missing, or illegal extraneous elements; and\u2022 semantic - higher-level errors, where the basic elements of the CellML are in the correct form, but there are inconsistencies, such as references to names which are required to exist but do not, or violations of any of the numerous rules specified in the CellML Specification. Semantic warnings, such as about potential units problems in mathematical equations, due to dimensional inconsistency, are also made available. One task which is common to many applications is to convert the fragments of MathML embedded in CellML documents into fragments of text in some other linear text-based representation, such as programming language source code. The MathML Language Expression Service (MaLaES) API provides functionality to assist with this task. MaLaES makes use of CeVAS in order to identify the source variable corresponding to each MathML ci element (i.e. a reference to a variable by identifier). It then makes use of AnnoTools to retrieve an annotation (which the user can set) containing the symbol to be used for that variable in the output. MaLaES also provides facilities allowing the value referenced by a ci element to be converted into the units of the source variable, and also for the result of an expression to be converted. The IDL specification for the MaLaES API can be found in the file interfaces/MaLaES.idl, in the CellML API source tree. In order to allow for conversion into many different languages to occur, MaLaES uses a specification in a custom format called MAL (MathML to Language). The MAL description describes the mapping between MathML elements and their forms in the output text-based representation, as well as describing the precedence of each operation, what strings are used to begin and end groupings of low precedence operators inside a higher precedence operator, and the format of conversions. The MAL is precompiled into an efficient in-memory representation, which can then be used to generate output. Another common task is to convert an entire CellML model into code in a procedural programming language, capable of solving the model. The CellML Code Generation Service (CCGS) API simplifies this task. The IDL specification for the CCGS API can be found in the file interfaces/CCGS.idl, in the CellML API source tree. The CCGS is specialised for the common case where there is a single independent variable and the index of the DAE system is at most one.
Users of the CCGS obtain a CodeGenerator object. On this CodeGenerator interface, it is possible to specify a wide range of different attributes about the language to be generated. This means that code can be generated for a wide range of procedural programming languages. Because CCGS relies upon MaLaES to translate individual mathematical expressions into the correct text-based form, the user also needs to supply a MAL description for the language of interest. CCGS uses the terminology 'computation target' (represented by a ComputationTarget interface pointer) to represent anything which is required to be computed to evaluate the equations in a CellML model. There is not a one-to-one relationship between variables in the CellML model and computation targets. For example, there may be a variable called x, with an initial value of 0, and then an equation defining its rate of change; in that case both x and its derivative are computation targets (and t, the independent variable, is also treated as a computation target for consistency). Note that when a variable is used in several components, but the variables are connected together, there will only be one computation target for all the variable elements. CCGS gives every computation target a degree. All computation targets which have a corresponding computation target of higher degree are treated as being state variables, while computation targets with a lower degree computation target are treated as being rates. Computation targets which have both a higher and lower degree computation target are in the unique position of being both a rate and a state variable. CCGS generates code for this case by making use of the standard technique for transforming an ordinary differential equation (ODE) system with higher derivatives into an equivalent ODE system with no higher than first order derivatives. Initially, CellML variables which are in fact constant in value are identified and marked as such. Variables which are computable using only constants are in turn classified as constants (with this process repeating until no further constants can be identified). It is possible that a system of simultaneous equations may need to be solved in order to determine n otherwise unknown constants from n equations. This is done in our implementation using a heuristic algorithm which guarantees that the smallest possible systems are found when the largest indivisible system needed to be solved has at most three equations with three unknowns, and has given good results in our testing. After identifying the order in which equations or systems need to be solved, code is generated for them. This is done using one of three different patterns supplied to CCGS by the application. Where CCGS needs to compute a computation target y using an equation like y = f, CCGS will use the assignment pattern to directly assign into the symbol for y. In other cases, the equation might be in the form f = g, in which case the univariate solve pattern is used to compute y. Finally, for systems of equations, the multivariate solve pattern is used. Any computation targets which are not constants, states, nor rates, are classified as being 'algebraic computation targets'. In the same fashion as is done for constants, CCGS works out a directed acyclic graph for the order in which systems or equations are used to work out the rate and algebraic computation targets, using the constants, states, and the independent variables. However, these computations are split into two code fragments.
The first code fragment contains all computations necessary to compute the rates from the states, constants, and independent variables, while the second code fragment computes any remaining algebraic computation targets not computed in the first code fragment. This separation allows for more efficient processing of models, because at many time steps, the integrator may not want to report back any results, and so there is no need to evaluate computation targets that are not required to compute the next time step. CCGS has the capability to automatically assign indices into four different arrays:\u2022 constants array - stores the values of any computation targets which do not depend on the independent variable, or upon any of the rate or state computation targets;\u2022 states array - used to store the values of each state computation target;\u2022 rates - used to store the values of the rate of change corresponding to each state computation target; and,\u2022 algebraic array - used for all remaining variables. The CodeGenerator object allows the first index to be assigned in each array to be set. The generated code is returned as several fragments: firstly, a fragment which initialises the constants; secondly, the fragment which computes the rates; and thirdly, the fragment for the remaining variables. The final code fragment contains any functions which needed to be generated (using a pattern supplied to the CCGS) in order to evaluate the code. These functions can then be called from the univariate and multivariate solver patterns, and also in MAL specifications, such as those for evaluating definite integrals. As a CCGS implementation processes models, it will also check for and report back on certain error conditions, such as models which have extraneous equations (reported as being overconstrained), or models which have too few equations to compute all computation targets (i.e. underconstrained models). As CCGS only supports DAEs of index one or lower, it will, for example, report that the model is incorrectly constrained if the model is a valid index two DAE. Defining a new programming language for use with MaLaES requires setting up a MAL description of the language, and configuring this through the API. However, it is convenient to be able to exchange this information with other users, in order to allow for the definition of arbitrary languages by the user. The CellML Language Export Definition Service (CeLEDS) allows for the MAL description of a language to be embedded in an XML file. In addition, it provides a generalised dictionary service, to allow information required to generate output for different languages to be provided to the consumer of the CeLEDS service. The IDL specification for the CeLEDS API can be found in the file interfaces/CeLEDS.idl, in the CellML API source tree. The CeLEDSExporter service builds upon that offered by CeLEDS to support full code generation (based upon CCGS). Instead of programmatically setting attributes on the CodeGenerator interface, all information is specified in a standardised XML format, along with the MAL description. In addition, CeLEDS contains information on the super-structure of the program, including unchanging fragments of code which are required to allow the program to run (such as any supplementary function definitions). This means that all the information required to generate code for a language is encapsulated in a single XML file, which can be read in at run-time.
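To visualise the generated-code structure described earlier (the constants/states/rates/algebraic arrays and the separate rate and algebraic fragments), here is a hand-written C++ schematic; it is hypothetical, is not CCGS output, and does not correspond to any particular CellML model.

```cpp
// Hypothetical schematic only: the shape of the procedural code the text
// describes, written by hand for illustration; not actual CCGS output.
#include <cstdio>

constexpr int N_STATES = 1, N_CONSTANTS = 1, N_ALGEBRAIC = 1;

// Fragment 1: initialise constants and state initial values.
void initialise(double* constants, double* states) {
    constants[0] = 2.0;   // e.g. a rate constant k
    states[0]    = 1.0;   // e.g. initial value of x
}

// Fragment 2: compute rates from states, constants and the independent variable.
void computeRates(double t, const double* constants, const double* states,
                  double* rates) {
    (void)t;
    rates[0] = -constants[0] * states[0];   // e.g. dx/dt = -k * x
}

// Fragment 3: compute remaining algebraic quantities (only when needed,
// e.g. at reporting steps).
void computeVariables(double t, const double* constants, const double* states,
                      double* algebraic) {
    (void)t;
    algebraic[0] = states[0] * states[0];   // e.g. y = x^2
}

int main() {
    double constants[N_CONSTANTS], states[N_STATES],
           rates[N_STATES], algebraic[N_ALGEBRAIC];
    initialise(constants, states);
    // One crude explicit-Euler step, just to show how the fragments are called.
    computeRates(0.0, constants, states, rates);
    states[0] += 0.01 * rates[0];
    computeVariables(0.01, constants, states, algebraic);
    std::printf("x=%g y=%g\n", states[0], algebraic[0]);
}
```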
Users can easily modify these definitions in order to customise aspects of code generation, and to create new definitions for conversions to other languages. Due to this standardisation of how conversions are specified, we have created a small repository of CeLEDS/CeLEDSExporter compatible conversion definitions, including definitions for C, MATLAB, and Python. This repository can be found in the CeLEDS/languages subdirectory of the CellML API source code. We have also created a definition for FORTRAN77, although it requires further testing before being considered ready for widespread use. The CellML Integration Service (CIS) API provides an interface for performing simulations of models, and receiving asynchronous notifications as results become available. The IDL specification for the CIS API can be found in the file interfaces/CIS.idl, in the CellML API source tree. CellML Model interface pointers are given to CIS, which then creates a CellMLCompiledModel object. The application then specifies the algorithm to be used, and the parameters of the simulation. The application may also choose to override an initial value without recompiling the model. The IntegrationProgressObserver interface can be implemented by the application, and given to the CellMLIntegrationRun interface prior to starting the simulation. This interface receives information about the values of constants which were computed, as well as the results from each time-step, and an indication of whether the integration has succeeded or failed (with an error message in the latter case). Our implementation of the CellML API internally makes use of CCGS to generate C code. The C code is then compiled using a compiler. For example, in one of our applications based on the CellML API, we bundle a stripped down version of the C compiler from the Free/Open Source GNU Compiler Collection (gcc) for this purpose. We have also developed an extensive test-suite for validating API implementations. For the core API (including DOM and MathML DOM), and some extension APIs, a program included with the test-suite makes use of every attribute and operation in the API, and checks that invariants which are expected to be true, if the implementation behaves correctly, are in fact true. In addition, the test-suite also includes a series of small programs, as well as a series of inputs to those programs, and expected outputs. For example, the program CellML2C is a small, command-line driven test program, that takes a CellML model as input, and uses the CCGS extension API to generate C code from it. The test-suite calls CellML2C with 17 different models (each of which is crafted to contain peculiarities to test different features). Our API implementation is automatically tested against this test-suite after every commit, on Linux, Mac OS X, and Windows XP, with ad hoc testing on a range of other platforms. The API implementation currently passes all of the above tests. In the future, we plan to add tests which can confirm that the numerical results provided by implementations of the CellML Integration Service are correct, in a similar vein to the SBML test-suite (http://sourceforge.net/projects/sbml/files/test-suite/2.0.0%20alpha/). The CellML API is, to our knowledge, the first publicly available API that supports the processing of CellML models. However, there are other similar projects designed to process mathematical models in different encodings. LibSBML is, in many ways, analogous to the CellML API, except that it processes SBML models.
As CellML provides a higher level of domain independence than SBML, it is expected that tools used across many different domains of expertise will need to exchange CellML models. In addition, some tools, such as generic modelling environments, may need to import and export both SBML and CellML models, in which case both a CellML API implementation and libSBML can be used together in the same program. Aside from the difference in language support, there are some additional major differences between the CellML API and libSBML. The CellML API emphasises the technical separation of the interface definition and implementations of the interface. With the CellML API, adding a new language binding involves developing code to automatically produce a wrapper from the IDL description, rather than using SWIG. In addition, there are plans to make CCGS support dedicated DAE solvers such as IDA, which can solve implicit DAE systems directly. Another important future improvement is the addition of more language bindings. The choice of language bindings will depend on the input we receive from the community, but could, for instance, include Python, Ruby, and Haskell. Other important improvements for future consideration include improving the documentation of the API, providing better support for working with metadata, and providing utilities for easier symbolic manipulation of mathematics. The CellML API and its implementation are available, and are ready for widespread adoption by the community. Developers of tools which process mathematical models are strongly encouraged to support CellML, so that users of the tool can participate in model sharing, with all the associated benefits to the scientific community. The CellML API and its implementation provide facilities which should make this task substantially easier.\u2022 Project name: The CellML API. Version 1.6 was the latest release at the time of writing.\u2022 Project home page: http://www.cellml.org/tools/api/\u2022 Operating systems: The API implementation can be built on any POSIX like system, including Linux, Mac OS X, and Cygwin. It can also be built using Microsoft Visual C++ 2008. It has been tested on Linux, AIX, Windows (XP and Vista) and Mac OS X.\u2022 Programming language: The API is in IDL (language independent), and the implementation in C++, callable through bridges from C++, Java, JavaScript, and from other languages through CORBA.\u2022 Other requirements: The build requires the omniidl tool, which is part of omniORB, as well as other freely available dependencies.\u2022 License: The CellML API and implementation can be redistributed under any one of: the GNU GPL version 2 or later, the GNU LGPL version 2.1 or later, or the Mozilla Public License version 1.1. This allows the API and implementation to be used in a wide range of public and private research and applied settings.\u2022 Any restrictions to use by non-academics: There are no restrictions on usage of the API. Redistribution requires compliance with one of the licenses above, as well as the licenses of any dependencies being used. The source code and change history is available on SourceForge. Documentation on how to build the API on various platforms is included in the 'docs' directory of the source tree. In addition, the documentation extracted from the IDL files using the Doxygen tool is available in HTML form.
Links to these resources can be found on the project home page. AnnoTools: The Annotation Tools; API: Application Programming Interface; CCGS: The CellML Code Generation Service; CeLEDS: The CellML Language Export Definition Service; CellML: An XML-based language for describing mathematical models; CeVAS: The CellML Variable Association Service; CIS: The CellML Integration Service; CORBA: Common Object Request Broker Architecture; CUSES: The CellML Units Simplification and Expansion Service; DAE: Differential Algebraic Equation; DOM: Document Object Model; IDL: Interface Definition Language; MAL: MathML to Language; MaLaES: The MathML Language Expression Service; ODE: Ordinary Differential Equation; RDF: Resource Description Framework; URI: Uniform Resource Identifier; VACSS: The Validation Against CellML Specification Service; XML: The Extensible Markup Language; XPCOM: Cross Platform Component Object Model. AKM developed most of the API and its implementation and wrote the first draft of the manuscript. MH contributed to the development of an earlier API, and provided guidance on the development of the API discussed here. JM contributed to the development of the API and implementation. AR developed the CeLEDS and CeLEDSExporter modules. AG contributed documentation for the API. PN, RB, AG, and JC contributed advice on the development of the API and its implementation. All authors provided input into this manuscript."}
+{"text": "During surgery for spinal deformities, accurate placement of pedicle screws may be guided by intraoperative cone-beam flat-detector CT. The purpose of this study was to identify appropriate paediatric imaging protocols aiming to reduce the radiation dose in line with the ALARA principle. Using the O-arm\u00ae, three paediatric phantoms were employed to measure CTDIw doses with default and lowered exposure settings. Images from 126 scans were evaluated by two spinal surgeons and scores were compared (Kappa statistics). Effective doses were calculated. The recommended new low-dose 3-D spine protocols were then used in 15 children. The lowest acceptable exposure as judged by image quality for intraoperative use was 70\u00a0kVp/40\u00a0mAs, 70\u00a0kVp/80\u00a0mAs and 80\u00a0kVp/40\u00a0mAs for the 1-, 5- and 12-year-old-equivalent phantoms respectively. Optimised dose settings reduced CTDIw doses 89\u201393%. The effective dose was 0.5\u00a0mSv. The optimised protocols were used clinically without problems. Radiation doses for intraoperative 3-D CT using a cone-beam flat-detector scanner could be reduced at least 89% compared to manufacturer settings and still be used to safely navigate pedicle screws. It is well-documented that radiographic examinations for spinal deformity during childhood increase the lifetime risk of cancer, particularly breast cancer. New technologies have advanced surgery for spinal deformities. The use of pedicle screws requires accurate placement to avoid damage to the spinal cord and the large vessels in front of the spine. To secure the correct placement of pedicle screws, intraoperative imaging is imperative. For many years, fluoroscopy was the only available intraoperative imaging modality. However, in the last two decades, navigation based on preoperative CT has been developed.
However, it is not widely used, possibly due to the time-consuming registration process (coupling of the preoperative CT with patient anatomy) of up to 15\u201320 min per vertebra, making it almost impossible to use in spine deformity surgery with instrumentation of 10\u201315 vertebrae in a single operation. On the other hand, the need for navigation is most pressing in young patients who have both deformed and small pedicles. Intraoperative cone-beam flat-detector X-ray systems have changed spinal surgery and are rapidly being implemented worldwide. These provide both 2-D fluoroscopic and 3-D images, which when coupled to a navigation system add significant value to surgical outcomes. Zhang et al. have reported on the patient dose delivered by this type of system. Our aim was to identify appropriate intraoperative exposure settings for children for a cone-beam flat-detector system, aiming to reduce the radiation dose. The O-arm\u00ae cone-beam flat-detector system was used. The system has an O-ring type gantry and an X-ray tube equivalent to 32\u00a0kW, and an X-ray filter of 4\u00a0mm Al. The flat-panel has an amorphous silicon-based detector of 30\u00a0cm\u2009\u00d7\u200940\u00a0cm with a 0.194-mm pixel pitch. The system can be configured in 2-D fluoroscopic mode or 3-D mode. In this study, only 3-D mode was used and only the low-definition mode. In the low-definition mode, 192 single images (compared to 392 images in high-definition mode) of slice thickness 0.833\u00a0mm were recorded in a 360-degree rotation of the detector and the radiation source with an image matrix of 512\u2009\u00d7\u2009512. The time for image acquisition was 13 s with a beam on-time of 3.91 s. The source-to-isocentre distance of the O-arm was 64.7\u00a0cm and the source-to-detector distance 116.8\u00a0cm. The collimated X-ray beam was 22.18\u2009\u00d7\u200916.62\u00a0cm and the 3-D reconstructed volume was 20\u00a0cm\u2009\u00d7\u200915\u00a0cm. Besides the 16 default imaging protocols, the O-Arm allows manual adjustment of kVp and mA. With the manual setting, the kVp can be varied between 50 and 120 with 1-kVp intervals, and the mA can be varied between 10 and 120 in predefined steps. The manufacturer\u2019s advice for abdominal imaging is to use the default values for small (waist circumference 12\u201326\u00a0cm) or the default values for medium (waist circumference 20\u201334\u00a0cm). The studies were conducted using polymethyl-methacrylate (PMMA) phantoms to estimate patient dose equivalence. Four cylindrical PMMA phantoms were made with diameters of 10\u00a0cm, 16\u00a0cm, 24\u00a0cm and 32\u00a0cm. Effective doses were estimated using the ICRP 103 tissue weighting-factors, and the dose to the phantom was expressed as CTDIw. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were measured. SNR was calculated as S1/\u03c31, where S1 represents the mean pixel value within a region of interest (ROI) in the PMMA phantom with all PMMA rods inserted at the position of the bone samples and \u03c31 represents the standard deviation of pixel values within the same ROI. CNR was estimated as |S2\u2212S1|/\u03c31, where S2 is the mean pixel value in the pure bone sample. For this purpose, ImageJ version 1.32 (http://rsb.info.nih.gov/ij/) was used. Image quality was evaluated independently by two spinal surgeons, both with more than 10\u00a0years of experience in spinal surgery and intraoperative imaging. The image quality was deemed adequate if the outlines of the bones were visible and if it was possible to discern whether the screw penetrated the bone.
If the outlines of the bone sample and the screw position could be visualised with certainty, the image quality was deemed adequate. In all other cases, the image quality was deemed inadequate. Interobserver agreement was measured with the Kappa statistic. A total of 126 scans were evaluated. The scans started with the factory default settings, followed by a series of scans with decreasing mA until the lowest possible tube current of 10\u00a0mA was reached. If image quality was still acceptable, additional scans were acquired at 10\u00a0mA while reducing kV until an unacceptable image quality was achieved. The scan parameters with the lowest dose to the phantom for which the image quality of all four bone samples was accepted were recorded as the suggested optimum low-dose settings for that specific phantom size. Scans were then acquired with all holes filled with solid PMMA rods to measure image noise and dose. After institutional review board approval, the recommended new low-dose 3-D spine protocol was tested in clinical practice in 15 children with severe deformities in whom it would not have been possible to place pedicle screws without navigation. This would otherwise require conventional preoperative CT with a higher dose. Only the 16-cm and 24-cm protocols were used. The average age was 11.5\u00a0years. An average of 9.7 vertebrae were scanned using an average of 2.5 acquisitions. As expected, SNR and CNR decreased nonlinearly with decreasing radiation dose. The observers agreed that the lowest acceptable dose for intraoperative imaging was 70\u00a0kVp/40\u00a0mAs for the 1-year-old-equivalent phantom (10\u00a0cm), 70\u00a0kVp/80\u00a0mAs for the 5-year-old-equivalent phantom (16\u00a0cm) and 80\u00a0kVp/40\u00a0mAs for the 12-year-old-equivalent phantom (24\u00a0cm). The interobserver agreement for all scans had a kappa of 0.70. In all 15 operations using the optimised setting, the spinal surgeon achieved adequate intraoperative imaging with the cone-beam flat-detector scanner followed by navigation using the Stealth system. The phantom model allowed a systematic evaluation of different dose settings from the highest to the lowest possible. It also allowed different bone inserts containing pedicle screws to be evaluated with regard to image quality. This is especially important as the cornerstone of image guidance during pedicle screw insertion is the ability to clearly identify the cortex relative to the pedicle screw. Streak artefacts from metallic implants are a major concern, and any realistic model needs to incorporate this. Our phantom model could only be used to evaluate 90-degree perforations of the cortex due to the small size of the bone samples. However, to compensate for this limitation, for the sample with the perforating screw, the perforation was made as small as possible. The CTDI100 does not measure the actual absorbed dose received by the individual patient but should be considered an index for comparisons. The homogeneous PMMA does not simulate the different tissue types and heterogeneities in vivo. The CTDI100 will underestimate the radiation dose because the 100-mm pencil ion chamber only partly covers the collimated beam width. For X-ray beams wider than 40\u00a0mm, a pencil chamber longer than 100\u00a0mm is required. The CTDIw will underestimate the ideal CTDIw by approximately 20% at a collimated beam width of 20\u00a0mm.
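For reference, the standard definitions behind these quantities can be written as follows; these are general CT dosimetry conventions, not values or formulas specific to this study.

```latex
% Standard CT dose index definitions (general conventions, not study-specific):
\[
  \mathrm{CTDI}_{100} \;=\; \frac{1}{N\,T}\int_{-50\,\mathrm{mm}}^{+50\,\mathrm{mm}} D(z)\,\mathrm{d}z ,
\qquad
  \mathrm{CTDI}_{w} \;=\; \tfrac{1}{3}\,\mathrm{CTDI}_{100,\mathrm{centre}}
                     \;+\; \tfrac{2}{3}\,\mathrm{CTDI}_{100,\mathrm{periphery}} ,
\]
where $D(z)$ is the dose profile along the scanner axis, $N$ is the number of
simultaneously acquired slices and $T$ is the nominal slice thickness. The
100\,mm integration range corresponds to the length of the pencil ionisation
chamber, which is why a beam wider than the chamber leads to the
underestimation noted above.
```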
Dixon has described this limitation of the CTDI concept for wide beams in detail. The patient dose of the cone-beam system relative to CT has been described by Zhang et al. Would a vertebra have been a better bone sample for the phantom part of the study? This would without doubt have better represented the normal anatomy. The cortical thickness of the femoral head is greater than the cortical thickness of the vertebra, and this would have influenced the accuracy of the imaging representation. The cortical bone density of the femoral head has been reported with only slight variation in several recent studies, whereas lumbar vertebral bone density increases between age 5\u00a0years and age 17\u00a0years, reaching approximately 1.0\u00a0g/cm2, and shows much greater variation. A substantial number of pedicle screws are misplaced when navigation is not used. The rate of pedicle perforation varies between 10% and 40%. There are currently few alternatives to the system used in our study. Abul-Kasim et al. have also investigated dose reduction in spinal imaging. The cylindrical-equivalent diameter of a body is defined as the diameter of the cylinder that the body would form if laterally compressed into a cylinder of equal cross-sectional area. This definition should not be confused with the definition of the patient-equivalent cylinder based on the patient\u2019s weight and height as suggested by The Danish National Board of Health. The phantoms in our study simulated the lumbar anatomy. The results may therefore not adequately reflect the dose reduction achievable in the thorax. Based on the level of scattered radiation in the thorax compared with the lumbar level, we would anticipate the dose in the thorax to be even lower. The lower-contrast images of the cone-beam scanner are not comparable with a standard CT, but they proved sufficient for guiding pedicle screw placement in this study. The parameters of the optimised protocols were to some extent dictated by limitations of the generator. Even at the lowest possible tube current (10\u00a0mA), all phantoms at all voltages above 90\u00a0kVp showed acceptable image quality. If the mA were lowered further, a lower patient dose might have been achievable since the same dose at a higher kVp would result in a lower absorbed dose to the patient. Dose to the patient may have been reduced further by using a bow-tie filter typically employed in standard CT scanners. With optimised exposures at 70\u00a0kVp/40\u00a0mAs for a 1-year-old-equivalent phantom, 70\u00a0kVp/80\u00a0mAs for a 5-year-old-equivalent phantom and 80\u00a0kVp/40\u00a0mAs for a 12-year-old-equivalent phantom, radiation doses for intraoperative 3-D imaging with a cone-beam flat-panel detector scanner were reduced at least 89% and could still be used to safely guide the placement of pedicle screws. The effective doses for optimised scans were estimated at approximately 0.5\u00a0mSv and were between 91\u201394.5% lower than the effective dose estimated for the manufacturers' default exposure values."}
In summary, these results provide insight into the adaptive capacity of the RMS and point to an additional stem cell source for future brain repair strategies.The presence of neural stem cells in the adult brain is currently widely accepted and efforts are made to harness the regenerative potential of these cells. The dentate gyrus of the hippocampal formation, and the subventricular zone (SVZ) of the anterior lateral ventricles, are considered the main loci of adult neurogenesis. The rostral migratory stream (RMS) is the structure funneling SVZ progenitor cells through the forebrain to their final destination in the olfactory bulb. Moreover, extensive proliferation occurs in the RMS. Some evidence suggest the presence of stem cells in the RMS, but these cells are few and possibly of limited differentiation potential. We have recently demonstrated the specific expression of the cytoskeleton linker protein radixin in neuroblasts in the RMS and in oligodendrocyte progenitors throughout the brain. These cell populations are greatly altered after intracerebroventricular infusion of epidermal growth factor (EGF). In the current study we investigate the effect of EGF infusion on the rat RMS. We describe a specific increase of radixin Neurogenesis persists in two distinct niches in the adult brain; the dentate gyrus of the hippocampal formation and the subventricular zone (SVZ) of the forebrain All animal work was conducted according to European and Swedish animal welfare regulations and approved by the Gothenburg committee of the Swedish Animal Welfare Agency (application no. 145/10 and 32/11).Male Wistar rats, 7\u20138 weeks old, with an average weight of 221\u00b114 grams were used in the study. All surgeries were performed under ketamine and xylazine anesthesia, and all efforts were made to minimize suffering. The animals were divided into six groups receiving either vehicle or EGF (360 ng/day), for 1, 7, or 14 days. The surgeries were performed as described in Lindberg et al http://www.ncbi.nlm.nih.gov/tools/primer-blast/) and Primer express software (Applied Biosystems) and synthesized by Eurofins MWG Operon . Primer sequences were designed with a melting temperature of 60\u00b0C spanning an intron site when possible and the efficiency of all primers was tested using a dilution curve. Sequences used: AAAGCCCAGGCCCAATGCGC and ACAAGTCCTTGTGCTTCCGCAGAC , TGTGATGGACTCCGGAGACGGG , TGTAGCCACGCTCGGTCAGGAT , AACCCATCACCATCTTCCAGGAGCG , ACATACTCAGCACCAGCATCACCCC , CCTACCACAGCGTGTTTTGGA (Radixin forward), TCCCCCTGTGTTCTTCATGC (Radixin reverse). For both primers we used an initial cycle of 15 min at 95\u00b0C, followed by repeated cycles of 94, 55, and 72\u00b0C . A final continuous fluorescence reading at descending temperatures (melting curve) starting at 95\u00b0C was acquired to ensure proper primer functionality and to exclude primer-dimer formation. Fold-changes were calculated using Cq and the \u0394\u0394Cq method normalizing against two reference genes (GAPDH and \u03b2-actin) according to Vandesompele et al. Quantitative polymerase chain reaction was performed on micro-dissected lateral ventricle wall and olfactory bulb tissue bilaterally and processed according to Lindberg et al The sections were washed in Tris-buffered saline (TBS). For all immunostainings where rabbit \u03b1-radixin or mouse \u03b1-radixin were used, the washing step was followed by antigen retrieval in 0.01 M sodium citrate pH6 at 97\u00b0 for 20 minutes. 
Sections used for BrdU immunofluorescence were treated with 2 M HCl for 30 minutes at room temperature or 37\u00b0C, followed by neutralization in 0.1 M borate buffer. All sections were blocked for 1 h in TBS with 3% donkey serum and 0.1% Triton-X (0.2% for radixin antibodies) at room temperature, prior to primary antibody incubation. Confocal images were acquired using a 63\u00d7 objective. Each fluorochrome was recorded individually in sequential scan mode to avoid channel mixing. Images were acquired from the RMS in z-stacks of 2 \u00b5m increments. For radixin/DCX and radixin/Olig2 quantifications, cell nuclei were selected at random in the RMS using ToPro3, and selected cells were analyzed for either single or double immunoreactivity. BrdU/radixin and BrdU/Olig2 colabeling was analyzed by selecting at least 250 BrdU positive cells per animal for colocalization of immunosignals. For explant cultures, the dissected tissue was mixed 1\u22363 with Matrigel (BD Bioscience). 15 \u00b5l of the Matrigel-tissue mixture was dispensed in 8-well chamber slides (BD Bioscience), followed by 10 minutes polymerization at 37\u00b0C. Explants were grown in Neurobasal A medium, supplemented with B27, Glutamax and PenStrep (all from Invitrogen). The explant cultures were kept at 37\u00b0C in 5% O2 and 1% CO2 for 72 h. At the end of the experiment the explants were fixed in 4% PFA for 20 minutes. Immunocytochemistry was performed in the chamber slides. Following three 15-minute washes in PBS, unspecific antibody binding was blocked by incubation with 3% donkey serum and 0.2% Triton-X in PBS for three hours at room temperature. Explants were then incubated with primary antibodies. In all comparisons, except radixin/BrdU cell ratios, the 2-tailed Student's t-test was employed. The Mann-Whitney U-test was used for the comparison of radixin/BrdU ratios. All statistical calculations and graphical visualizations were performed in GraphPad Prism 5 (GraphPad Software). All error bars represent standard error of the mean (SEM). Cell numbers and animal weights are presented as mean\u00b1SE. Differences of p<0.05 were considered statistically significant (*). To assess structural changes in the RMS induced by EGF infusion we quantified the area and cell density at two locations along the RMS; one proximal to the SVZ (bregma +2.50 mm) and a distal location closer to the olfactory bulb (bregma +3.00 mm). In the proximal RMS we observed a reduced cell density along with an increase in RMS cross-sectional area after 7 days of EGF infusion. DCX mRNA expression in the SVZ was also examined. Previous reports indicate that EGFR activation can shift cell commitment towards the generation of oligodendrocyte-like progenitors. Astrocytes, which form the glial sheaths between chains of migrating cells in the RMS, were analyzed by immunofluorescence against GFAP. Under EGF stimulation, GFAP expressing cells appeared less regular in shape with thicker processes, indicating a more reactive phenotype. Sox2+ cells coexpressing radixin were found that correspond to neuroblasts, as well as a high expressing population (Sox2high) representing neural stem cells. Both Sox2high and Sox2-negative cells were observed within the radixin+/Olig2+ population. To further characterize the Olig2+ cells expressing radixin, we studied the contribution of radixin and Olig2-expressing cells to the proliferative pool. We infused EGF for 1 or 7 days and administered three injections of BrdU during the last 24 hours before perfusion.
The percentage of Olig2+ cells in the BrdU+ population increased substantially after 7 days of EGF infusion . The majority of migrating cells expressed radixin and a small portion was radixin+/Olig2+ cells was measured in the entire RMS after just 24 hours of EGF infusion. The radixin+/Olig2+ cells in the control RMS were sparse, while EGF-induced radixin+/Olig2+ cells were numerous, occasionally formed migratory chains and showed activation in the form of pERM expression. Cells of the same phenotype displayed migratory properties and chain formation in explant cultures.In the current study we describe EGF-induced changes in radixin expressing cell populations in the RMS. The EGF stimulated RMS was less dense and displayed reduced DCX expression, while the number of radixin+ cells in the distal RMS after 1 day of EGF infusion speaks in favor of recruitment of local RMS cells, rather than migration of newly generated cells from the SVZ, which would require more time. BrdU labeling showed an increase in the portion of Olig2+ cells among the newly born cells at this location, which in turn indicates a local increase in proliferation within the distal RMS. The reduction in neuroblasts observed after 7 days was further enhanced after 14 days, while the increase detected in the radixin+/Olig2+ population plateaued after 7 days of EGF infusion. These data suggests that the generation of Olig2 expressing cells is a separate event from the decline in DCX expressing cells and not a direct fate shift.The RMS is highly proliferative, which is mostly attributed to rapidly dividing progenitors; however, the presence of slowly dividing stem cells has also been reported + cells can be reverted to neuronal fate determination after EGFR ligand withdrawal Progenitor cells in the RMS react to EGF with increased production of Olig2 cells expressing Sox2, similar to their SVZ counterparts + neuroblasts. Radixin-positive cells co-expressing Olig2 were found throughout the brain. However, in these cells radixin appeared to be unphosphorylated. Under EGF stimulation, the pERM+ neuroblasts are lost; although the newly generated population of Olig2+ cells in the RMS now expresses phosphorylated radixin, suggesting a cytoskeletal activation in these cells. In addition, radixin+/Olig2+/pERM+ cells were more tightly organized than radixin+/Olig2+/pERM\u2212 cells of the control RMS, and occasionally grouped together into chains. Similar cells were also observed in parts of the striatum of EGF-treated animals adjacent to the SVZ and RMS. Migration of Olig2+ progenitors after EGF stimulation has previously been described in the striatum +/Olig2+/pERM+ cells in the striatum were directed towards the RMS. Moreover, in explant cultures radixin+/Olig2+/pERM+ cells are migratory and can form chain-like structures in vitro. Although we present no direct evidence for a direct connection between radixin and EGF receptor activation, a possible link can be seen in the interaction between the EGFR and the ERM binding phosphoprotein 50 (EBP50), putatively modulating radixin function in vitro. Conversely, increased expression of EBP50 has been shown to downregulate EGFR activity suggesting a negative feedback loop for EGFR by EBP50 Activation of ERM proteins by phosphorylation enables interaction with actin and transmembrane proteins resulting in cytoskeleton rearrangement. 
During tangential neuroblast migration the cytoskeleton needs continuous reorganization when protruding the leading process of the cell, and retracting the trailing end. In the control RMS, we observed that radixin is phosphorylated in DCX-expressing neuroblasts. Previous studies infusing EGF into the adult brain have largely overlooked the effects in the RMS, except for the reduction in neuroblasts."} +{"text": "We retrospectively analyzed 25 years of experience with the button Bentall procedure in patients with aortic root pathologies. Even though this procedure has become widespread, there are only a few very long term follow-ups available in the clinical literature, especially regarding single surgeon results. Between 1988 and 2013, a total of 147 patients underwent the Bentall procedure by the same surgeon. Among them there were 62 patients with Marfan syndrome. At the time of the surgery the mean age was 46.5 ± 17.6 years. The impact of surgical experience on long-term survival was evaluated using a cumulative sum analysis chart. The Kaplan-Meier estimated overall survival rates for the 147 patients were 91.8 ± 2.3 %, 84.3 ± 3.1 %, 76.3 ± 4.9 % and 59.5 ± 10.7 % at 1, 5, 10 and 20 years, respectively. Multivariate Cox regression analysis identified EuroSCORE II over 3 % (OR 4.245, 95 % CI 1.739–10.364, p = 0.002), acute indication, use of deep hypothermic circulatory arrest, chronic kidney disease and early complication as significant risk factors for late overall death. The survival rate for freedom from early complication was 94.3 ± 2.2 %, 88.0 ± 3.3 %, 82.9 ± 4.7 % and 69.2 ± 8.4 % at 1, 5, 10 and 20 years. The main pathological findings of the aortic wall were cystic medial degeneration in 75 %, fibrosis in 6 %, atherosclerosis in 13 % and no pathological alteration in 6 % of the samples. The overall survival rate was significantly lower in patients operated in the first 15 years compared to patients operated in the last decade (log-rank p = 0.011). According to our long-term follow-up, the Bentall operation provides an appropriate functional result by resolving the lesions of the ascending aorta. Based on our results, 25–30 operations are necessary to gain the level of confidence and experience needed to achieve better results on long-term survival. In addition, we found that there were no co-morbidities affecting the survival of Marfan patients and that prophylactic aortic root replacement ensures a longer survival among patients with Marfan syndrome. Aortic root replacement procedures include the modified version of the original Bentall operation. We retrospectively analyzed 25 years of experience with the Bentall procedure. Between 1988 and 2013, a total of 147 patients underwent aortic root reconstruction at the Heart and Vascular Center, Semmelweis University. Of these patients, 111 (75 %) were male and 36 were female (25 %). Among them, 62 patients (42 %) had Marfan syndrome, and the diagnosis of the syndrome was verified in every case with the use of the original and later the revised Ghent criteria. An electronic Aortic Root Reconstruction Registry database has been established which includes demographics, types of indications, comorbidities, procedure specifications and follow-up information.
Medical records and patient history were used to identify comorbidities. Patients' follow-ups included clinical examination, computed tomography scans and transthoracic echocardiography, and they were treated by the same surgeon. During the studied period, Tirone David valve-sparing procedures were performed in 27 patients. These patients were excluded from the study. The mean patient age at the time of the operation was 46.3 ± 17.5 years; 8 patients (5 %) were older than 70 years. We found that 80 patients had hypertension (54 %), 11 patients had diabetes mellitus (7 %) and 6 had hyperlipidaemia (4 %). 21 patients were suffering from coronary artery disease (14 %) at the time of the operation, 4 from chronic kidney disease (3 %) and 7 patients had suffered a cerebrovascular accident (5 %). The cardiovascular functional status was determined according to the New York Heart Association (NYHA) classification. EuroSCORE II was calculated in all patients according to the EuroSCORE II protocol. We measured body parameters of the patients (height and weight) and calculated the Body Mass Index; the mean BMI was 25.6 ± 5.6 kg/m2. The operation was performed via median sternotomy, and cardiopulmonary bypass was applied by cannulating the ascending aorta, aortic arch, femoral artery, or axillary artery, and the right atrium. Myocardial protection was provided by anterograde, retrograde, or simultaneously anterograde and retrograde intermittent cold hyperkalemic blood cardioplegia. Deep hypothermic circulatory arrest (DHCA) was used in 28 patients (19 %). The coronary buttons were excised with the aortic wall patch and mobilized to facilitate reimplantation. The proximal anastomosis was implemented with pledgeted interrupted sutures. The distal, grafto-aortic anastomosis was accomplished with continuous sutures, and the coronary button anastomoses were also performed with continuous sutures. A St. Jude composite graft was used in 101 patients; a Carbomedics composite graft in 16 patients; a Carbomedics Carbo-Seal conduit in 18 patients; a Vascutek Gelweave composite graft in 5 patients; a Hancock bioprosthesis with a Vascutek straight graft in 5 patients; and a Shelhigh bioconduit in 2 patients. The 7 bioprosthetic valve-and-graft conduits were assembled by the surgeon during the operation. Concomitant procedures were performed in 39 patients (27 %); these included mitral valve surgery in 11, coronary artery bypass grafting (CABG) in 12, hemi- and total arch replacement in 10, pacemaker implantation in 2, and other procedures in 4 patients. Operative data are described in Table . We selected all-cause mortality as the endpoint. No patient was lost during the follow-up period. Death was detected from the death records of the Hungarian National Health Insurance Fund, which provided accurate mortality data for every patient. The follow-up period for overall survival was measured from the date of the operation to the date of death, or of last contact alive. Follow-up ended in October 2013. 118 patients (80 %) of the survivors had complete follow-up. The mean length of the follow-up periods was 84 ± 56 months. All continuous variables were expressed as mean ± SD or median with interquartile ranges, whereas categorical variables were expressed as percentages. The Shapiro-Wilk test was used to check the normality of the data before further analysis. For the analysis of the data we used Student's t-test, the Mann–Whitney U-test and the χ2 test. Univariate and multivariate analyses of predictors for mortality were performed using a Cox regression model to evaluate the association between independent risk factors and mortality. Survival curves were created using the Kaplan-Meier method and compared with the log-rank test.
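A minimal sketch of the survival workflow described above (Kaplan-Meier estimation, log-rank comparison and Cox regression) using the lifelines package is shown below; the data frame, column names and values are invented for illustration and do not reproduce the study's analysis.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: months to death/censoring, event indicator, covariates.
df = pd.DataFrame({
    "months":        [12, 84, 156, 60, 210, 48, 96, 132, 24, 180],
    "death":         [1, 0, 1, 0, 0, 1, 0, 1, 1, 0],
    "euroscore_gt3": [1, 0, 0, 1, 0, 1, 0, 1, 1, 0],
    "acute":         [0, 1, 0, 0, 0, 1, 0, 1, 1, 0],
})

# Kaplan-Meier estimate for the whole cohort.
km = KaplanMeierFitter()
km.fit(df["months"], event_observed=df["death"])
print(km.survival_function_)

# Log-rank comparison between two groups.
grp = df["euroscore_gt3"] == 1
res = logrank_test(df.loc[grp, "months"], df.loc[~grp, "months"],
                   event_observed_A=df.loc[grp, "death"],
                   event_observed_B=df.loc[~grp, "death"])
print(res.p_value)

# Multivariate Cox regression on the remaining columns.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()
```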
Multivariate logistic regression analysis was used to identify the risk factors of early complication. Variables with p < 0.10 in univariate Cox regression were further examined with multivariate Cox regression analysis. A p-value of <0.05 was considered statistically significant. The impact of surgical experience on survival was evaluated using a time-adjusted cumulative sum (CUSUM) complication chart; the statistical principles were adapted from the comprehensive tutorial by Rogers et al. In this chart, Xi = 0 for an operation without complication and Xi = 1 for the presence of an early complication, and p0i denotes the predicted probability of the development of an early complication within 30 days after surgery. The graph starts at zero and is incremented by 1 − p0i when an early complication occurs and decremented by p0i for an uncomplicated operation. Causes of early death were low cardiac output syndrome (n = 1), diffuse hypoxic brain damage (n = 2), ventricular arrhythmia (n = 1) and excess bleeding in 1 patient. Early complications included postoperative resternotomy for bleeding (n = 9), atrial and ventricular arrhythmias (n = 13), renal failure needing hemodialysis (n = 1), cerebral infarction (n = 2) and pericardial tamponade (n = 1). NYHA class III and IV (p = 0.050), dissection, concomitant CABG surgery and concomitant mitral valve surgery were the independent risk factors for early complication, which was defined as a possible life-threatening complication within 30 days of initial hospitalization. There were 23 late deaths (death after one year), of which 18 were cardiac-related (78 %) and 5 non-cardiac-related (22 %). The most common causes of cardiac-related death were aneurysm rupture of the descending aorta, sudden cardiac death, congestive heart failure, CVA and low cardiac output syndrome. The Kaplan-Meier estimated overall survival rates were 91.8 ± 2.3 %, 84.3 ± 3.1 %, 76.3 ± 4.9 % and 59.5 ± 10.7 % at 1, 5, 10 and 20 years, respectively. Multivariate Cox regression analysis identified EuroSCORE II over 3 % (p = 0.002), acute indication, DHCA use, chronic kidney disease, and early complication as significant risk factors for late overall death. Patients were divided into those with Marfan syndrome (Marfan group) and those without Marfan syndrome (non-Marfan group). In multivariate analysis, EuroSCORE II over 3 % (p = 0.004) and acute indication were the independent risk factors for mortality, and the overall survival of the Marfan group was similar to that of the non-Marfan group (log-rank p = 0.877). Overall survival did not differ between the histological groups (CMD vs. fibrosis log-rank p = 0.197; CMD vs. atherosclerosis log-rank p = 0.400; fibrosis vs. atherosclerosis log-rank p = 0.876). During the operation, aortic wall and valve samples were collected and sent for pathological examination. The main pathological findings were cystic medial degeneration (CMD) in 85 samples (75 %), fibrosis in 7 samples (6 %), atherosclerosis in 15 samples (13 %) and no pathological alteration in 7 samples (6 %).
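The time-adjusted CUSUM chart described in the statistical methods above follows a simple running-sum rule (start at zero, add 1 − p0i for an operation with an early complication, subtract p0i for an uncomplicated one). A minimal sketch of that rule is given below; the function name and the outcome/risk sequences are hypothetical.

```python
def risk_adjusted_cusum(outcomes, predicted_risks):
    """Time-ordered risk-adjusted CUSUM: outcomes[i] is 1 if operation i had an
    early complication (Xi = 1) and 0 otherwise; predicted_risks[i] is p0i."""
    curve, total = [0.0], 0.0
    for x_i, p0_i in zip(outcomes, predicted_risks):
        total += (1.0 - p0_i) if x_i == 1 else -p0_i
        curve.append(total)
    return curve

# Hypothetical sequence of consecutive operations and their predicted complication risks.
outcomes = [0, 0, 1, 0, 0, 1, 0, 0, 0, 0]
risks    = [0.05, 0.10, 0.20, 0.05, 0.08, 0.30, 0.05, 0.06, 0.04, 0.07]
print(risk_adjusted_cusum(outcomes, risks))
```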
Overall survival rates between the patients with CMD, patients with fibrosis and patients with atherosclerosis were similar (CMD vs. fibrosis log-rank p = 0.197; CMD vs. atherosclerosis log-rank p = 0.400; fibrosis vs. atherosclerosis log-rank p = 0.876). The other 105 patients were operated in the next decade (71 %). The overall survival rate was significantly lower in patients operated in the first 15 years compared to patients operated in the last decade (log-rank p = 0.011). The mean survival in our study was 190 ± 10.3 months (IQ range: 170–210 months). There are only a few papers published about the Bentall procedure which include a mean follow-up duration of more than six years (Table 6). In our study, we report that the independent predictors of early complication include NYHA class III and IV, dissection and concomitant CABG or mitral valve surgery. Poor cardiac condition is often associated with early arrhythmias, while aortic dissection is a well-known risk factor for bleeding events. Numerous studies have shown different independent risk factors for death after the Bentall procedure. There are many debates as to whether Marfan syndrome has an influence on long-term survival after the Bentall procedure. Although several papers have been published, in our series the overall survival of Marfan patients was similar to that of non-Marfan patients (log-rank p = 0.877). The long-term survival of cardiac procedures is also examined from the perspective of the duration of the operation. In our study, overall survival did not differ according to the histology of the aortic wall (CMD vs. fibrosis log-rank p = 0.197; CMD vs. atherosclerosis log-rank p = 0.400; fibrosis vs. atherosclerosis log-rank p = 0.876). Cystic medial degeneration was found in 85 samples (75 %), which is also known as the pathological change of the aortic wall in Marfan patients. During cardiothoracic training, surgeons need to gain experience in the Bentall procedure. Hence the surgeon embarks on a 'learning curve', and unfortunately, his or her patients may possibly be at a higher risk. In our series, the overall survival rate was significantly lower in patients operated in the first 15 years than in those operated in the last decade (log-rank p = 0.011). Our paper has some limitations that are unavoidable in a retrospective study. Observational data do not provide causal evidence. After fifteen years of observation time only 10 % of patients were still at risk, which is a limitation. Due to incomplete data collection, several important variables, such as ejection fraction after surgery and pharyngeal temperature, were omitted from the statistical analysis. The histology of the aortic wall could not be included in the logistic regression analysis due to 33 missing samples. Analyses of variables with very few events, such as NYHA class III and IV in Marfan patients (n = 2), could not be carried out in the logistic regression analysis due to the low number of events. In our study only two pseudoaneurysm formations were observed at the graft anastomosis sites. In summary, as this study was performed using the data of one surgeon, it gives us the opportunity to describe the learning curve of the button Bentall procedure and to follow the results over the course of his career. In our series, 25–30 operations gave the surgeon the confidence and experience to achieve better results on long-term survival. In addition, we found that there were no co-morbidities affecting the survival of Marfan patients and that prophylactic aortic root replacement ensures a longer survival among patients with Marfan syndrome.
Finally, upon the statistical results we discussed histological changes of the aortic wall, which has never been described in long-term button Bentall follow-ups."} +{"text": "Crizotinib is recommended as first-line therapy in ROS1-driven lung adenocarcinoma. However, the optimal first-line therapy for this subgroup of lung cancer is controversial according to the available clinical data.Here, we describe a 57-year-old man who was diagnosed with stage IIIB lung adenocarcinoma and EGFR/KRAS/ALK-negative tumors. The patient received six cycles of pemetrexed plus cisplatin as first-line therapy and then pemetrexed as maintenance treatment, with a progression-free survival (PFS) of 42\u00a0months. The patient relapsed and underwent re-biopsy. EZR-ROS1 fusion mutation was detected by next-generation sequencing (NGS).\u00a0The patient was prescribed crizotinib as second-line therapy and achieved a PFS of 6\u00a0months. After disease progression, lorlatinib was administered as third-line therapy, with a favorable response.Prolonged PFS in patients receiving pemetrexed chemotherapy might be related to the EZR-ROS1 fusion mutation. Lorlatinib is an optimal choice in patients showing crizotinib resistance. Prolonging the overall survival (OS) of advanced lung cancer patients remains a challenge. The advent of targeted therapeutic approaches led to the classification of NSCLC into subgroups according to factors such as histology and the molecular makeup of the tumor. C-ros oncogene 1 (ROS1) rearrangements are detected in approximately 1\u20132% of patients with NSCLC , 4. PrecWe herein report a case of advanced lung adenocarcinoma with EZR-ROS1 rearrangement treated by first-line pemetrexed/cisplatin and then pemetrexed mono-drug for maintenance therapy. After progression, crizotinib was used as second-line treatment, and lorlatinib as third-line treatment. The patient showed an excellent response and achieved long-term progression-free survival (PFS).2) and cisplatin (75\u00a0mg/m2), achieving a partial response every month and cisplatin (75\u00a0mg/m2) at a local hospital. At last, the pemetrexed-based regimen for this patient resulted in a PFS of 42\u00a0months.A 57-year-old man with a 20-pack-year smoking history presented to the hospital in March 2013 with a persistent cough for 2\u00a0months and a palpable right cervical mass for 4\u00a0days. Enhanced computed tomography (CT) showed a 9\u2009\u00d7\u200911\u00a0mm nodule in the lower lobe of the left lung and multiple enlarged lymph nodes Fig.\u00a0. The sernse Fig. . Maintennth Fig. . Assessmnth Fig. . The patv1.1). In April 2017, serum CEA levels were increased, and a lung CT scan showed an enlarged nodule /CT scan revealed a 32\u2009\u00d7\u200917\u00a0mm nodule in the left lower lobe with intense uptake of 18F\u2013fluorodeoxyglucose and multiple hypermetabolic lymph nodes Figs.\u00a0 and 2a. ule Fig. . The PFSule Fig. , which wWritten informed consent was obtained from the patient for the publication of this case report.Crizotinib is approved by the US Food and Drug Administration (FDA) for the first-line treatment of NSCLC patients with ROS1 rearrangements with a median PFS of at least 19.2\u00a0months , 11. PemThe molecular mechanism underlying the prolonged PFS in ROS1-driven lung cancer treated with pemetrexed remains unknown. Low thymidylate synthase (TS) expression is positively related to the efficacy of pemetrexed in NSCLC patients and may predict a longer PFS , 16. 
SevIn the present case, NGS identified EZR-ROS1 as the oncogenic driver mutation, and crizotinib, a multi-targeted TKI, showed robust and clinically meaningful efficacy endpoints in this patient . The advLorlatinib is a third-generation macrocyclic ALK/ROS1 TKI with a novel chemical scaffold that shows potent antineoplastic activity against all known mutations resistant to first- and second-generation inhibitors \u201327.\u00a0The CEA is a glycosylphosphatidylinositol-anchored glycoprotein that is normally produced during fetal development, and its production stops before birth . High leIn conclusion, the present ROS1-driven lung cancer case showed a positive response to pemetrexed-based chemotherapy as first-line and maintenance treatment, with a PFS of 42\u00a0months. Crizotinib and lorlatinib were used as second- and third-line therapies and both elicited a favorable response. Data from this case suggested that pemetrexed-based regimens may not be inferior to crizotinib as first-line treatment for ROS1-driven lung cancers, and lorlatinib may be an alternative treatment choice in crizotinib-resistant disease. Secondary biopsy after each relapse combined with NGS would help to dynamically monitor the histological and molecular evolution of lung cancer and may benefit the individualized selection of treatment regimens. CEA serum levels may be useful for monitoring relapse in lung adenocarcinoma."} +{"text": "PPARG2 rs1801282 (C>G); PPARGC1A rs8192678 (C>T); TCF7L2 rs7903146 (C>T); LDLR rs2228671 (C>T); MTHFR rs1801133 (C>T); APOA5 rs662799 (T>C); GCKR rs1260326 (C>T); FTO rs9939609 (T>A); MC4R rs17782313 (T>C) were genotyped in 168 pregnant Caucasian women with or without GDM by High Resolution Melting (HRM) analysis. A significant correlation was observed between TT genotype of TCF7L2 gene and increased risk of GDM (OR 5.4 [95% CI 1.5\u201319.3]). Moreover, a significant correlation was observed between lipid parameters and genetic variations in additional genes, namely, PPARG2 , APOA5 , MC4R , LDLR , and FTO . Our findings support the association between TCF7L2 rs7903146 variant and an increased GDM risk. Results about the investigated genetic variants provide important information about cardiometabolic risk in GDM and help to plan future prevention studies.Gestational diabetes mellitus (GDM) is the most frequent metabolic disorder in pregnancy. Women with a GDM history are at increased risk of developing diabetes and cardiovascular diseases. Studies have demonstrated a significant correlation between several genes involved in the metabolic pathway of insulin and environmental factors. The aim of this study was to investigate the relationship between clinical parameters in GDM and variants in genes involved with nutrients and metabolism. Several variants Gestational diabetes mellitus (GDM) is defined as \u201cdiabetes diagnosed in the second or third trimester of pregnancy that is not clearly overt diabetes\u201d .GDM approximately affects 7% range 2\u201318%) of all pregnancies \u201318% of a, and GDM HNF4A, GCK, HNF1A, IPF1, HNF1B, and NEUROD1. Common variants in MODY genes, especially in HNF1A and GCK, have been correlated with GDM and related traits ). Although the rs7903146 in the TCF7L2 was the only variant associated with GDM, a significant relation was also found between APOA5 \u22121131T>C polymorphism, and MC4R rs17782313 polymorphism with 3rd trimester HDL-C in women with GDM. 
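Associations such as the one reported above for the TCF7L2 TT genotype (OR 5.4 [95% CI 1.5–19.3]) are typically derived from a 2×2 genotype-by-outcome table. The sketch below shows the standard odds ratio with a Wald 95% confidence interval; the counts are invented for illustration and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: TT genotype vs. other genotypes in GDM cases and controls.
print(odds_ratio_ci(a=12, b=5, c=72, d=79))
```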
Then, a significant correlation was observed between PPARG2 rs1801282 and LDLR rs2228671 with 3rd trimester LDL-cholesterol levels. Finally, a significant correlation was observed between FTO rs9939609 and 3rd trimester LDL-cholesterol levels (Supplementary Tables\u2009\u20092-3). Our results show that the associations between SNPs and BMI are not statistically significant (Supplementary Tables\u2009\u20094-5).Only one out of the nine investigated gene variants, namely, rs7903146 in theC carrier in APOA5 and HDL-C ; T carrier in LDLR and LDL-C ; A carrier in FTO and LDL-C , respectively.After adjusting for age and BMI, the significant association was still present between \u03c72 test p value > 0.05) both in cases and in controls, except for TCF7L2 rs7903146 (CA) in cases.All the investigated genotype frequencies were within the Hardy\u2013Weinberg equilibrium and glucagon-like peptide 1 (GLP-1) in intestinal endocrine cells.The TCF7L2 T allele has been associated with an increased risk of T2DM. In fact, Grant et al. [ TCF7L2, including the rs7903146 (C>T), show significant associations with T2DM, and these findings have been replicated by numerous groups, demonstrating the rs7903146 (C>T) SNP as one of the most important T2DM susceptibility variants. Dahlgren et al. [T allele results in overexpression of TCF7L2 in the pancreatic \u03b2 cell, with a reduced insulin secretion in turn increasing hepatic glucose production. In addition, several studies previously observed a strong association between this variant and GDM in different ethnic groups [C>T) and GDM, previously reported by the literature. In addition, our results demonstrated that this association is not mediated by the presence of variants within other genes involved in food metabolism, among those selected, since no one of the other genes investigated in our study was associated with GDM. Future study will need to investigate the possible role of other genes in this condition.The presence of thet et al. reportedn et al. suggestec groups \u201322. ThusPPARG2, APOA5, MC4R, LDLR, and FTO) and lipid parameters within the GDM women group.Nevertheless, our study evidenced also a significant correlation between common genetic variations in several genes per C allele [ MC4R and FTO are associated with severe obesity and metabolic impairment in Caucasians [ AA genotype of rs9939609 in FTO was associated with 2.09-fold risk of CVD in men [As to theC allele . Gene vaucasians . In the D in men . TT genotype of LDLR rs2228671 polymorphismand 3rd trimester LDL-cholesterol levels, it is interesting to note that a recent meta-analysis has established rs2228671 as a protective factor of CHD in Europeans [Regarding our results about the significant correlation observed betweenuropeans . APOA5-1131T>C and MC4R rs17782313 with 3rd trimester HDL-C in GDM women. This result evokes the study [mean common carotid IMT-CCIMT) was higher in subjects without diabetes but with previous GDM (pGDM) than in control subjects. In this study when these variables were introduced in a multiple regression analysis, only oxidized LDL (oxLDL) remained significantly associated with CC-IMT. The increased carotid IMT, a marker of preclinical atherosclerosis [All these findings are of great relevance and the presence of these gene variants in our panel could increase the use of genetic information in clinical practice for GDM patients: several data support the potential cardiometabolic risk in later life in patients with previous GDM. In fact, Mai et al. 
demonstrhe study which haclerosis , shows tclerosis . APOA5-1131T>C SNP modulates the effects of macronutrient intake on BMI and obesity risk in both men and women. In fact, the presence of at least one C allele implicated less weight on a high fat diet than the presence of homozygosity for the T allele. Future studies are needed to evaluate this aspect and to predict increased cardiovascular events risk and diabetes in the mother and offspring [Furthermore, Corella et al. demonstrffspring .As is known the diagnosis of GDM identifies a population at high risk for T2DM and metaGDM represents an important opportunity in the era of \u201cPrecision Medicine.\u201d Further studies are required to examine whether incorporation of our panel genes into an algorithm including genetic, clinic, and metabolic variables will help further improving the identification of women with GDM at early risk of diabetes and CHD; this would allow a better allocation of resources and the implementation of effective and sustainable strategies of primary prevention. In the last decades, the nutrition science is taking off in the prevention of noncommunicable chronic diseases that account for more than 60% of global deaths annually and the The present study has some limitations. First, the pooled sample size for the SNPs was relatively small. Future genome-wide studies with larger sample size are still required to identify genes with smaller allele effects and possibly GDM specific risk variants: this to understand the role of these variants of genes involved with nutrients and metabolism, in the aetiology and progression of GDM. Second, our results show the role of lipids only in GDM because in control group they were unavailable: our study suggests that in the future these parameters should be considered at least during 3rd trimester of pregnancy.Third, additional analyses, such as epigenetic studies, will also be required to elucidate the role of the molecular mechanisms to the disease susceptibility. In fact, as is known, the development of noncommunicable chronic diseases is determined not only by genetic variants but also by epigenetic changes as DNA methylation and histone modifications, in response to diet and environmental conditions. It has become clear that adverse epigenetic effects might also influence the foetus during pregnancy and postnatal early life increasing the susceptibility to chronic diseases such as obesity, T2DM, cardiovascular disease, or clinically relevant phenotypic traits.Fourth, we recruited a larger sample of women with GDM compared to the control group. This limit is because the recruitment was conducted primarily at the Diabetes and Metabolism Unit.Therefore, future studies are warranted for better understanding the GDM pathogenesis considering gene-gene and gene-environmental interactions.Supplementary Tables showing genotypes distribution in study subjects and the correlation between SNPs and clinical data (lipid parameters and BMI) of GDM patients."} +{"text": "The transcriptional repressor Blimp-1 originally cloned as a silencer of type I interferon (IFN)-\u03b2 gene expression controls cell fate decisions in multiple tissue contexts. Conditional inactivation in the mammary gland was recently shown to disrupt epithelial cell architecture. Here we report that Blimp-1 regulates expression of viral defense, IFN signaling and MHC class I pathways, and directly targets the transcriptional activator Stat1. 
Blimp-1 functional loss in 3D cultures of mammary epithelial cells (MECs) results in accumulation of dsRNA and expression of type III IFN-\u03bb. Cultures treated with IFN lambda similarly display defective lumen formation. These results demonstrate that type III IFN-\u03bb profoundly influences the behavior of MECs and identify Blimp-1 as a critical regulator of IFN signaling cascades. By virtue of its ability to recruit chromatin modifiers such as HDAC1/2, LSD-1, and G9a, Blimp-1 governs epigenetic reprogramming required for germ cell specification in the early mouse embryo6. Blimp-1 null embryos die at around embryonic day 10.5 (E10.5) due to defective placental morphogenesis7. Loss of Blimp-1 disrupts specification of the spiral artery-associated trophoblast giant cell lineage (SpA-TGC\u2019s) the crucially important cell subpopulation that invades and remodels maternal blood vessels9. Lineage tracing experiments in combination with expression profiling and single cell RNA-Seq analysis have defined the cell type specific transcriptional signature governing these specialized functional activities9.The zinc finger transcriptional repressor Blimp-1 (PRDM1) originally identified as a post-inductive silencer of interferon beta (IFN-\u03b2) gene expression in virally infected cells12. Recent studies have further characterized Blimp-1-dependent gene expression changes and chromatin remodeling at its transcriptional targets associated with plasma cell maturation13. Similarly Blimp-1 regulates cell fate choices made during differentiation of CD4+ T cell subsets, and controls the balance of cytolytic effectors vs the generation of memory CD8+ T cells18.In contrast in B and T lymphocytes, Blimp-1 function is not required at early stages during lineage commitment. Rather in B cells Blimp-1 directly silences expression of key transcription factors such as c-Myc, Pax5, and CIITA that maintain B cell identity to dramatically shift the developmental programme towards plasma cell terminal differentiation19. Blimp-1 function is required to prevent premature activation of the adult enterocyte biochemical signature19. ChIP-Seq analysis of E18.5 small intestine demonstrate Blimp-1 preferentially binds to promoter regions upstream of genes associated with metabolism20 and interestingly revealed a subset of highly conserved target sites, including the promoters of IFN-inducible components of the MHC class I antigen processing machinery such as Psmb8, Psmb10, Tapbp, and Erap1, that are also recognized by interferon regulatory factor (IRF) -1 a positive regulator of the MHC class I peptide loading pathway21. Thus Blimp-1 occupancy directly antagonizes IRF-1 to prevent premature activation of the MHC class I pathway in fetal enterocytes and maintain tolerance in the neonatal intestine in the first few weeks after birth during colonization of the intestinal tract by commensal microorganisms20.Recent experiments have shown that Blimp-1 governs postnatal reprogramming of intestinal enterocytes22. Recent experiments demonstrate that Blimp-1 expression within a rare subset of luminal progenitors is up-regulated in response to pregnancy hormones and conditional inactivation results in defective mammary gland morphogenesis23. 
Strikingly Blimp-1 functional loss disrupts epithelial architecture and lumen formation both in vivo and in three-dimensional (3D) primary cell cultures23.Survival at these early stages depends on the highly specialized mammary glands that produce milk and enable the mother to feed her newborn offspring. The mammary epithelium contains two structurally and functionally distinct cell subpopulations: the outer myoepithelial/basal cells and the inner luminal cell population. Considerable progress has been directed towards understanding cell fate decisions during ductal morphogenesis, lumen formation, and alveologenesis. The functional contributions made by distinct mammary stem cell subpopulations, including both unipotent and bipotent progenitors have been extensively described in lineage tracing experimentsin vivo, we exploited the inducible ROSA26:CreERT2 allele to activate Cre-mediated Blimp-1 deletion via tamoxifen treatment of 3D mammary epithelial cell (MEC) cultures. As expected Blimp-1 deficient (cKO) MEC cultures display defective lumen formation and fail to establish apical-basal polarity. Surprisingly however functional annotation analysis of up-regulated genes identified highest enrichment scores for categories associated with innate immunity and IFN signaling pathways. This gene list substantially overlaps with those recently described as conserved Blimp-1 targets via ChIP-Seq analysis, including key components of the MHC class I peptide-loading pathway20. Additionally IFN-stimulated genes and pathway regulators including Usp18, Oligoadenylate synthetase (OAS) family members, and the key transcriptional activator Stat1, were identified here as direct Blimp-1 targets. Finally the present experiments demonstrate that Blimp-1 loss of function results in up-regulated expression of double stranded (ds)RNA and type III IFN lambda (IFN-\u03bb). Moreover type III IFN-\u03bb treatment of wild type MEC cultures causes defective lumen formation and maturation. These results demonstrate for the first time that the zinc finger transcriptional repressor Blimp-1 silences type III IFN expression in mammary epithelial cells and establish its actions as a global regulator of the IFN signaling cascade.To further investigate the underlying causes of these tissue disturbances, here we performed transcriptional profiling experiments. To avoid studying possible contributions made by strongly Blimp-1-positive endothelial cells within this highly complex vascularized tissue To learn more about Blimp-1 functional contributions during mammary gland morphogenesis, we used the Illumina array platform to compare transcripts in wild type versus Blimp-1 cKO MEC cultures harvested at Day 3 (D3) and Day 4 (D4), when tissue disturbances become readily visible. The greatest differences in gene expression profiles were detectable at Day 4. The relatively few down-regulated genes (n\u2009=\u2009138) in D4 Blimp-1 cKO MECs showed no significant enrichment for any GO annotated pathways or biological processes and were not investigated further small intestine25. Results above demonstrate increased Stat1 expression readily detectable at Day 3 the earliest time point examined 18.5 small intestine is severely hypomorphic elsewhere20 and FACS sorting is problematic due to relatively weak eGFP fluorescence intensity even in plasma cells. 
For these reasons, we elected to perform qPCR validation of Blimp-1 occupancy at candidate target sites using transiently transfected CommaD\u03b2 cells, an established cell line that shares many functional and behavioral characteristics with bona fide mammary stem cells26, in combination with our expression construct encoding the C-terminal eGFP-tagged Blimp-1 fusion protein and the proven GFP mAb27. Results shown in Supplementary Figure\u00a0Blimp-1 expression in the mammary gland is restricted to a rare subset of luminal progenitors, representing less than 1% of the total luminal cell population recovered from pregnant females29. Our transcriptional profiling experiments shown above reveal that Blimp-1 conditional loss causes up-regulated expression of several dsRNA binding proteins including the DEXD/H box helicases Ddx60, Dhx58 (LGP2), and RIG-1 (Ddx58) Fig.\u00a0. However33. Considerable evidence demonstrates that type III IFN-\u03bb plays a crucial protective role at mucosal barrier surfaces lining the respiratory, gastro-intestinal, and reproductive tracts33. Blimp-1 was previously shown to silence IFN-\u03bb1 expression in human lung epithelial cells34. However in mice IFN-\u03bb1 is a pseudogene. To test whether Blimp-1 regulates type III IFN-\u03bb expression in mouse mammary epithelial cells, here we performed RT-PCR experiments using primers designed to detect IFN-\u03bb2/3 transcripts. As shown in Fig.\u00a0Recent experiments demonstrate that type III IFN-\u03bb via its unique receptor complex selectively activates anti-viral defense pathways in epithelial cells21. Irf-3 and Irf-7 have been shown to induce Type I and Type III IFN expression in virally infected cells35. Interestingly we found here that Irf-1, 2, 3, and 6 are constitutively expressed by mammary epithelial cells whereas Blimp-1 inactivation leads to increased expression of Irf-7 and Irf-9 that recognize viral RNA trigger membrane bound signaling proteins such as MAVS that in turn activate assembly of transcriptional complexes driving IFN-stimulated gene expression. In contrast to these disturbances downstream of viral entry and replication, the striking morphological changes described here associated with up-regulated IFN-\u03bb expression are simply caused by Blimp-1 inactivation. Interestingly Blimp-1 cKO also resulted in accumulation of dsRNA recognized by the J2 monoclonal antibody. Thus, activation of IFN signaling responses could simply be due to the presence of dsRNA.Recent studies implicate a functional relationship between epithelial cell polarity and IFN-\u03bb expression from peroxisomesThe J2 mAb recognizes dsRNA greater than 40\u2009bp in length but its fine specificity remains ill defined. An extensive literature has described diverse host mechanisms responsible for controlling expression of endogenous retroviral sequences scattered across the genome. It is tempting to speculate that as for many other zinc finger proteins, Blimp-1 may directly silence ERV transcription in a sequence specific manner. However mapping ChIP seq peaks to repetitive elements requires the application of sophisticated bioinformatics strategies. An important long term goal is to characterize dsRNAs that accumulate in the absence of Blimp-1 and test this attractive model.23. We now understand in the absence of Blimp-1 that type III IFN-\u03bb secretion by rare luminal progenitors acting in a paracrine fashion has the ability to influence the behavior of neighboring cells. 
Our results demonstrate for the first time that type III IFN-\u03bb not only activates anti-viral responses in mucosal epithelial cells, but also in mammary epithelial cells. While the role played by type III IFNs in innate immune protection at mucosal barrier surfaces has been extensively investigated, relatively little is known about viral infections in the mammary epithelium or how this impacts on neonatal health. It will be important to learn more about protective and potentially negative effects mediated by type III IFN-\u03bb.The present results help to resolve the paradoxical finding previously described in our recent studies, namely how could Blimp-1 inactivation in rare progenitor cell population cause such dramatic widespread phenotypic changes throughout the entire epithelial cell populationPrdm1BEH/+ and Prdm1CA/CA mice23 were crossed with the ROSA26:CreERT2 line47 to generate Prdm1BEH/CA;ROSA26:CreERT2 females. Mice were genotyped by PCR as described in the original reports. All animal experiments were performed in accordance with Home Office regulations and were approved by the University of Oxford Local Ethical Committee.Female C57BL/6 mice (8\u201310 weeks of age) were used as the wild type strain. For inducible Blimp-1 gene deletion in mammary epithelial cell (MEC) cultures, 26. Primary MECs were collected from 15.5- and 16.5-day pregnant mice and cultured as described48. Cells (2\u2009\u00d7\u2009104) were seeded onto growth factor reduced Matrigel to form acini and cultured in growth media supplemented with 2% growth factor reduced Matrigel. Cultures were fed every 48 hrs and grown for 3\u20136 days. Cre-mediated deletion of Blimp-1 in primary MEC cultures (hereinafter referred to as Blimp-1 cKO) was achieved by collecting MECs from Prdm1BEH/CA;ROSA26:CreERT2 mice and treating with 1/1000 dilution of 4-Hydroxytamoxifen (4-OHT) dissolved in ethanol for 24 hrs. As controls, MECs from WT mice were similarly treated with 4-OHT. In some experiments, MitoTracker Red CMXRos (Invitrogen) dissolved in MEC growth media was added at day 2 of culture for 45\u2009min at 37\u2009\u00b0C. Recombinant mouse IFN-\u03b2 , IFN\u03b3 or IFN-\u03bb2 was added on day 0 of culture.CommaD\u03b2 cells were maintained in DMEM/F12 medium supplemented with B27 (GIBCO), 20 ng/mL epidermal growth factor , 20 ng/mL basic fibroblast growth factor and 10\u2009\u03bcg/mL insulin (Sigma-Aldrich) as previously describedFor immunofluorescence staining, acini were fixed with 4% PFA in PBS and permeabilized with 0.5% Triton X-100 in PBS. Fixed cells were blocked with 10% normal goat serum/2% BSA in PBS for 2 hrs at RT, and incubated with Blimp-1, Stat1, pStat1 (Tyr701), Stat2, or dsRNA (J2) antibodies with on column DNase treatment according manufacturer\u2019s protocol. Microarray transcriptional profiling experiments were performed using triplicate RNA samples of each genotype at both Day 3 and 4 as previously described using Illumina Mouse WG-6 v2 Expression BeadChips8. Differential probe expression was determined following rank-invariant normalization using the Illumina custom error model option of Gene Expression Analysis Module V1.6.0 of GenomeStudio V2009 (Illumina) with Benjamini and Hochberg false discovery rate. Probes with significant different expression were analyzed by WebGestalt using default parameters . qPCR analysis was performed on 5 RNA samples for each genotype at day 3 and 4. 
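The Benjamini-Hochberg false discovery rate correction mentioned in the microarray analysis above can be sketched as a plain-Python procedure; the p-value list is invented, and the authors relied on GenomeStudio's built-in correction rather than this illustrative implementation.

```python
def benjamini_hochberg(pvalues):
    """Return Benjamini-Hochberg adjusted p-values (q-values) in the original order."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # indices sorted by ascending p
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity of the adjusted values.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        q = min(prev, pvalues[i] * m / rank)
        adjusted[i] = q
        prev = q
    return adjusted

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]))
```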
First-strand cDNA was reverse transcribed from RNA (1 \u03bcg) using Superscript III according to manufacturer\u2019s protocol and qPCR performed using QuantiTech SYBR Green master mix on a Rotor-Gene Q (Qiagen). Relative gene expression was calculated using the \u0394\u0394Ct method in comparison with Hprt as the reference. Reverse transcription-PCR (RT-PCR) was performed using the OneStep RT-PCR kit (Qiagen) according to manufacturer\u2019s protocol. Primer sequences are provided in Supplementary Table\u00a0Total RNA from five independent MEC 3D cultures each from 20 and Stat124 genomic binding sites. ChIP-peaks were functionally annotated with GREAT version 3.0.049, using the basal extension rule linking peaks to the nearest transcription start site (TSS)\u2009\u00b1\u2009100\u2009kb.ChIP-Seq datasets corresponding to NCBI GEO accession numbers GSE66069, GSE60204 and GSE33913 were used to identify Blimp-127 using Lipofectamine 2000 (Invitrogen) according to the manufacturer\u2019s protocol. GFP positive cells, sorted by FACS at 24\u201348\u2009hours post transfection were cross-linked with 1% formaldehyde in growth media for 20\u2009min at RT and subsequently processed for ChIP using 10 \u03bcg of mouse anti-GFP IgG2a , as described previously50. Non-transfected CommD\u03b2 cells were used as negative controls. Eluted DNA samples were recovered using ChIP DNA Clean and Concentrator column kit (Zymo Research) and qPCR analysis of quadruplicate ChIP and input samples performed using QuantiTech SYBR Green master mix on a Rotor-Gene Q (Qiagen). Primers were designed to amplify 100\u2013200\u2009bp regions central to Blimp-1 binding sites identified in ChIP-Seq dataset GSE66069. Selected genes included c-Myc, Prdm1, Stat1 . P values \u22640.05 were considered significant.Statistical analyses were performed using two-tailed unpaired The microarray data has been deposited in NCBI GEO with the accession number GSE100747.Dataset 1Supplementary information"} +{"text": "Improving child nutritional status is an important step towards achieving the Sustainable Development Goals 2 and 3 in developing countries. Most child nutrition interventions in these countries remain variably effective because the strategies often target the child's mother/caregiver and give limited attention to other household members. Quantitative studies have identified individual level factors, such as mother and child attributes, influencing child nutritional outcomes.We used a qualitative approach to explore the influence of household members on child feeding, in particular, the roles of grandmothers and fathers, in two Nairobi informal settlements. Using in\u2010depth interviews, we collected data from mothers of under\u2010five children, grandmothers, and fathers from the same households.Our findings illustrate that poverty is a root cause of poor nutrition. We found that mothers are not the sole decision makers within the household regarding the feeding of their children, as grandmothers appear to play key roles. Even in urban informal settlements, three\u2010generation households exist and must be taken into account. Fathers, however, are described as providers of food and are rarely involved in decision making around child feeding. 
Lastly, we illustrate that promotion of exclusive breastfeeding for 6\u00a0months, as recommended by the World Health Organization, is hard to achieve in this community.These findings call for a more holistic and inclusive approach for tackling suboptimal feeding in these communities by addressing poverty, targeting both mothers and grandmothers in child nutrition strategies, and promoting environments that support improved feeding practices such as home\u2010based support for breastfeeding and other baby\u2010friendly initiatives. Grandmothers have central roles in child nutrition as they are key advisors to younger mothers in addition to being child caregivers themselves. They are important targets for child health interventions and should not be undermined in strategies aiming to reduce child malnutrition in urban informal settings.Child health interventions in urban informal settings not only should focus on increasing mothers' knowledge on child nutrition but also should include promotion of optimal child\u2010feeding practices and promotion of interventions that support mothers to combine breastfeeding and their other occupations.Reinforcement of interventions towards sustainable poverty reduction is fundamental in dealing with child undernutrition.1Nutrition is one of the most important factors that impact on children's development and is linked to short\u2010 and long\u2010term cognitive development and survival . The NUHDSS collects birth, death, and migration data every 4\u00a0months, and the Maternal and Child Health study collected child anthropometric measurements. From these data, households with stunted and not stunted under\u2010five children were identified. Community health workers in the study area helped identify the selected households based on their NUHDSS unique identifier. In total, 30 in\u2010depth interviews were conducted in the two sites: 12 with mothers of stunted children, eight with mothers of nonstunted children, six with grandmothers, and four with fathers, as shown in Table\u00a02.3The interviews were audio\u2010recorded then transcribed before translation into English. Using a deductive coding approach, the analysis was done using Nvivo 10 and guided by the conceptual framework on child health described in Figure\u00a02.4The study was approved by the African Medical and Research Foundation Ethics and Scientific Review Committee in Kenya and the Human Research Ethics Committee at the University of Witwatersrand in South Africa. All interviews were conducted in private, and written informed consent was sought from all participants.3Table\u00a0The results below are structured around the three thematic areas explored during the study: knowledge on child stunting, views and practices on child feeding, and decision\u2010making mechanisms on child feeding.3.1Mother nonstunted child, ViwandaniThe child looks small and yet he is older. You will notice from the body, the body does not match the number of years, and if you have such a child, he is sickly and will not want to play with other children. Mother stunted child, ViwandaniThe child is older but the body frame is small. My youngest is really small. His age mates are much taller \u2026. Respondents were asked to describe symptoms they associated with stunting and its perceived causes. The results show a broad knowledge on child stunting and its symptoms. 
Regardless of the child's stunting status, there was quasiunanimity among respondents in the two slums that stunted children were usually too small for their age, as two mothers noted.Grandmother, KorogochoSuch child can have swollen legs, pot belly, swollen cheeks; you might think he is fat which is not being fat. You are able to see he is not in good health, which is malnutrition. His hair turns red. You will get the child has sores all over the body. The same statement was echoed by grandmothers from the two slums who associated child stunting symptoms with an imbalance between the age and the height of a child. However, one respondent did not differentiate between stunting and signs of acute malnutrition:Mother stunted child, ViwandaniThe main problem was lack of food because at times in the morning when she wakes up, there was nothing to eat and she could only get strong tea in the morning, at lunch time maybe you cook white rice with nothing else. ugali and cabbage. The child will then sleep and when morning comes there is not even a cup of tea, the child will then stay hungry till evening again. Grandmother, KorogochoAnd this is because this child has been hungry all day and even when the mother comes in the evening she will bring When the causes of stunting were explored, respondents unanimously identified poverty and food insecurity and unbalanced diets as the major reasons. In particular, most of the respondents reported their limited access to food, especially food that is appropriate for feeding young children.Poor child health outcome including stunting was seen as a consequence of deprivation and the poor quality of life in the slums, parents' own poor health status, and parents' lack of interaction with their children. For instance, a caregiver's HIV status was related to child stunting in a situation where the affected parent or caregiver essentially prioritizes his or her own health rather than the feeding and health care of the child.Father, KorogochoI could tell you \u2026 the problem was the food given to the child by the mother. Mostly we, men, are out hustling for the most part of the day, so it is the responsibility of the mother to look after the child. Exploring respondents' views on individual responsibilities regarding child stunting, some fathers were reportedly not fulfilling their role to provide for their families' needs. However, from their own point of view, fathers did not appear to perceive themselves to have a direct responsibility when their child is stunted; the mother or caregiver was identified as the person primarily responsible.We noted that no reference was made to common practices that may be driven by cultural or religious norms and standards in relation to child feeding. Finally, we also noted that caregivers of stunted children were not always aware that their children were stunted.3.2Mother nonstunted child, KorogochoI used to breastfeed her, but after one month I had to do my KCSE (Kenya Certificate of Secondary Education). Then I started giving her Nan milk (infant formula). ugali and that is how I have been bringing him up. Mother stunted child, ViwandaniI felt that if I am going to sit here breastfeeding this child, I will suffer even more. So after one week I introduce other food like milk and porridge and any other food that I have found. Even if it is Respondents' views and actual practices on child feeding were explored from two angles in this study. 
We first asked specific questions on exclusive breastfeeding in the first 6\u00a0months, and then the introduction of complementary feeding was explored. The results show that most of the respondents in the two slums, regardless of the child's stunting status, recognized the importance of exclusive breastfeeding for a newborn but found it hard to sustain up to 6\u00a0months. In many cases, there were attempts to exclusively breastfeed in the first 2 or 3\u00a0months of child's birth, but for different reasons, mothers opted to stop exclusive breastfeeding before their child reached 6\u00a0months. Among the reasons, mothers found it impossible to combine their occupation , with exclusive breastfeeding of their child. The findings were observed among both mothers of stunted children and those of nonstunted children.Mother nonstunted child, ViwandaniWhen I just realized that I was pregnant (five months after birth), I stopped breastfeeding. Mother stunted child, ViwandaniI stopped breastfeeding because I was taking family planning contraceptives. I thought the contraceptives were making him sick. However, misinformation about health effects of contraception, often promoted in the postnatal period, was also found.Grandmother, KorogochoExclusive breastfeeding is good because it is vital in development in the child's brain. Then the milk of the mother helps prevent diseases. The mother's milk is always ready, it needs neither boiling nor cooking, and it has no budget. Grandmother, ViwandaniI think it is important because I noticed it with this baby, when he breastfed for six months exclusively, I have not seen this child disturbing at all. Being unable to breastfeed exclusively until 6\u00a0months was common in the two slums with no difference between mothers of stunted children and those of nonstunted children. Grandmothers, however, were mostly favourable towards exclusive breastfeeding, even though some of them remained unclear on the duration. In many cases, they advised young mothers to comply with the practice and offered their support to take care of babies when mothers were away. However, they seemed not to be cognizant of the complexity of caring for an exclusively breastfed child without having the needed breast milk within easy reach.Mother nonstunted child, ViwandaniWhen I gave birth, after six months I started giving the baby water, fruits, and continued like that, as I introduced other feeds slowly by slowly until the baby was one\u2010year\u2010old and able to feed by himself. The findings also reveal that there is no standard or informed practice in the study area when breastfeeding is interrupted. Mothers and caregivers usually give the child the types of food that are available depending on their economic capability; a mother of a nonstunted child reported that she would not prepare something different, the child would just eat what is prepared for everybody, because there is no money to prepare weaning foods. In few cases, mothers tried to provide a more structured diet for their newborn.ugali followed by some boiled milk. After all these efforts, I would find his health is improving, and he is growing normally. Mother stunted child, KorogochoI changed it \u2026 so I would give him fruits, milk, bananas, all to boost his appetite. I would make him a mixture with a red fruit and give him. After that he would eat some plain Stunted children usually need an appropriate diet in order to improve their nutritional status. 
During the interviews, mothers, who recognized that their children were stunted, were asked if they thought it was necessary to feed their child differently to help them recover. Affirmative answers were noted from mothers in the two slums. The moment when the child is seen as not growing well and taken frequently to the hospital for care was usually the moment when mothers of stunted children realized the need to take care of their child differently, including changing feeding practices to avoid fatal consequences.However, this finding may not necessarily reflect usual responses to stunting in the slums as a number of mothers provided the food that was available to them as determined by their economic situation. Overall, economic circumstances were more important in determining child\u2010feeding practices.Another result is that child\u2010feeding practices in the slums did not differ by background characteristics, particularly by site or ethnic group. For instance, difficultly to exclusively breastfeed children for 6\u00a0months was found among all mothers regardless of their ethnic group.3.3Mother stunted child, ViwandaniWhen I introduced cow's milk and porridge at six months, my mother is the one who advised me to do that. Mother nonstunted child, KorogochoMy mother used to advise me on the importance of breastfeeding the child for long. Our interest focused on how decisions on child feeding are taken in the study areas and who the main persons involved are. This information could be the basis of targeted strategies to address child suboptimal feeding in the slums. The findings show that mothers are not the only decision makers on the feeding of their child. Beside health providers, grandmothers were commonly reported to have a key role in that process. Most of the mothers recognized and valued advices from grandmothers, and this was the case for mothers of children who were stunted and not stunted.Grandmother, KorogochoIt is good for me to sit down with them (young mothers) and teach them how they should feed these children and the types of food that the children should be given, how they should bring up the kids and the importance of taking the children for clinics. Grandmothers considered it normal to advise young mothers on how to take care of their babies including child feeding. Most grandmothers proudly assumed that role and justified it by their intention to ensure good health and a better future for their grandchildren. The role of grandmothers in child nutrition in this community is central as it goes beyond the basic advisory tasks. They are also child caregivers and mentors for young mothers as one reported:Grandmother, KorogochoThese days, you can see how they (young mothers) behave \u2026 If you talk to them, they would say \u2018what this old woman can teach me?\u2019 I would not be surprised if their children are fed on French fries at night. However, some grandmothers did not always think their advice was valued. Indeed, their comments implied generational shifts in norms and suggested intergenerational conflict between old and young mothers in this community, as a grandmother reportedLooking at the roles of fathers in child feeding, the results show that most of the fathers interviewed indicate that their role is limited to that of providing food, and they have developed strategies to cope with their expected duties. 
Their role did not extend to feeding children.4This qualitative study aimed to explore the influence of household members on child feeding, in particular, the roles of grandmothers and fathers.The study shows broad knowledge on child stunting among mothers and grandmothers. Although respondents mostly described stunting as an imbalance between the age and the height of a child, some did not differentiate stunting from acute malnutrition. Other studies from the same setting also found broad knowledge on child stunting among mothers increasing mothers' knowledge on child nutrition including removing the misconceptions around breastfeeding and pregnancy or use of contraceptives; (b) prioritizing interventions that support mothers to combine breastfeeding and other occupations, through human milk banking, and other baby\u2010friendly initiatives; (c) promoting optimal child\u2010feeding practices, in particular, when exclusive breastfeeding ends; and (d) reinforcing interventions for more sustainable poverty reduction programmes in these settings."} +{"text": "The idea of screening patients for low health literacy has had a polarizing effect in the health literacy community. One side feels that universal precautions and screening for low health literacy are mutually exclusive; meaning, if you screen patients for low health literacy, you are not adhering to the universal precautions approach. Others may be concerned about the feasibility of fully scaling universal precautions across a clinical enterprise and favor a more targeted approach that identifies patients with risk factors for low health literacy so that interventions that rely on limited resources can be allocated where there is greatest potential for benefit. Given recent changes in health care delivery models, we propose that the time has come to consider a hybrid approach that employs both the foundation of universal precautions for all patients, as well as identification of those for whom universal precautions alone may not suffice, due to extreme needs. When this combined approach is used, the limitations of each approach are mitigated by the other. This integrated approach is well aligned with recent innovation in the health care landscape and should be considered by researchers, providers, and policymakers.Health literacy universal precautions were first operationalized in 2010 to address the complex demands faced by patients in health systems in the United States . The appAlthough health literacy universal precautions have been broadly supported by researchers, policymakers, and promoters of health literacy, recent publications have revealed barriers to their adoption in health systems. Indeed, some of the patient-level universal precautions may be challenging to integrate routinely into busy clinical environments, whereas others are resource-intensive. Weiss posits tResearchers have developed efficient methods of screening patients for low health literacy using validated questions that assess patients' self-reported understanding of medical information and forms . The quePatient-level health literacy data make it possible for clinical staff who have completed training in clear health communication to provide additional assistance to patients with low health literacy levels. 
At a population level, these screening data are also useful for examining the association of health literacy with processes and outcomes of care ; for detThroughout the last decade, concerns have arisen regarding the risk of stigmatizing and embarrassing patients through health literacy screening . In addiOther studies have shown that patients feel that it is useful for their providers to know about their health literacy challenges , and whe\u201cHealth literacy 2.0,\u201d a hybrid approach to addressing health literacy in health systems, integrates universal precautions with targeted assistance for patients with lower health literacy levels. This approach supports striving for clear communication with all patients but recognizes that resources are limited and gaps in implementation of universal precautions are prevalent. Thus, the approach also involves screening to identify patients at greatest risk, so that resources can be directed to support their care.System-wide screening and documentation in EHRs produces data that can be used for targeted interventions, population health opportunities, and point-of-care best practices that are data-driven, rather than universal or result from profiling. Further, screening patients for low health literacy is in alignment with risk-based models of medical care and models of precision medicine that are prevalent in the current health care landscape and include attention to social determinants of health .\u201cHealth literacy 2.0\u201d should acknowledge the limitations of past and current attempts to address health literacy on large scales, learn from research and evidence, and embrace the landscape of innovation that is the context of our current health systems. The hybrid approach of universal precautions and screening for health literacy holds promise in this future. As health care shifts toward more tailored approaches to care, additional evidence is needed about how to best identify and deliver personalized care to patients with risk factors for poor outcomes related to health literacy.Historically, conceptual and analytical models of health literacy reveal causal pathways that position health literacy as an influencer of health behaviors that determine health outcomes . Most he(Table 1).A notable health literacy innovation that has influenced how care is and should be delivered in the U.S. is the Ten Attributes of Health Literate Care Organizations model . This moOne of the most influential innovations in the health care landscape in the U.S. is the adoption of the Institute for Healthcare Improvement's Triple Aim , which fhttps://innovation.cms.gov/initiatives/ACO/), can use health literacy \u201crisk\u201d data to identify patients for whom an evidence-based intervention and/or best practices will likely lead to improvement of specific health outcomes. As health systems personalize medicine and medical care, patients' individual capacity to understand information and choices becomes essential. Health literacy screening in patient EHRs can provide data that can also be used in population health strategies, quality and satisfaction efforts, and a host of medical informatics initiatives. There are vast opportunities for health systems researchers and administrators to test and disseminate novel ways of integrating health literacy in this new and evolving landscape.Other health care delivery model innovations have created a space for health literacy. 
Health systems, especially those that are a part of an Accountable Care Organization are applied as a safety net for all patients, and screening is used to identify patients at greater risk, we can better serve both population and individual patient needs. Screening patients for low health literacy requires a modest amount of staff training to normalize the questions and ask them respectfully, as well as time to collect data . HoweverAlthough integrating these two approaches may provide opportunity to better serve patients and health systems, the need for evidence-based interventions is significant. As researchers and practitioners respond to the call to innovate, we must ensure that we continue to develop, test, and disseminate new research on interventions that are effective in this new context."} +{"text": "Taxus cuspidata is well known worldwide for its ability to produce Taxol, one of the top-selling natural anticancer drugs. However, current Taxol production cannot match the increasing needs of the market, and novel strategies should be considered to increase the supply of Taxol. Since the biosynthetic mechanism of Taxol remains largely unknown, elucidating this pathway in detail will be very helpful in exploring alternative methods for Taxol production.Taxus cuspidata transcriptomes with next-generation sequencing (NGS) and third-generation sequencing (TGS) platforms. After correction with Illumina reads and removal of redundant reads, more than 180,000 nonredundant transcripts were generated from the raw Iso-Seq data. Using Cogent software and an alignment-based method, we identified a total of 139 cytochrome P450s (CYP450s), 31 BAHD acyltransferases (ACTs) and 1940 transcription factors (TFs). Based on phylogenetic and coexpression analysis, we identified 9 CYP450s and 7 BAHD ACTs as potential lead candidates for Taxol biosynthesis and 6 TFs that are possibly involved in the regulation of this process. Using coexpression analysis of genes known to be involved in Taxol biosynthesis, we elucidated the stem biosynthetic pathway. In addition, we analyzed the expression patterns of 12 characterized genes in the Taxol pathway and speculated that the isoprene precursors for Taxol biosynthesis were mainly synthesized via the MEP pathway. In addition, we found and confirmed that the alternative splicing patterns of some genes varied in different tissues, which may be an important tissue-specific method of posttranscriptional regulation.Here, we sequenced Taxus spp. using molecular breeding or plant management strategies and synthesizing Taxol in microorganisms using synthetic biological technology.A strategy was developed to generate corrected full-length or nearly full-length transcripts without assembly to ensure sequence accuracy, thus greatly improving the reliability of coexpression and phylogenetic analysis and greatly facilitating gene cloning and characterization. This strategy was successfully utilized to elucidate the Taxol biosynthetic pathway, which will greatly contribute to the goals of improving the Taxol content in The online version of this article (10.1186/s12870-019-1809-8) contains supplementary material, which is available to authorized users. Taxus cuspidata, an evergreen woody plant from the Taxaceae family that is native to Northeast China, Korea, Japan and the extreme southeast of Russia [T. 
cuspidata is well known worldwide for its ability to produce the antitumor metabolite Taxol, a complex tetracyclic diterpenoid that is mainly produced by plants from the Taxus genus [Taxus species and artificial semisynthesis from the extracted intermediates baccatin III or 10-deacetylbaccatin III (DAB) [Taxus spp. and the high costs of purifying Taxol and its intermediates [Taxus cell culture, metabolic engineering and synthetic biology methods.f Russia , has beef Russia , 2. Howeus genus . Since Tus genus , 5. The us genus . Furtherus genus . At presII (DAB) , 8. Unfomediates . TherefoTaxus spp., which are responsible for hydroxylation at the C-2, C-5, C-7, C-10, and C-13 positions [Elucidating the biosynthetic pathway of Taxol in detail is essential in exploring alternative methods for Taxol production. In plants, all terpenoids arise from the common precursors dimethylallyl pyrophosphate (DMAPP) and isopentenyl diphosphate (IPP), which are typically synthesized by either the mevalonic acid (MVA) pathway in the cytoplasm or the methylerythritol phosphate (MEP) pathway in the plastid , 10. Forositions . All of ositions , 12. HowT. chinensis to be involved in transcriptional activation of the DBAT gene. Lenka et al. [T. cuspidata play negative roles in Taxol biosynthesis. However, in some cases, TFs from the same families were shown to potentially play opposite roles in regulating Taxol biosynthesis. Zhang et al. [T. chinensis. The functional diversity of TFs has led to difficulty in understanding the regulatory mechanism of Taxol biosynthesis.Understanding the regulatory mechanism of Taxol biosynthesis is a necessary prerequisite for improving the Taxol content in intact plants, tissues and cell cultures using biotechnology. Most transcription factors (TFs) are very important for regulating plant growth and development as well as for the biosynthesis of secondary metabolites , 15. A na et al. demonstrg et al. reportedTaxus spp.Alternative splicing (AS) can produce multiple transcript isoforms from a single pre-mRNA via variable splice site selection . In planZea mays [Sorghum bicolor [Arabidopsis thaliana [Transcriptome analysis-based next-generation sequencing (NGS) technology is a powerful and economical way to obtain genetic information on a large scale and has been widely used to uncover genes involved in the biosynthesis of secondary metabolites , 25. AltZea mays , Sorghum bicolor , Arabidothaliana , and strthaliana .T. cuspidata genome is currently available, inhibiting elucidation of the Taxol biosynthetic pathway. Here, we developed a practical strategy to mine candidate genes involved in Taxol biosynthesis based on an Iso-Seq analysis of the T. cuspidata transcriptome. The candidate sequences and in-house pipelines produced from this study provide valuable resources for the elucidation of Taxol biosynthesis and will be beneficial for future studies on the production of Taxol or its precursors with synthetic biology technology.Due to its large genome size, little information about the T. cuspidata grown in Institute of Medicinal Plant Development served as the source of plant material in this study. Roots, stems and leaves were collected in three duplicates. After collection, all samples were immediately frozen in liquid nitrogen and stored at \u2212\u200980\u2009\u00b0C prior to RNA extraction. Total RNA was extracted using an RNAprep Plant Kit and quantified by Qubit . 
The RNA integrity was evaluated on an Agilent 2100 Bioanalyzer .Total RNA from different tissues was mixed at equal ratios. Poly(A) RNA was isolated from total RNA using Dynal oligo(dT)25 beads and used for construction of the Iso-Seq library. The first cDNA strand was synthesized from purified polyA RNAs using the Clontech SMARTer PCR cDNA Synthesis Kit . After PCR optimization, large-scale PCR was performed to synthesize the second cDNA strand without size selection. Equimolar mixed libraries of unfiltered fragments and\u2009>\u20094\u2009kb fragments were prepared with the SMRTbell Template Prep Kit 1.0. Sequencing was performed on a PacBio Sequel platform. A total of four SMRT cells were utilized in this study.The raw data were processed using SMRTlink 4.0 software. Circular consistency sequences (CCSs) were generated from subread sequences by mutual correction and then classified into full-length or non-full-length reads by examining whether the 5\u2032 primer, 3\u2032 primer, or polyA tail was present. Full-length reads were corrected by isoform-level clustering (ICE) to obtain clustered consensus sequences, and then final arrow polishing was performed with non-full-length reads to obtain polished consensus sequences. Finally, the high-quality consensus transcripts of multiple libraries were merged, and redundant reads were removed based on CD-HIT-EST (\u2212c 0.99) to obtain nonredundant transcripts . The CodA total of twelve RNA samples from the roots, stems and leaves were used in four duplicates to construct the transcriptome sequencing library. The transcriptome library was pair-end sequenced on the Illumina HiSeq\u2122 2000 platform. Clean reads were used for error correction to obtain the polished consensus sequences as described above. For comparison of Iso-Seq and RNA-Seq data, Illumina data from the same samples were assembled with Trinity and SOAP to produce unigenes. Coding sequences (CDSs) from RNA-Seq unigenes predicted using Swiss-Prot and NCBI Non-redundant Protein (Nr) data and the CDSs from unigenes assembled by Cogent using ANGEL software were comhttp://www.kegg.jp/) [https://www.uniprot.org/uniprot/) [https://pfam.xfam.org) [ftp://ftp.ncbi.nih.gov/pub/COG/KOG/) [https://www.ncbi.nlm.nih.gov/protein/) [https://www.ncbi.nlm.nih.gov/nucleotide/) and Gene Ontology (GO)(http://www.geneontology.org/) [https://CRAN.R-project.org/package=pheatmap).Gene functions were annotated using the following databases: Kyoto Encyclopedia of Genes and Genomes (KEGG) (egg.jp/) , Swiss-Pniprot/) , Pfam (hfam.org) , euKaryoOG/KOG/) , NCBI Norotein/) , NCBI Nogy.org/) . The expgy.org/) with IllSeveral tools have been used to evaluate the coding potential of unigenes, such as Coding Potential Calculator (CPC) , Coding-T. cuspidata. The classification of TcuCYP450 proteins was based on reference sequences from a P450 database established by Nelson. BAHD ACTs were identified by searching for the key word \u201cPF02458\u201d in the Pfam database. 
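The redundancy-removal step described above (CD-HIT-EST at a 0.99 identity cutoff) can be wrapped in a short script. The sketch below is a minimal, hypothetical wrapper, assuming cd-hit-est is installed on the PATH and that "transcripts.fasta" holds the merged polished consensus reads; file names, thread count and memory limit are placeholders, not the authors' actual pipeline settings.

```python
import subprocess
from pathlib import Path

def count_fasta_records(path):
    """Count sequences in a FASTA file by counting header lines."""
    with open(path) as handle:
        return sum(1 for line in handle if line.startswith(">"))

def remove_redundancy(infile="transcripts.fasta",
                      outfile="nonredundant.fasta",
                      identity=0.99, threads=8, memory_mb=16000):
    """Run CD-HIT-EST to collapse transcripts at the given identity cutoff."""
    cmd = [
        "cd-hit-est",
        "-i", infile,          # merged polished consensus transcripts
        "-o", outfile,         # nonredundant transcripts
        "-c", str(identity),   # identity threshold (-c 0.99, as in the text)
        "-T", str(threads),    # CPU threads (placeholder)
        "-M", str(memory_mb),  # memory limit in MB (placeholder)
    ]
    subprocess.run(cmd, check=True)
    print(f"{count_fasta_records(infile)} input transcripts -> "
          f"{count_fasta_records(outfile)} nonredundant transcripts")

if __name__ == "__main__":
    if Path("transcripts.fasta").exists():
        remove_redundancy()
```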
Unigenes were submitted to the iTAK online program (version 1.7.0b) [Arabidopsis thaliana and Salvia miltiorrhiza used for TF comparative analysis were derived from NCBI.Based on functional annotations from the Swiss-Prot and Pfam databases, the unigenes of CYP450s were identified in 1.7.0b) for idenThe CYP450, WRKY, bHLH and ERF phylogenetic trees were constructed using the neighbor-joining (NJ) method with the \u201cPoisson correction\u201d and \u201cpairwise deletion of gaps\u201d functions in MEGA6 software [\u2013\u25b3\u25b3Ct method.RNA samples were isolated from the roots, stems and leaves in three biological replicates. Reverse transcription was performed using the GoScript\u2122 Reverse Transcription System kit . For each sample, reverse transcription was performed using 2\u2009\u03bcg of total RNA and 200\u2009U\u2009M-MLV Transcriptase (Promega) in a 40\u2009\u03bcl volume. The reaction was carried out at 25\u2009\u00b0C for 5\u2009min, 42\u2009\u00b0C for 60\u2009min and 70\u2009\u00b0C for 15\u2009min. A qPCR analysis was then conducted in triplicate using SYBR Premix Ex Taq and a 7500 Real-time PCR system (ABI). The reaction mixture (20\u2009\u03bcL) contained 10\u2009\u03bcL of 2\u2009\u00d7\u2009SYBR Premix Ex Taq, 0.5\u2009\u03bcL of each forward and reverse primer, and 1\u2009\u03bcL of template cDNA. PCR amplification was performed under the following conditions: 95\u2009\u00b0C for 30\u2009s; 40\u2009cycles of 95\u2009\u00b0C for 5\u2009s, 60\u2009\u00b0C for 30\u2009s and 72\u2009\u00b0C for 15\u2009s; and 95\u2009\u00b0C for 10\u2009s. The primers used in this study are listed in Additional\u00a0file\u00a0T. cuspidata plants were pooled together and reverse transcribed into cDNA. To minimize bias that favors sequencing shorter transcripts, unfiltered and\u2009>\u20094\u2009kb cDNA fragments were equally mixed and used to construct sequencing libraries. Using the PacBio Sequel platform, a total of 5,678,524 subreads with an average length of 2047\u2009bp were generated. According to the bioinformatics procedure shown in Figs.\u00a0T. cuspidata transcriptome.To identify as many transcripts as possible, equal amounts of total RNA from the roots, stems and leaves of To compare the integrality of transcripts from Iso-Seq and RNA-Seq, RNA-Seq data from the same samples were assembled with Trinity and SOAP, producing 63,872 and 53,781 unigenes, respectively. As shown in Fig.\u00a0To capture the most informative and complete annotation information, we used a basic local alignment search tool (BLAST) to annotate all the unigenes based on sequence similarity searches against public databases, including the Swiss-Prot, KEGG, GO, KOG, NCBI Nr, and NCBI Nt databases. In addition, annotation was performed with hmmScan based on a domain similarity search against the Pfam database. In total, 42,920 unigenes were successfully matched to known sequences or domains in at least one of the seven databases, and 10,388 unigenes were annotated in all the databases Fig.\u00a0a. The laGs) Fig.\u00a0b. In othPAM gene, involved in side chain synthesis, exhibited expression patterns different from those of TS, as PAM was expressed at the lowest level in the root, followed by the stem, and exhibited the highest expression in the leaf. Among ten genes encoding enzymes modifying the Taxol skeleton, six had expression patterns similar to those of TS. 
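The relative quantification mentioned above (the ΔΔCt method) follows the standard Livak calculation: the target gene Ct is first normalized against the reference gene, the normalized value of the test sample is then compared with that of the control, and fold change is 2 raised to the negative ΔΔCt. A minimal sketch is given below; the Ct values are invented purely for illustration.

```python
def delta_delta_ct(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ΔΔCt (Livak) method.

    ΔCt = Ct(target) - Ct(reference); ΔΔCt = ΔCt(test) - ΔCt(control);
    fold change = 2 ** (-ΔΔCt).
    """
    d_ct_test = ct_target_test - ct_ref_test
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2 ** (-(d_ct_test - d_ct_ctrl))

# Illustrative (made-up) Ct values for one target gene in root vs. leaf,
# each normalized against an internal reference gene.
fold = delta_delta_ct(ct_target_test=22.1, ct_ref_test=18.0,
                      ct_target_ctrl=25.4, ct_ref_ctrl=18.2)
print(f"fold change (root relative to leaf) = {fold:.2f}")
```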
In the left four genes, TAT showed high expression in all three tissues, while T2\u03b1H and T7\u03b2H showed moderate expression in all tissues; T13\u03b1H exhibited an expression trend opposite that of TS, showing the lowest expression in the root and the highest expression in the leaf. According to our coexpression analysis, we hypothesized that the enzymes involved in the stem biosynthetic pathway are TS, T5\u03b1H, TAT, T10\u03b2H, TBT, DBAT, BAPT and DBTNPT. In addition, several genes in the first part of the biosynthetic pathway were expressed at substantially higher levels than those in the final steps. In particular, the gene encoding the last enzyme, DBTNPT, was expressed at the lowest level in all plant tissues, which is one of the reasons underlying the extremely low Taxol content in T. cuspidata. Therefore, we hypothesized that dramatically improving DBTNPT expression will substantially contribute to Taxol production. We also investigated the coexpression of Taxol-specific genes with several upstream genes and found that GGPPS and two genes from the MEP pathway, DXS and DXR, were consistently coexpressed with most of the downstream genes, such as TS, T5\u03b1H, T10\u03b2H, TBT, DBAT, BAPT and DBTNPT, while HMGS and HMGR from the MVA pathway had expression patterns different from those downstream genes, suggesting that the isoprene precursors for Taxol biosynthesis may primarily derive from the MEP pathway, which is consistent with previous reports [As shown in Fig. reports .AS is a regulated process that increases the diversity of an organism\u2019s transcriptome and proteome, mediating plant biological processes ranging from plant development to stress responses , 22. In Six unigenes were used to validate the authenticity of the AS events using the RT-PCR method, including two genes encoding CYP450s, one encoding a TF and three encoding splicing factors and 3 subfamilies .Plant CYP450s are heme-containing enzymes that play roles in a wide variety of both primary and secondary metabolism reactions \u201351. To dTaxus CYP450s via analysis of high-throughput RNA sequencing data from G. biloba and found that G. biloba suspension cells exhibit taxoid 9\u03b1-hydroxylation activity. This CYP450 belongs to the CYP716B subfamily, suggesting that C9 hydroxylases in T. cuspidata may belong to the CYP716B subfamily , which catalyzes taxa-4(20),11(12)-diene-5\u03b1-ol into taxa-4(20),11(12)-diene-5\u03b1-yl acetate in the Taxol pathway. However, we did not find the other four characterized BAHD ACTs in the T. cuspidata transcriptome. According to the real-time results, the four enzymes were expressed at substantially lower levels in T. cuspidata than TcuACT31 had the same expression patterns as TAT and TBT, as their expression quantities in root tissue were similar to that in leaf tissue but significantly higher than that in stem tissue exhibited expression trends similar to those of BAPT, DBAT and DBTNBT, as their levels in roots were significantly higher than those in stems and leaves. We inferred that these seven BAHD ACTs are possible candidates for the Taxol pathway.All BAHD ACTs from S. miltiorrhiza (1948 TFs), and to that of the model plant A. thaliana (2357 TFs). As shown in Fig.\u00a0T. cuspidata, an obviously higher number than those in the angiosperm A. thaliana and S. miltiorrhiza. 
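The coexpression screening described above rests on comparing expression profiles across tissues and flagging genes whose profiles track the pathway genes such as TS. As a simplified stand-in for that analysis, the sketch below computes Pearson correlations between tissue expression profiles; the gene names follow the text, but the FPKM values are invented for illustration only.

```python
import pandas as pd

# Hypothetical FPKM values across root, stem and leaf (numbers are invented).
fpkm = pd.DataFrame(
    {"root": [120.0, 95.0, 88.0, 4.0],
     "stem": [60.0, 48.0, 41.0, 2.5],
     "leaf": [15.0, 11.0, 14.0, 30.0]},
    index=["TS", "T5aH", "DBAT", "PAM"],
)

# Pearson correlation between gene expression profiles across the three tissues.
corr = fpkm.T.corr(method="pearson")

# Genes whose profiles closely track TS are coexpression candidates.
candidates = corr["TS"].drop("TS")
print(candidates[candidates > 0.9])
```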
Plant C2H2 zinc finger proteins are mainly involved in the growth and development of plants at various stages and the regulation of gene expression under environmental stress, including extreme temperatures, salinity, drought, oxidative stress and excessive light [T. cuspidata genomes during evolution, resulting in the rapid expansion of these genes, and they may have special functions in T. cuspidata. Seven TFs have been identified to be involved in Taxol pathway regulation, including one WRKY, three bHLHs, two ERFs and one AP2 TF [We herein identified 1940 unigenes representing putative TFs distributed across 61 families and including bZIPs, bHLHs, WRKYs, and MYBs. The number of TFs is comparable to that of another diterpenoid-producing plant, liana 235 TFs. As ve light . The phee AP2 TF \u201319.Fig. T. chinensis is the sole WRKY TF that regulates Taxol biosynthesis. TcWRKY1 was reportedly able to specifically bind to W-box elements (TTGAC(C/T)) within the DBAT promoter and activate DBAT expression [T. cuspidata transcriptome, although six TcuWRKYs were divided into the same subgroup, Group IIa, with TcWRKY1 were then subjected to further analysis. An alignment-based method was used to cluster and remove redundant reads to generate unigenes. For example, with this pipeline, 813 nonredundant transcripts annotated as CYP450 were clustered into 139 CYP450 unigenes, while with Cogent, among all CYP450 nonredundant transcripts, only 595 transcripts were constructed into the 136 UniTransModels, and 218 CYP450 transcripts were left. According to the manually curated CYP450 unigenes, two types of mistakes were evident in the Cogent output. One is that transcripts from one CYP450 were constructed into different UniTransModels, and the other is that transcripts from two CYP450s were constructed into one UniTransModel. Our in-house pipeline was more accurate than Cogent, partly because manual evaluation was used, and was thus utilized for the subsequent gene family analysis. The candidate genes related to Taxol biosynthesis were screened from these gene families by analyzing phylogenetic relationships, conserved motifs and expression profiles. This strategy avoids the mistakes introduced by short-read assembly in NGS and can produce assembly-free, highly accurate full-length and near full-length unigenes, which not only substantially contribute to specific gene cloning and characterization by molecular biology methods but also significantly improve the accuracy of gene annotation and gene expression quantification. With this strategy, we successfully identified 9 CYP450s and 7 BAHD ACTs as lead candidates for Taxol biosynthesis.T. cuspidata transcriptome, which is comparable to the number of TFs in another diterpenoid-producing plant, S. miltiorrhiza, and the number of TFs in the model plant A. thaliana. However, it is interesting that significant expansion of the C2H2 family was observed in T. cuspidata, suggesting that C2H2 TFs may play important roles in the survival of T. cuspidata. Three categories of TFs are reportedly involved in the Taxol pathway, including WRKY, bHLH and ERF/AP2 TFs. Using a strategy similar to that used for the discovery of enzymes involved in Taxol biosynthesis, 14 TFs were identified as candidates for the regulation of Taxol biosynthesis, including six WRKYs, one bHLH and seven ERFs.Long-read transcriptome sequencing also provided a substantial amount of genetic information for transcriptional and posttranscriptional regulation analyses, such as TFs, AS and lncRNAs. 
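The in-house alignment-based clustering used above to collapse, for example, 813 CYP450 transcripts into 139 unigenes is not published as code; the sketch below is a deliberately simplified stand-in that groups transcripts greedily when their pairwise similarity exceeds a threshold. The similarity measure (difflib) and the toy sequences are assumptions for illustration, not the authors' method.

```python
from difflib import SequenceMatcher

def identity(a, b):
    """Rough sequence similarity; a stand-in for a real pairwise alignment."""
    return SequenceMatcher(None, a, b).ratio()

def cluster_transcripts(seqs, threshold=0.90):
    """Greedy clustering: a transcript joins the first cluster in which it
    matches any member above the identity threshold."""
    clusters = []
    for name, seq in seqs.items():
        for cluster in clusters:
            if any(identity(seq, seqs[m]) >= threshold for m in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Toy transcripts (hypothetical, not real CYP450 sequences).
seqs = {
    "iso1": "ATGGCTTGCATTGGACGATTCA",
    "iso2": "ATGGCTTGCATTGGACGATTCA",    # identical to iso1
    "iso3": "ATGGCTAGCATTGGACGTTTCA",    # slight variant of iso1
    "iso4": "ATGCCGTTAACGGGTACCATGA",    # unrelated
}
for i, cluster in enumerate(cluster_transcripts(seqs), start=1):
    print(f"unigene {i}: {cluster}")
```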
TFs that regulate transcriptional initiation by binding to cis-regulatory elements in promoters or enhancers are key players in regulating the biosynthesis of secondary metabolites . In geneT. cuspidata [AS is one of the most important posttranscriptional regulations and can affect gene expression by multiple regulatory mechanisms, and we herein observed that TFs easily underwent AS events in uspidata . In totauspidata , 67. Recuspidata . The rolLncRNAs are another important group of regulators that play vital roles in plant stress responses , 69. LncIn summary, compared to NGS, the TGS transcriptome provides substantially longer and more accurate sequence resources for gene discovery and AS analysis. However, for the Iso-Seq analysis of species without a reference genome assembly, the currently used software Cogent does not effectively cluster transcripts into unigenes or identify AS events. Here, we developed an in-house pipeline to analyze gene families and their AS events to screen candidate genes related to Taxol biosynthesis and regulation. This pipeline can plausibly be used for unigene generation and AS analysis at the gene family level, as it was successfully used to identify candidate genes related to the Taxol biosynthesis pathway. However, to analyze AS at the whole-transcriptome level, the effectiveness of Cogent needs to be improved or a more effective, novel software program needs to be developed to better analyze TGS transcriptome data without a reference genome.T. cuspidata transcriptome sequencing with PacBio SMRT technology. With this strategy, we identified 9 CYP450s and 7 BAHD ACTs as the lead candidate genes in the Taxol biosynthetic pathway and 6 TF genes that may regulate this pathway. We also investigated the coexpression of known genes in Taxol biosynthesis and elucidated the stem biosynthetic pathway based on the rule that genes in the same pathway are coexpressed. A coexpression analysis also suggested that the isoprene precursors for Taxol biosynthesis are mainly synthesized via the MEP pathway. In addition, we found and confirmed the existence of tissue-specific AS events, which represent a possible posttranscriptional mechanism in the regulation of Taxol biosynthesis. Our study provides not only a valuable resource for investigating novel genes in Taxol biosynthesis but also a practical procedure for screening candidate genes involved in secondary metabolite biosynthesis and analyzing AS events in organisms without reference genomes based on long-read transcriptome sequencing.We developed an in-house pipeline to search for candidate genes involved in Taxol biosynthesis based on Additional file 1:Table S1. Proteins derived from GenBank used for the phylogenetic analysis. (XLSX 13 kb)Additional file 2:Table S2. Primers used in this study. (XLSX 20 kb)Additional file 3:Table S3. RNA-Seq data of identified genes and candidates for Taxol biosynthesis in T. cuspidata. (XLSX 46 kb)Additional file 4:Figure S1. Venn diagram of unigene numbers of Iso-Seq from the KEGG, Swiss-Prot, Pfam, NT, NR, and KOG databases for T. cuspidata. (TIF 1151 kb)Additional file 5:Figure S2. Identification of lncRNAs. (TIF 394 kb)Additional file 6:Figure S3. GO annotation of unigenes. (TIF 6225 kb)Additional file 7:Figure S4. Unigene functional classification by KEGG. The abscissa indicates the number of genes annotated to the pathway, and the ordinate indicates the subcategories. 
The pathway is divided into five categories in this analysis, including Cellular Processes, Environmental Information Processing, Genetic Information Processing, Metabolism, and Organismal Systems. (TIF 1344 kb)Additional file 8:Figure S5. Functional classification of unigenes exhibiting increased expression in roots by KEGG analysis. (TIF 294 kb)Additional file 9:Figure S6. Phylogenetic tree of 139 CYP450 proteins in T. cuspidata. The image shows an NJ tree made using CLUSTAL Omega . The tree was drawn with Figtree v1.4.4 and labeled in GIMP2.8.2. (PNG 664 kb)Additional file 10:Figure S7. Phylogenetic tree of WRKY domains among T. cuspidata and A. thaliana. Filled green diamonds represent unigenes in T. cuspidata, and the filled red circle indicates the identified gene in Taxus. (TIF 2580 kb)Additional file 11:Figure S8. Phylogenetic analyses of bHLH domains in T. cuspidata and A. thaliana. Filled green diamonds represent unigenes in T. cuspidata, and the filled red circle indicates the identified gene in Taxus. (TIF 1382 kb)Additional file 12:Figure S9. Phylogenetic analyses of ERF domains in T. cuspidata and A. thaliana. Filled green diamonds represent unigenes in T. cuspidata, and the filled red circle indicates the identified gene in Taxus. (TIF 2809 kb)"} +{"text": "Cajanus cajan and one of its wild relatives, Cajanus platycarpus. Illumina sequencing revealed 0.11 million transcripts in both the species with an annotation of 0.09 million (82%) transcripts using BLASTX. Comparative transcriptome analyses on the whole, divulged cues about the wild relative being vigilant and agile. Gene ontology and Mapman analysis depicted higher number of transcripts in the wild relative pertaining to signaling, transcription factors and stress responsive genes. Further, networking between the differentially expressed MapMan bins demonstrated conspicuous interactions between different bins through 535 nodes (512 Genes and 23 Pathways) and 1857 edges. The authenticity of RNA-seq analysis was confirmed by qRT-PCR. The information emanating from this study can provide valuable information and resource for future translational research including genome editing to alleviate varied stresses. Further, this learning can be a platform for in-depth investigations to decipher molecular mechanisms for mitigation of various stresses in the wild relative.Pigeonpea is a major source of dietary protein to the vegetarian population of the Indian sub-continent. Crop improvement to mitigate biotic and abiotic stresses for realization of its potential yield and bridging yield gap is the need of the hour. Availability of limited genomic resources in the cultivated germplasm, however, is a serious bottleneck towards successful molecular breeding for the development of superior genotypes in pigeonpea. In view of this, improvement of pigeonpea can be attempted through transgenesis or by exploiting genetic resources from its wild relatives. Pigeonpea wild relatives are known to be bestowed with agronomic traits of importance; discovery and deployment of genes from them can provide a lucrative option for crop improvement. Understanding molecular signatures of wild relatives would not only provide information about the mechanism behind desired traits but also enable us to extrapolate the information to cultivated pigeonpea. 
The present study deals with the characterization of leaf transcriptomes of Escalating global food demand and the potential impact of climate change have created an increased need for effectual crop improvement programmes. Management of resilience to an array of biotic and abiotic stresses and improvement of productivity through appropriate utilization of available genetic resources requires attention \u20133. ConsiCajanus cajan (L.) Millspaugh], also known as red gram, is the sixth most important grain legume crop grown in the semi-arid tropics of Asia, Africa and the Caribbean [Pigeonpea [aribbean ; India baribbean due to aCajanus cajanifolius Maesen in peninsular India [Cajanus is composed of 34 species [C. cajan is the only cultivated member, while the wild relatives are assigned to secondary or tertiary gene pools [Wild relatives, the ancestors of crop plants have been exposed to natural challenges compared to their cultivated counterparts. Continuous exposure to severe climatic conditions has helped them evolve at the genetic level, making them not only more genetically diverse but also more resilient towards the adverse impacts of climate change and incidence of pests and diseases . It is kar India . The gen species , amongstne pools . Studiesne pools \u201313.Cajanus platycarpus from the tertiary gene pool is one of the wild relatives of C. cajan found growing along the hedges with slender and climbing plant type. The leaves and pods are extremely pubescent and produce rectangular dark brown seeds on maturity. C. platycarpus has the same chromosome number as that of cultivated pigeonpea (2n = 22) [C. platycarpus. In view of this, the major aim of the study has been the characterization of baseline transcriptomes of the cultivated and wild pigeonpea as it can provide fascinating insights on the basic differences in their molecular signatures. Such comparative baseline transcriptomes have been developed in numerous important crops as a step towards broadening the genetic base and better molecular understanding [2n = 22) and is b2n = 22) \u201318. Seve2n = 22) , 19, 20.2n = 22) , 21. It 2n = 22) as human2n = 22) . This st2n = 22) , 25 and standing \u201328.C. cajan vis a vis C. platycarpus.Corroboratory efforts have been made in various crops for extrapolation of the information obtained through molecular characterization of crop wild relatives for revelation of traits endowed to them both under stressed and non-stressed conditions , 29\u201331. C. cajan procured from UAS, GKVK, Bangalore, India and C. platycarpus procured from ICRISAT, Hyderabad, India were used in the present study. Seeds of both the species were sown in plastic pots (14 inch diameter and 60 inch height) and maintained under greenhouse conditions. In order to obtain enough plant material for RNA isolation, at least two plants were maintained per pot. Fully expanded and healthy leaves from 3rd or 4th positions were collected from 45 days old plants. Samples were collected from six different plants separately and made into two individual pools; two such pooled samples were considered as replicates. The samples were frozen in liquid nitrogen and stored at -80\u00b0C until use.Two species of pigeonpea, C. cajan and C. platycarpus leaf samples using Spectrum Plant Total RNA kit (Sigma) following manufacturer\u2019s instructions. RNA samples (5\u03bcg) were later treated with DNase to remove the residual genomic DNA and integrity was checked on 1% formaldehyde agarose gel. 
Total RNA quality control was performed using Agilent 2100 Bioanalyzer and samples with an RNA integrity number (RIN) of 8.0 were used for mRNA purification. mRNA was purified from 1 \u03bcg of intact total RNA using oligodT beads . The purified mRNA was fragmented at an elevated temperature (90 0C) in the presence of divalent cations and reverse transcribed with Superscript II Reverse Transcriptase (Invitrogen Life Technologies) by priming with random hexamers. Second strand cDNA was synthesized in the presence of DNA polymerase I and RNaseH. The cDNA was further cleaned using Agencourt Ampure XP SPRI beads (Beckman Coulter) and Illumina adapters were ligated after end repair and addition of an \u2018A\u2019 base followed by SPRI clean-up. The resultant cDNA library was amplified using PCR for enrichment of adapter ligated fragments, quantified using a Nanodrop spectrophotometer (Thermo Scientific) and validated for quality with a Bioanalyzer (Agilent Technologies). The cDNA library was sequenced using Illumina Hi-Seq 2500 platform with 100 bp read length obtained in paired end module. Paired end FASTQ files were subjected to standard quality control with Phred Score >20 using NGSQC Tool Kit [Total RNA was extracted from Tool Kit to obtaide novo transcriptome assembly. For this study, we chose de novo bruijn graph-based Trinity Assembler [All the HQ filtered paired end libraries were subjected to ssembler with crissembler . Clusterhttp://blast.ncbi.nlm.nih.gov/Blast.cgi?PAGE=Protein). BLAST hits with e-value cutoff \u2264 1e-14 and query coverage of >80% were considered as annotated homologous proteins and AWK script was used for filtering reciprocal best hits. BLAST hits were processed to retrieve associated Gene Ontology (GO) terms describing biological processes, molecular functions, and cellular components. Expression levels of all the transcripts in the individual libraries in replicates were assessed by mapping high quality (HQ) filtered reads using BOWTIE2 [Annotation of the unique transcripts (>200 bp) was performed using BLASTX homology search against NCBI non-redundant (nr) protein database followed by GO categorization using another online server, Wego [Transcripts annotated in both the er, Wego . For Maper, Wego analysisDifferential gene expression analysis of the expressed transcripts was performed using DESeq softwareCajanus spp and pathway clusters.Enriched biological categories along with differentially expressed genes were used as input for Bridge Island Software for identifying key edges that connect genes and biological categories. Statistical scores from differential expression and biological analyses were used as attributes to visualize the network. Output of Bridge Island Software was used as input to CytoScape V 2.8 . The nodIF4\u03b1 gene in each sample was used for normalization. RT PCR conditions were set as: initial denaturation at 95\u00b0C for 5 min, followed by 40 cycles each of 95\u00b0C for 10 sec, 60\u00b0C for 15s and 72\u00b0C for 15s. qRT-PCR was performed in two independent biological replicates with three technical replicates along with no template control. For analysis, C. platycarpus was considered as the test and C. cajan as control. The data was first normalized by subtracting internal reference gene from test and control samples and fold change was calculated [About 2 \u03bcg total RNA was used for cDNA synthesis by Superscript Vilo cDNA synthesis kit (Invitrogen). 
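The annotation filter described above (BLASTX hits retained at e-value ≤ 1e-14 and query coverage > 80%) can be reproduced on tabular BLAST output. The sketch below assumes a custom tabular format that includes query coordinates and query length (e.g. -outfmt "6 qseqid sseqid pident length qstart qend qlen evalue bitscore"); the file name is a placeholder.

```python
import csv

FIELDS = ["qseqid", "sseqid", "pident", "length",
          "qstart", "qend", "qlen", "evalue", "bitscore"]

def filter_hits(path, max_evalue=1e-14, min_coverage=80.0):
    """Keep hits meeting the e-value and query-coverage cutoffs from the text."""
    kept = []
    with open(path) as handle:
        for row in csv.DictReader(handle, fieldnames=FIELDS, delimiter="\t"):
            evalue = float(row["evalue"])
            span = int(row["qend"]) - int(row["qstart"]) + 1
            coverage = 100.0 * span / int(row["qlen"])
            if evalue <= max_evalue and coverage > min_coverage:
                kept.append((row["qseqid"], row["sseqid"], evalue, coverage))
    return kept

# Example usage (hypothetical file name):
# for hit in filter_hits("blastx_vs_nr.tsv"):
#     print(hit)
```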
The diluted cDNA was used as a template in qRT PCR and amplified with gene specific primers transcripts were annotated transcripts with <10 Read count out of the total 114781 transcripts. We also observed 13203 (11.5%) transcripts to be C. platycarpus-specific and 11402 (10%) to be C. cajan-specific were seen to be expressed in more numbers compared to other TFs. Comparatively, WRKY transcription factors showed higher expression in C. platycarpus whereas more number of MYB TFs displayed higher expression in C. cajan. However, highest expression level of both the transcription factors was seen in C. platycarpus. In contrast to MYB, TFs belonging to homeobox-leucine zipper were found to be expressing more in C. cajan were seen to be abundant. Among the transcripts pertaining to calcium mediated signaling, calcium-transporting ATPase, calcium-dependent protein kinase, calcium-binding protein, calmodulin-like protein, and calmodulin-binding protein/transcription activators were displayed in higher levels in C. platycarpus. Nevertheless, calcineurin B-like protein and CBL-interacting serine/threonine-protein kinases were predominantly expressed in C. cajan (S3 Data).With respect to various transcripts belonging to signaling and G- proteins, a total of 6076 transcripts were expressed in both the species. Among them, 977 and 701 transcripts were seen to be specific to tycarpus . It was C. cajan ; S3 DataC. platycarpus along with proline-rich and LysM domain RLKs. Further, probable/putative receptor like kinases and threonine-protein kinases were specifically up-regulated in C. platycarpus whereas, G-type lectin-RLKs (GsRLKs) showed higher level of expression in C. cajan (Leucine-rich repeat receptor-like protein kinases (LLR-RLKs) were seen to be abundantly expressed in both the species compared to other RLKs. Distinctively, LLR-RLKs were predominantly expressed in C. cajan ; S3 DataC. platycarpus. However, L-type lectin-domain containing receptor kinase was equally expressed in both the species. Different isoforms of protein phosphatases were also found to be up-regulated in both the species.Protein kinases and phosphatases are important regulators of proteins in biological systems. Based on differential gene expression analysis, mitogen-activated protein kinase, casein kinase, receptor protein kinase TMK1, wall-associated receptor kinase, and putative receptor protein kinase ZmPK1 were seen to be specifically up regulated in The study illustrated that 48 transcripts that belonged to G-proteins were differentially expressed in both species, out of which, 27 transcripts were found to be up-regulated in the wild relative. Particularly, the up-regulated transcripts included, extra-large guanine nucleotide-binding protein 1-like, EVI5-like protein isoform X4, ras-related protein Rab11A-like and rop guanine nucleotide exchange factor 5-like isoform X1. In the domesticated pigeonpea, transcripts belonging to 22B isoform X1, a member of TBC1 domain family proteins and GTP-binding protein SAR1A were seen to be up-regulated ; S3 DataC. platycarpus and C. cajan specific transcripts being 160 and 169 respectively. However, no major difference was seen in the types of transcripts between both the species and 1857 edges . It was tycarpus . The secTwenty genes homology domain in plants, an intracellular domain common among the identified plant R-proteins [The quality of leaf transcriptomes of both the species was found to be reliable with respect to parameters like N50, number and length of transcripts. 
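Assembly quality above is judged partly on N50 together with transcript number and length. As a minimal illustration of how that metric is obtained, the sketch below computes N50 from a list of transcript lengths; the lengths shown are invented.

```python
def n50(lengths):
    """N50: the length L such that transcripts of length >= L together cover
    at least half of the total assembled bases."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Hypothetical transcript lengths (bp), for illustration only.
lengths = [350, 420, 510, 980, 1200, 1750, 2300, 3100]
print("transcripts:", len(lengths))
print("mean length:", round(sum(lengths) / len(lengths), 1))
print("N50:", n50(lengths))
```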
Based on KEGG analysis, proteins . Similarproteins , plant Tproteins \u201356.In-depth analysis of the transcriptome would be definitely fascinating for a better perception of the species under study. Towards this, MapMan, an advanced bioinformatics tool for comprehensive interpretation of transcriptome data and visualization of functions of associated genes was used. This analysis allowed us to explore gene categories from the large data sets to get meaningful information. Through MapMan, it was evident that significant variation between the two species was conspicuous with respect to genes related to transcription factors (TFs), signaling, secondary metabolites, and stress response. Exploration of specific bins was attempted in order to decipher major similarities as well as differences between the two systems.C. platycarpus expressed more number of WRKY transcripts when compared to its cultivated counterpart depicting its role in regulation of various abiotic and biotic responses [In general, TFs are seen to be involved in various plant processes like growth, development and stress signaling , 57\u201359. esponses , 61. Simesponses indicatiC. platycarpus as a large number of varied kinases, especially receptor like kinases and those involved in calcium-mediated signaling [Further support for this assumption was established based on the analysis of genes involved in cellular signal transduction. Response of plants to various environmental and developmental signals is pertinent for successful growth and reproduction , 64. Proignaling were seeignaling , 67.Cajanus platycarpus. Tocopheral plays a crucial role in wax accumulation in plant leaves. It is a known fact about C. platycarpus that it portrays more pubescence, increased hardening of leaves (sclerophyly) by cuticular wax accumulation, cell wall thickening and lignifications. These traits are expected to prevent plants from insect attack by making them non-preferable, unpalatable and undigestable [Secondary metabolism produces a large number of specialized molecules that are required for the plant to survive in its environment and essential for communicating with other organisms in a mutualistic or antagonistic (eg. to combat herbivores and pathogens) manner. Under baseline or non-stress conditions, it is expected that mutualistic metabolites or those required for normal physiological processes are expressed , 69. Thigestable , 70, 71.C. platycarpus. This variation presented in the study repeatedly depicted intrinsic differences between the two species at transcriptome level thus reconfirming earlier evidences in other categories like TFs and signaling.Furthermore, another interesting feature observed was the variation in the transcripts pertaining to biotic and abiotic stress. Though, the study did not involve imposition of stress, still majority of stress-related gene transcripts were seen to be up-regulated in C. platycarpus were found to be interacting in the developed network.Perfect corroboration was evident from interactions between the differentially expressed genes of specific bins derived from MapMan analysis. 
The inherent variation in the kind and specific function of transcripts between the two species was clear when it was observed that distinct clusters densely packed with transcripts dominated by Therefore, considering different aspects of the study, clear disparity was seen in the transcriptome profiles of the two pigeonpea species, with the wild relative demonstrating skewed expression of transcripts pertaining to signaling, transcription factors and certain biotic stress related genes. However, dynamics of the transcriptome under specific stress conditions will provide intriguing insights and reasoning for the variety of desirable agronomic traits persisting in the wild relative. This learning can be a platform for further investigations with respect to the wild relative in deciphering the hidden molecular mechanisms towards mitigation of various biotic/abiotic stresses.S1 FileTable A. List of primers used in the study; Table B. QC statistics; Table C. Assembly statistics.(DOCX)Click here for additional data file.S1 DataTable A. Complete list of transcripts with annotation; Table B. Differential gene expression analysis of C. platycarpus vs C. cajan (given in separate tabs).(XLSX)Click here for additional data file.S2 Data(XLSX)Click here for additional data file.S3 Data(XLSX)Click here for additional data file.S4 Data(XLSX)Click here for additional data file."} +{"text": "Avian influenza A (H5N6) virus poses a great threat to the human health since it is capable to cross the species barrier and infect humans. Although human infections are believed to largely originate from poultry contaminations, the transmissibility is unclear and only limited information was available on poultry environment contaminations, especially in Fujian Province.Chi-square test or Fisher\u2019s exact probability test was used to compare the AIV and the viral subtype positive rates among samples from different Surveillance cities, surveillance sites, sample types, and seasons. Phylogenetic tree analysis and molecular analysis were conducted to track the viral transmission route of the human infection and to map out the evolutions of H5N6 in Fujian.A total of 4901 environmental samples were collected and tested for Avian Influenza Virus (AIV) from six cities in Fujian Province through the Fujian Influenza Surveillance System from 2013 to 2017. Two patient-related samples were taken from Fujian\u2019s first confirmed H5N6 human case and his backyard chicken feces in 2017. p\u2009<\u20090.05) in the positive rates in samples from different cities, sample sites, sample types and seasons. The viruses from the patient and his backyard chicken feces shared high homologies (99.9\u2013100%) in all the eight gene segments. Phylogenetic trees also showed that these two H5N6 viruses were closely related to each other, and were classified into the same genetic clade 2.3.4.4 with another six H5N6 isolates from the environmental samples. The patient\u2019s H5N6 virus carried genes from H6N6, H5N8 and H5N6 viruses originated from different areas. The R294K or N294S substitution was not detected in the neuraminidase (NA). The S31\u2009N substitution in the matrix2 (M2) gene was detected but only in one strain from the environmental samples.The overall positive rate of the H5 subtype AIVs was 4.24% (208/4903). There were distinctive differences contains supplementary material, which is available to authorized users. 
China\u2019s first H5N6 strain was isolated in March 2014 from a domestic duck in Guangdong Province .Wild migratory birds and waterfowls are often considered as natural hosts of AIVs. Through their migration, the viruses are transmitted globally , 5. The Epidemiological investigations have confirmed that most cases of H5N6 human infections had contact with infected poultry \u201316. LiveTherefore, in order to reduce the risk of human H5N6 infection and the impacts of viral circulation in poultry, and also to trace the origin of the human H5N6 virus, we collected and analyzed the samples from poultry environments and the human H5N6 virus in Fujian Province, China.chi-square test or Fisher\u2019s exact probability test.Data for this study were collected through the Fujian Influenza Surveillance System, which has been collecting at least 40 environmental samples monthly from locations such as LPMs, poultry farms, households and slaughter factories in Fuzhou, Xiamen, Quanzhou, Sanming, Nanping and Zhangzhou cities since 2013. Sample types includes surface wipe samples of poultry cage and slaughter or placing board, poultry fecal samples, poultry cleaning sewage samples and poultry drinking water samples. Samples were kept under 4\u2009\u00b0C to 8\u2009\u00b0C and sent to FJCDC within 24\u2009h for viral detections. Surveillance data from 2013 to 2017 were retrieved and analyzed using SPSS 20.0. The differences of positive detection rates of avian influenza A and the H5 subtype in different network labs, surveillance sites, sample types, and seasons were tested by The two case-related samples were injected into 9-to-10-day-old specific pathogen-free (SPF) embryonated chicken eggs and inoculated at 37\u2009\u00b0C for 72\u2009h in a biosafety level 3 laboratory. Hemagglutination assay and hemagglutination inhibition assay of the allantonic fluids were sequentially used for subtype identifications.Viral RNA was amplified using Qiagen One-Step RT-PCR Kit with 32 pairs of primers . Full genetic sequences of all six Fujian H5N6 strains from 2016 were retrieved from the China Influenza Virus Genetic Sequence Database and the GISAID EpiFlu Database using the ABI BigDye Terminator v3.1\u2009Cycle Sequencing Kit (Life Technologies) following the manufacturer\u2019s instructions. The sequencing primers were M13F (5\u2032\u2013TGTAAAACGACGGCCAGT\u20133\u2032) and M13R (5\u2032\u2013CAGGAAACAGCTATG ACC\u20133\u2032). Sequences were assembled using the SeqMan program of the Lasergene Package . The genetic sequences of both strains were submitted to the Global Initiative on Sharing Avian Influenza Data (GISAID) (The DNAMAN program (version 6.0) was used for the analysis and alignment of the sequencing data. The phylogenetic trees were performed in MEGA version 6 using maximum-likelihood method based on the Tamura\u2013Nei model with 1000 bootstrap replicates . Severalp\u2009<\u20090.05). The highest positive rates of the H5 subtype were observed in Sanming city (9.83%), LPMs (5.26%), and the wipe samples of poultry chopping board (7.66%). The positive rates of H5 subtype were at peak in winter and spring\u00a0(Table The positive detection rates of influenza A virus and the H5 subtype were 31.92% (1565/4903) and 4.24% (208/4903), respectively. 
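The comparison of positive rates described above uses the chi-square test, falling back on Fisher's exact probability test when expected counts are small. The sketch below shows that decision on a hypothetical 2x2 table of H5-positive versus H5-negative samples in two surveillance settings; the counts are invented and do not come from the study data.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical counts: [positive, negative] per surveillance site type.
table = [[40, 720],   # live poultry markets
         [12, 680]]   # poultry farms

chi2, p_chi2, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_chi2:.4f}")

# If any expected cell count is below 5, Fisher's exact test is preferred.
if (expected < 5).any():
    _, p_fisher = fisher_exact(table)
    print(f"Fisher's exact p = {p_fisher:.4f}")
```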
There were statistical differences in the positive rates of influenza A virus and the H5 subtype in different cities, sample sites, samples types and seasons , polymerase basic 1 (PB1), HA, matrix protein (MP) and nonstructural (NS) genes appeared to be derived from an H5N8 virus isolated from a black swan in Hubei Province, China (A/Cygnus atratus/Hubei/HF-1/2016). Their NA and nucleoprotein (NP) gene segments were in the same cluster as the H5N6 viruses that were circulating in Fujian Province during 2016 were closely related to the courtyard chicken feces strain (A/environment/fujiansanyuan/08/2017). Their PA genes were both clustered into the HThe key amino acid mutations had increased the viral mammalian transmissibility, virulence in mammals and antiviral resistance in the human strain and the other 7 environmental strains of H5N6 viruses Table\u00a0.The HA oSince the H5N1 virus was first identified in Guangdong Province, China, 1996, this high pathogenic H5 subtype of AIVs has been continually detected in wild birds and domestic poultry environments . The H5NThe H5N6 viral isolates from the first human case in 2014 carried the same internal genes as the H5N1 virus identified in 2013 , which sInfluenza viruses are carried by migratory waterfowl and therefore spread globally along the avian flyways during migration . H5N6 isThe H5N6 viruses in this study were considered highly pathogenic in poultry due to the amino acids at the HA cleavage site. The affinities of the influenza virus to different sialyl-sugar structures are important determinants of rang and pathogenicity in the viral host . The \u03b1-2Our study has several strengths, including large environmental sample size, viral isolated from Fujian\u2019s first confirmed H5N6 human case and his courtyard chicken feces. However, this study also has some limitations, for example, the signs and symptoms and mortality rates among poultry were not collected.The results of this study indicated that in Fujian Province, the clade 2.3.4.4 of the H5 subtype had become the main circulating AIVs in poultry environments. The patient with H5N6 infection was most likely to contract the virus from contaminated poultry environment. The human strain had genetic reassortment with H6N6, H5N8, and H5N6 viruses. It was still sensitive to Oseltamivir and Zanamivir. Although there has been no outbreak of human infection with H5N6 in Fujian Province, it is of great importance to continue and strengthen the surveillance of the H5Nx virus in poultry environment to monitor the spread and evolution of the virus.Additional file 1:Table S1. Composition of subtypes of AIVs positive specimens in Fujian Province, during 2013\u20132017. H5\u2009+\u2009H7: Both the H5 and H7 subtype influenza virus nucleic acids were detected in the same sample and other subtypes and so on. (DOCX 18 kb)Additional file 2:Table S2. The GISAID isolate ID of H5N6 viruses. (DOCX 16 kb)Additional file 3:Phylogenetic analysis of the H5N6 viruses isolated in Fujian Province. (In red Triangle) Viral strains of human infection with avian influenza A(H5N6) virus in Fujian Province. (In green Square) Viral strains of human H5N6 viruses. (In red Circle) Viral strains of H5N6 viruses isolated from environment sample in Fujian Province. (DOC 415 kb)"} +{"text": "However, several authors have hypothesized that this ancestor may have been the crucian carp (Carassius auratus). 
Previously, we generated an experimental hybrid goldfish (EG) from the interspecific hybridization of red crucian carp \u00d7 common carp . Unlike either parent, EG possessed twin caudal fins similar to those of natural goldfish . The genetic characteristics of EG, as well as the mechanisms underlying its formation, are largely unknown. Here, we identified the genetic variation in the chordin gene that was associated with the formation of the twin-tail phenotype in EG: a stop codon mutation at the 127th amino acid. Furthermore, simple sequence repeat (SSR) genotyping indicated that, among the six alleles, all of the EG alleles were also present in female parent (RCC), but alleles specific to the male parent (CC) were completely lost. At some loci, EG and NG alleles differed, showing that these morphologically similar goldfish were genetically dissimilar. Collectively, our results demonstrated that genetic variations and differentiation contributed to the changes of morphological characteristics in hybrid offspring. This analysis of genetic variation in EG sheds new light on the common ancestor of NG, as well as on the role of hybridization and artificial breeding in NG speciation.Owning to the extreme difficulty in identifying the primary generation (G Carassius auratus, NG) lineages, this conserved architecture has undergone extreme modifications due to artificial selection in NG during ventralization in early embryonic development belong to the same subfamily, Cyprininae. Previous phylogenic studies have shown that NG are closely related to crucian carp \u00d7 male common carp after sequential selective breeding. A stable population of EG was successfully established by self-mating from the hybridization of female red crucian carp , and alleles differed between EG and NG at some loci. Collectively, our results showed that certain genetic characteristics of EG, including the chordin gene mutation and the RCC-aligned SSR alleles, originated from the distant hybridization, and contributed to the observed difference in morphology. The process of EG generation, as well as the genetic characters of this hybrid, shed new light on the common ancestor of NG, as well as on the role of hybridization and artificial breeding in NG speciation.Here, we identified a stop codon mutation at the 127All RCC, CC, and EG were raised in ponds at the State Key Laboratory of Developmental Biology of Freshwater Fish, Hunan Normal University, Changsha, China. NG were purchased at a local market. All experiments were approved by the Animal Care Committee of Hunan Normal University and followed the guidelines of the Administration of Affairs Concerning Animal Experimentation of China.chordin gene between single-tail and twin-tail fish, we isolated and sequenced the 1st to 6th exons of chordin homologues from the embryonic cDNA pools of four fish: single-tail RCC and CC, and twin-tail EG and NG. After self-mating, total RNA was extracted from the gastrula-stage embryos of all four fish using Trizol . The first-strand cDNA of the chordin gene was synthesized using ReverTra Ace . The forward primer 5\u2032-GCGTTACCCATCCAACC-3\u2032 and the reverse primer 5\u2032-TCTGTRTCCGCTTGTGGT-3\u2032 were designed based on CDS of the chordin genes from Carassius auratus (AB874473.1) and Cyprinus carpio (LC092194.1), which were downloaded from GenBank. Each PCR (25 \u03bcL) contained 20 ng of cDNA template, 1.5 mM MgCl2, 0.2 mM of each dNTP, 0.4 \u03bcM of each primer, 1 \u00d7 PCR buffer, and 1.25 U Taq polymerase . 
The cycling conditions were as follows: an initial denaturation at 94\u00b0C for 4 min; followed by 30 cycles of 94\u00b0C for 30 s, 55\u00b0C for 30 s, and 72\u00b0C for 1 min; and a final extension at 72\u00b0C for 10 min. PCR products were separated and purified using 1.2% agarose gels and Gel Extraction Kits , respectively. Purified products were ligated into pMD18-T vectors and transfected into Escherichia coli DH5\u03b1. Positive clones were sequenced and further analyzed using BLAST1 and CLUSTAL W2.To compare the sequences of the Taq Buffer (Mg2+Plus), 1 \u03bcL of 2.5 mM dNTPs, 0.4 \u03bcL of each primer (5 \u03bcM), 0.4 \u03bcL of Taq DNA Polymerase (5 U/\u03bcL), and 14.8 \u03bcL of ddH2O. The PCR cycling conditions were as follows: an initial denaturation at 94\u00b0C for 3 min; followed by 35 cycles of 94\u00b0C for 30 s, a primer-specific annealing temperature for 30 s, and 72\u00b0C for 45 s; a final extension step of 72\u00b0C for 7 min. PCR products were sequenced using capillary electrophoresis on an ABI 3730XL DNA sequencer using BigDye Terminator Cycle Sequencing kits .We randomly selected eight fish of each type for SSR testing. Total genomic DNA was extracted from the fin tissue of each fish with a DNA extraction kit . The microsatellite regions were PCR amplified using six florescent-labeled microsatellite primers : three from the common carp ; two froGenetic distance and genetic polymorphism indexes, including major allele frequency, numbers of genotypes, numbers of alleles, heterozygosity, and gene diversity, were calculated using Popgene32 . SSR genst to 6th exons of chordin homologues were cloned from RCC, CC, EG, and NG. All cloned sequences were identified as chordin and were submitted to GenBank . There was high nucleotide sequence similarity (> 99%) among the chordin genes from all four fish at the 127th amino acid position of the chordin protein, while the single-tail fish had a glutamic acid codon (GAG) .Across the four fish , each of the six amplified SSR loci (121\u2013302 bp) had 1\u20138 alleles . Almost Given the uncertainty surrounding the common ancestor of NG strains, as well as the evolutionary processes underlying their emergence , the forHybridization is an important method of animal breeding because this process increases genetic variation , and becth exon of the chordin gene includes a glutamic codon at the 127th amino-acid position, while the other allele (chordinAE127X) includes a stop codon at the same position. The allele chordinAE127X is predicted to encode a truncated protein that contributes to the formation of the twin-tail (chordin gene gave rise to a twin-tail phenotype (0) of NG, we identified the twin-tail-associated mutation at the 127th amino-acid position in the chordin gene: a stop codon (TAG) instead of a glutamic acid codon (GAG). This mutation was identical to that found in NG. In contrast, the chordin genes of RCC and CC had a wild-type glutamic acid codon at the same position have been broadly used in studies of molecular evolution . These s1\u2013F6, within 6 years) . Here, oarieties . Thus, oThe original contributions presented in the study are included in the article/The animal study was reviewed and approved by the Animal Care Committee of Hunan Normal University and followed the guidelines statement of the Administration of Affairs Concerning Animal Experimentation of China. 
All samples were raised in natural ponds, all dissections were performed with 100 mg/L MS-222 , and all efforts were made to minimize suffering.JW and WH conceived the research, analyzed the data, and wrote the manuscript. JZ, LL, GZ, TL, CX, and MC performed the research and writing-reviewed the manuscript. SL provided substantial contributions to conception and coordination. All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Croatian Medical Journal. The authors state that during the COVID-19 pandemic, the medical information system has been hit by a storm of open-access pandemic-related manuscripts through preprint platforms, but also through accelerated review processes.To the Editor: We have read with interest the editorial by \u0160kori\u0107 et al in the Amaximize the efficiency of peer review, ensuring that key work related to COVID-19 is reviewed and published as quickly and openly as possible\u201d indicated that 60% of studies were preprints, meaning that they reported non-peer-reviewed information (However, unprecedented scientific efforts have generated a torrent of publications, many of which are not peer-reviewed. It is even possible that indexed articles with a digital object identifier did not undergo peer review. A Reuters analysis of some of the most important servers (Google Scholar, ormation .This raises some concerns. First, non-peer-reviewed preprints could be cited by peer-reviewed articles published in legitimate peer-reviewed journals, possibly leading to the spread of misinformation. Second, peer review quality may be compromised by rushing the process and assigning an excessive workload to peer reviewers, thus putting them under pressure and inducing psychological stress. Third, the overabundance of opinion papers and editorials hinders the discovery of valuable raw data and medical insight. Finally, some data, ideas, and content, preliminary and peer-reviewed, are constantly being disproved, outdated, and invalidated, which makes them candidates for corrections or retractions following post-publication peer review. There is a real possibility that uncontrolled and potentially misleading information will reach the general public, directly or via the media, leading to incorrect, sometimes fatal, responses to the pandemic. In this scenario with many ethical challenges, scientific progress could be hampered, allowing predatory journals and scholars to exploit open access and possibly compromise public health and academic integrity.Most of all, the spreading of misinformation \u201cinfodemic\u201d through the social and traditional mass media poses a serious problem for public health systems. Restrictions imposed by policymakers and governments, such as lockdown and social distancing measures, are based on advice given by governments\u2019 scientific advisory committees, which rely on scientific findings. However, the best scientific evidence available on COVID-19 is still scarce, making the decisions by governments susceptible to bias. 
Governments should not just base their decisions related to COVID-19 on scientific evidence , they sh"} +{"text": "What exactly is the short-time rate of change (growth rate) in the trend of Are global events such as the natural climate phenomena, the major economic recessions or the COVID-19 global crisis and economic shutdown visible in the continuous monitoring of carbon dioxide in the atmosphere? This is a timely question that probably can not be given a definite answer, but which nevertheless is worthwhile to study and to discuss. This query also leads directly to the follow up question regarding how the short-time rate of change in the trend of 5. To this end, continuous monitoring was started by David Keeling at NOAA\u2019s Mauna Loa Observatory (MLO) in Hawaii, which gives access to well-mixed clean air at 3.5\u20134 km altitude. These measurements later became known as the \u201cKeeling curve\u201d, see e.g.6.The continuous monitoring of the atmospheric greenhouse gases as well as their historical, pre-industrial levels continues to provide an irreplaceable source of data, information and analyses that are becoming increasingly important for policymakers in their quest to mitigate the worst scenarios concerning the global warming, see e.g.7. In this paper, we will revisit some of the basic regression analysis techniques and perform a statistical analysis based on weekly data from the Keeling curve covering the period from 1974-05-19 to 2020-03-29, cf.8 as well as the master\u2019s thesis9. The purpose of this study is to validate a simple short-time estimate of the rate of change (or growth rate) in the trend of the Trends and analyses of local as well as of global carbon dioxide cycles at various time-scales are continuously monitored and analysed by the National Oceanic and Atmospheric Administration (NOAA), see e.g. the curve fitting methods described in7. The difference to the NOAA approach, however, is that we are using here a short-time data window of only 2 years and no postfiltering of the residuals. This enables us to perform a statistical analysis of the residual errors and to establish the corresponding estimation accuracy. We will demonstrate that the resulting residual errors based on a 2-year data window are in good agreement with the statistical assumption that the weekly data from 1974 to 2020 are corrupted by additive, uncorrelated Gaussian noise. Indeed, most of the daily variability in this data is caused by weather systems that bring different air masses to MLO so that there is memory from day to day, but not from week to week. There can also be variability on a seasonal scale, caused for example by droughts, or an early spring, etc., and which will then be seen as local changes in the short-time trend. We will show that the regression analysis that is considered here is perfectly consistent trend and its rate of change. As a reference, the simple approach can always be validated by a comparison with any of the more sophisticated linear regression techniques that are available. In this study, we consider a well-established linear regression analysis based on 3 polynomial coefficients and 8 Fourier series coefficients (4 yearly harmonics) followed by smoothing (low-pass filtering) of the residuals, as is employed in10. Hence, a regularization, or smoothing, is necessary. 
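To make the ill-posedness of differentiation concrete, the following synthetic sketch (assumed trend, seasonal amplitude and noise level; not the authors' data or code) contrasts naive week-to-week differencing with differencing of a centered 1-year average over a 2-year window.

```python
# Synthetic illustration only: why a smoothed, windowed estimate is needed.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(52 * 46) / 52.0                        # ~46 years of weekly samples, in years
y = (330 + 1.6 * t + 0.01 * t**2                     # smooth trend (ppm), assumed
     + 3.0 * np.sin(2 * np.pi * t)                   # annual cycle (ppm), assumed
     + rng.normal(0.0, 0.2, t.size))                 # weekly noise, sigma = 0.2 ppm, assumed

# Naive derivative: week-to-week differences scaled to ppm/year; the seasonal
# cycle and amplified noise dominate the ~2 ppm/year signal.
naive_rate = np.diff(y) * 52.0
print(f"naive differencing:     std = {naive_rate.std():.1f} ppm/yr")

# Regularized: centered 1-year sliding average, then values one year apart
# (a 2-year data window), giving a stable growth-rate estimate.
trend = np.convolve(y, np.ones(52) / 52.0, mode="valid")
reg_rate = trend[52:] - trend[:-52]
print(f"averaged + differenced: mean = {reg_rate.mean():.2f}, std = {reg_rate.std():.2f} ppm/yr")
```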
In a discrete-time linear regression analysis with a linear least squares (pseudo-inverse) solution, this ill-posedness will manifest itself as an increasingly ill-conditioned system matrix when the observation interval decreases. Obviously, the ill-conditioning also increases with an increasing number of regression parameters . To obtain error estimates in such inverse problems, it is common to employ the Singular Value Decomposition (SVD) as well as very effective L-curve techniques where the residual error is plotted against some regularizing parameter12. Due to the Gaussian assumption referred to above, we can take a more direct approach here and consider the Maximum Likelihood (ML) estimation and the associated Cram\u00e9r\u2013Rao Lower Bound (CRLB)13 of the pertinent parameter as the objective for the L-curve analysis. The subsequent analysis will show that a stable estimate of the rate of change can be obtained with the 2-year window yielding a relative estimation error of less than 5 9. Considering the long term data used in this study (1974\u20132020), it is finally worth noting that the error estimates that have been presented here are very much coherent with the 1.The question about the smallest useful window length in the short-time estimate of the rate of change is an interesting, subtle issue. It is noted that the task to estimate the trend of the Keeling curve by averaging of continuous-time data is a well-posed problem. However, the task to differentiate the data is an ill-posed problem in the sense that the derivative does not depend continuously on the given dataLet mentclass2pt{minim9. Hence, the averaging which is defined for uniformly sampled discrete frequencies14, p.\u00a0456. The DFT is usually computed by using an algorithm referred to as the Fast Fourier Transform (FFT). Similarly, the differencing . Let Define a time window of 7, but other model orders can easily be incorporated and investigated in a similar way. The covariance matrix of the noise is given by Consider now the following statistical model13. The corresponding Cram\u00e9r\u2013Rao Lower Bound (CRLB) is given by13. Here, we are particularly interested in the Cram\u00e9r\u2013Rao lower bound of the two parameters The Fisher Information Matrix (FIM) and the Maximum-Likelikelihood (ML) estimate for this situation are given byal model is lineaDefine the residual errorBased on , it is r results depend o results is emploThe estimates can now entclass1pt{minima^0 using for a \\d7. The NOAA estimate shown here is based on a long-term trend and where a smoothing of the residual data has been employed with a Full Width at Half Maximum (FWHM) of 1.4 year to match the amplitude of the short-time estimates.In Fig.\u00a0^1 using for the entclass1pt{minimaTo validate the Gaussian assumption above, we perform a statistical analysis of the residual error fined in . The 2-yfined in is estimBased on , the varnce with .Figure 42. For 2, and which agrees perfectly with the previously obtained value.Finally, we evaluate the Gaussian assumption by stacking all the ollowing we obtaiM. All the weekly data between 1974 and 2020 have been used to obtain the mean value, and as the variance of the noise we have employed the estimate 2.To obtain a unitless measure of the estimation errors we consider the following normalized Cram\u00e9r\u2013Rao lower bounds (relative errors)mentclasspt{minimagiven by , and whe9. 
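A hedged sketch of the windowed regression just described, with 3 polynomial coefficients plus 8 Fourier coefficients (4 yearly harmonics) fitted by linear least squares to weekly data, illustrates how the standard error of the rate parameter and the condition number of the system matrix grow as the observation window shrinks; all numerical values are assumptions for illustration, not the authors' code.

```python
# Under the Gaussian assumption the least-squares fit is the ML estimate, and
# sigma^2 * (A^T A)^{-1} gives a CRLB-style variance for the rate parameter.
import numpy as np

def design_matrix(t):
    cols = [np.ones_like(t), t, t ** 2]              # quadratic trend (3 terms)
    for k in range(1, 5):                            # 4 annual harmonics (8 terms)
        cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
    return np.column_stack(cols)                     # shape (N, 11)

rng = np.random.default_rng(1)
sigma = 0.2                                          # assumed weekly noise std (ppm)
for weeks in (104, 78, 52, 26):                      # shrinking observation window
    t = (np.arange(weeks) - weeks / 2) / 52.0        # years, centred on the window
    y = 400 + 2.4 * t + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, sigma, weeks)
    A = design_matrix(t)
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)    # ML / least-squares estimate
    se_rate = sigma * np.sqrt(np.linalg.inv(A.T @ A)[1, 1])
    print(f"{weeks:3d} weeks: rate = {theta[1]:7.2f} ppm/yr (true 2.40), "
          f"se = {se_rate:6.2f}, cond(A) = {np.linalg.cond(A):.1e}")
```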
Nevertheless, provided that the window length M (the regularizing parameter) is large enough, we can indeed have a stable estimate of the derivative The relative error in the estimate of the rate of change (the derivative) It may be of interest to compare or to correlate the estimated rate of change in the trend of the Keeling curve with natural phenomena as well as the major anthropogenic activities. Of course, any conclusions based on such a comparison must be treated with great care considering the vast complexity of the Earth system. Nevertheless, to demonstrate the application of the estimation of trends in y-axis, together with estimated values of global 15 which is presented here in units of y-axis. It is noted that the last update of CDIAC is from 201415. The values for 2015\u20132018 have been extrapolated from BP review by multiplying the increase factors from BP review to the carbon numbers in CDIAC. The values for 2019\u20132020 are assumed to be constant. The plot in Fig.\u00a05. Many people are confused about emissions and 16. That is why the trend of In Fig.\u00a0y-axis. The NOAA growth rates of Fig.\u00a017. These SSTs are plotted here as y-axis of the figure. In the same figure, we have also plotted year-to-year differences (dCDIAC) of the CDIAC data that was shown in Fig.\u00a019. If the emission change is for 1 year, then there has not been enough time for the anomaly to spread uniformly through the atmosphere. The Southern Hemisphere (SH) has only partially caught up with the NH where the emissions change originated, nor has the stratosphere. So, in 1 year the emissions are effectively diluted into 70\u201380 In Fig.\u00a0entclass1pt{minima20. It may also be noted that the eruption of Pinatubo was estimated to produce some extra 50 The changes seen in dCDIAC seem to correlate somewhat with the major economic recessions in 1980\u20131983, 1990\u20131993, 2001\u20132002 and 2008\u20132009. However, the corresponding changes in dCDIAC are too small to be reliably connected to the rather strong fluctuations that are seen in the estimated growth rates 21. In Fig.\u00a0It is observed that the statistical analysis presented in this paper is restricted to one station only, the MLO in Hawaii. The main reason for choosing this particular station is the uniqueness of the extensive record of weekly data that is available since 1974, and which have made it possible to assess the Gaussian statistics with very high precision. Nevertheless, the simple estimation strategy expressed in is readiIt may finally be of interest to consider the potential impact that the COVID-19 crisis may have on the present and future We have proposed a simple strategy for estimating the short-time rate of change in the trend of the Keeling curve. This estimate is based on a centered 1-year sliding average to estimate the trend, and a corresponding centered 2-year sliding data window with differencing to determine its rate of change. To validate this estimator, we have compared it to a more sophisticated regression analysis based on a combined Taylor and Fourier series expansion and found a very good agreement based on 3 Taylor coefficients, 8 Fourier series coefficients (4 yearly harmonics) and a 2-year data window to determine the parameters. A statistical analysis based on weekly data for the years 1974\u20132020 has shown that the model errors can be considered to be uncorrelated and identically Gaussian distributed. 
The Gaussian assumption justifies the use of standard formulas for Maximum Likelihood (ML) estimation, Fisher information and the related Cram\u00e9r-Rao Lower Bounds (CRLB). An L-curve technique based on the CRLB has been used to study the accuracy and stability of the regression analysis. With the regression model that has been studied here, it is found that the limit of stable inversion for the rate of change in the trend of the Keeling curve is about 50\u201360 weeks, and that the 2-year window of 104 weeks yields a relative error less than 5"} +{"text": "The anti-angiogenic effect of Apatinib was explored by bioassays in human umbilical vein endothelial cells (HUVECs), including cell migration, invasion and apoptosis, tube formation, and wound healing. Further experiments showed that Apatinib inhibited tumor microangiogenesis to achieve the aims of inhibiting tumor growth and recurrence by means of down-regulating the phosphorylation of the RAF-mek-erk, PI3K-akt and P38MAPK pathways. The antitumor growth and anti-angiogenic effect of Apatinib was further validated by the animal experiment. Taken together, we concluded that Apatinib inhibits the angiogenesis and growth of liver cancer by down-regulating the PI3K-akt, RAF-mek-erk and P38MAPK pathways, and has a stronger inhibitory effect in hypoxic environments. Combining TAE with adopted iodized oil containing Apatinib has a stronger inhibitory effect in VX2 liver tumor growth and metastasis, which suggesting such combinations may provide a new target and strategy for interventional therapy of liver cancer.Transcatheter arterial embolization (TAE) plays an important role in clinical liver tumor therapy. However, hypoxia after TAE limit the medium-long term efficacy of TAE. Thus, in our study, we explored the treatment effect and mechanism of combining transcatheter arterial embolization with adopted iodized oil containing Apatinib on suppressing tumor growth and metastasis. We simulated the changing of tumor microenvironment before and after TAE both Due to the concealment of the onset of liver cancer, most patients are already in the advanced stage at the time of diagnosis. Transcatheter arterial embolization (TAE) is playing a major role in the therapy for patients who are not candidates for surgery4. However, although TAE therapy prolongs the survival time of many patients5, the inability of conventional TAE therapy with iodine oil to effect thorough necrosis of the tumor cells should not be ignored7. Incomplete necrosis of the tumor could aggravate hypoxia in the tumor9. And hypoxia can further activate angiogenesis and tumor growth, which often associated with the tumor metastasis and recurrence, and is the critical factor limiting the treatment effect of TAE11. Hence, it is mightily vital to explore new strategies for HCC treatment.Hepatocellular carcinoma (HCC) is garnering more research and clinical attention as the third-leading cause among cancer patients13. Tumors often remain hypoxic due to decreased blood flow, leading to the sustained overproduction of vascular endothelial growth factor (VEGF) after TAE therapy15. VEGF is a critical factor that induces developmental angiogenesis via VEGF receptor (VEGFR)-dependent signaling, which in turn leads to tumor recurrence and metastasis17. VEGFR2 is mainly expressed on endothelial cells, mediating the angiogenic effects of VEGF19. However, this neovascularization produces abnormal leaky vessels that produce interstitial hypertension, edema and tumor hypoxia. 
This process forms a vicious circle of nonproductive angiogenesis, tumor growth and hypoxia after TAE20. Accordingly, suppression of the VEGF signaling pathway employing VEGFR2 tyrosine kinase inhibitors has become a hopeful therapeutic strategy to decrease excessive angiogenesis in HCC after TACE.Malignant angiogenesis is believed to be the most crucial aberrantly activated pathway; this belief is highlighted by the nature of HCC as a hypervascular tumor22. Apatinib has antineoplastic and antiangiogenic activities in gastric cancers, colon, breast, non-small cell lung and so on24.Apatinib (YN968D1), a novel tyrosine kinase inhibitor, is a highly selective VEGFR-2 inhibitor, with a binding affinity 45 times that of sorafenibHerein, we hypothesize combining iodized oil containing Apatinib with TAE has its potential activities in treating HCC. Owing to our limited understanding of the molecular mechanisms of Apatinib for HCC treatment and the Apatinib-mediated downstream pathways in HCC cells, further detailed studies are needed to elucidate the mechanism of Apatinib in HCC. Together, these results will be of great benefit to the strategies against human HCC.2 (normoxic conditions), and hypoxic conditions were simulated 24\u2009h by incubation in RPMI 1640 medium containing 200\u2009\u03bcM CoCl225. Primary antibody was obtained from Bioss (China); second antibody was obtained from Aspen.HepG2 was maintained in Dulbecco\u2019s modified Eagle\u2019s medium with 10% fetal bovine serum (FBS), 100\u2009mg/mL penicillin, and were cultured at 37\u2009\u00b0C with 21% OHuman umbilical vein endothelial cells were cocultured with HepG2 cells under normoxic or hypoxic conditions on Matrigel. The study design included 8 treatment groups: Groups A, B, C, and D were normoxic matrices with different concentrations of Apatinib , and groups E, F, G, and H were hypoxic matrices with different dose of Apatinib . All experimental methods were ratified by the Animal Experiment Committee of Institute for Huazhong University of Science and Technology. And we confirmed that all experiments were performed in accordance with relevant guidelines and regulations.HepG2 cells were cultured and plated into 96-well flat bottom plates (100\u2009mL per well) for 24\u2009h under normoxic or hypoxic conditions. The cells were then supplemented with Apatinib at concentrations of 0, 0.1, 1, 10 or 50\u2009\u00b5M. After 48\u2009hours of treatment, CCK-8 solution was added and incubated at 37\u2009\u00b0C for 4\u2009hours. Cells were cultured for 1\u20134\u2009h under the above conditions, and measured using microplate reader with absorbance at 450\u2009nm.5 cells was added to each well. After the cells are covered with the bottom of the well, the cells are crossed with the gun head perpendicular to the horizontal line of the marker pen. The smeared cells were gently rinsed with PBS and added to the medium. The cells were cultured in a cell culture incubator, and the well plates were taken out at 0 and 24\u2009hours for photographing under an inverted microscope (Olympus).First draw a parallel line on the back of the 6-well plate with a marker pen and traverse the hole. 2\u2009mL of medium containing approximately 5\u2009\u00d7\u2009105 cells was taken into the transwell chambers . Subsequently, 500\u2009\u00b5L of complete medium containing 10% FBS was added to the 24-well plate, and the chamber was placed in the plate. Incubate for 48\u2009h at 37\u2009\u00b0C in a CO2 (content 5%) incubator. 
The stain was prepared and stained and photographed on the non-cell inoculated side.200\u2009ul of serum-free cell suspension containing 104 HUVECs in 100\u2009\u00b5L of complete medium containing different concentrations of apatinib were plated on the solidified Matrigel suspension. Twelve hours after the HUVECs were overlaid, the cocultures incubated on Matrigel were inspected and photographed at 200\u2009\u00d7\u2009magnification. Check 5 separate fields per well and calculate average tubes/field.Via a precooled tip, 50\u2009\u00b5L of liquid Matrigel was embedded into a 96-well plate at 4\u2009\u00b0C. A total of 2\u2009\u00d7\u200910PBS was prechilled at 4\u2009\u00b0C and diluted with an appropriate amount of binding buffer. Then, the cells were collected in a flow tube and rinsed twice with PBS. Cell pellet resuspended in 300\u2009\u00b5l Binding Buffer. Subsequently, 5\u2009\u00b5L of annexin V-FITC was added, mix and incubate for 10\u2009min in the dark. 5\u2009\u00b5L PI was added, mixed and incubated. On-board detection within 1\u2009h, FITC excitation 494\u2009nm emission 520\u2009nm, PI excitation 493\u2009nm emission 636\u2009nm.Cells were resuspended, centrifuged, and the cell pellet was collected. Add Protease Inhibitor Cocktail(ROCHE) and cell protein extraction reagent, repeatedly beat upon with a pipette to ensure complete cell lysis. After centrifugation at 13,000\u2009g for 5\u2009min at 4\u2009\u00b0C, the supernatant was collected, which was the total protein solution. The sample protein concentration was determined using a BCA protein concentration assay kit , and ensure that the total amount of protein in each sample is 40\u2009ug. Separation of proteins by SDS-PAGE electrophoresis. And transferred onto PVDF membranes . Add primary and secondary antibodies for antibody incubation. The freshly prepared ECL mixed solution was added dropwise to the protein side of the membrane, and exposed in a dark room. The film was scanned and archived, and the optical density of the target bands was analyzed by the AlphaEaseFC software processing system.HUVECs were added on sterilized coverslips in 6-well plates and treated according to the groups. 4% paraformaldehyde was fixed for 30\u2009min and permeabilized using 3% H2O2 for 20\u2009min. The cells were covered with a 5% BSA diluted primary antibody and incubated in a humidifying box overnight. Following incubation with a FITC- or Cy3-labeled goat anti-rabbit secondary antibody for 50\u2009min, 50\u2013100 ul of DAPI stain was added dropwise to each well and incubated for 5\u2009min. Add anti-fluorescence quencher to the cells, cover the slides, and observe under a fluorescence microscope.The 2.5\u20133.0\u2009kg adult New Zealand white rabbits used in this study were all purchased from the Animal Experimental Center of Huazhong University of Science and Technology. All experimental plans were ratified by the Animal Experiment Committee of Institute for Huazhong University of Science and Technology.The study design included 4 treatment groups and ten rabbits per group: the sham (NS) group ; the AI group (treatment with both 3\u2009mL of lipiodol containing 50\u2009mg of Apatinib and TAE); the I group (treatment with both 3\u2009mL of lipiodol supplemented with a gelatin sponge); and the A group .3 VX2 tumor tissue, take from tumor-bearing rabbits, was implanted into left medial lobe of the liver approximately 0.5\u2009cm in depth, and then covered by gelatin. 
At 16 days after implantation, the tumor sizes were measured by CT scan and the rabbits carrying tumors of 10- 20\u2009mm in diameter were used for subsequent experiments.The rabbit\u2019s abdominal cavity was cut about 4\u2009cm to reveal the left medial liver lobe. A piece of 1\u2009mmAll rabbits were anesthetized, and femoral artery was dissected and catheterized. Then, catheter was super-selectively inserted to the HCC feeding artery from the femoral artery under the digital subtraction angiography (DSA). Subsequently, the drugs were injected into the catheter according to the different groups. Finally, the femoral artery was sutured, and all rabbits were intramuscularly injected with penicillin daily for three days.1) and transverse diameter (D2) of tumor were separately recorded and Tumor volume was calculated by using the following modified formula for elliptic volume: V\u2009=\u2009D1\u2009\u00d7\u2009D22/2. The tumor growth rate was calculated by using the following formula: V7/V0 * 100%Tumor growth was monitored with contrast-enhanced CT on days 0 and 7. After scanning, all the images acquired were processed by the Syngo Fastview image processing system, and the size, location, shape and the presence of necrosis and intrahepatic metastasis of the implanted tumor were analyzed by twos senior doctors of radiology department. The maximum diameter Fig.\u00a0. The dat05) Fig.\u00a0. The exp05) Fig.\u00a0. It confWe can see from the figure that a concentration-dependent manner in which Apatinib inhibits basal tube formation and HUVEC proliferation agents from 0 to 50\u2009\u03bcM, especially in the hypoxic environment (P\u2009=\u20090.000\u2009<\u20090.001) Fig.\u00a0. The apoin vivo experiments also proved this. Immunohistochemical staining for CD31 expression, which is known to be highly correlated with tumor angiogenesis, was carried out to investigate the expression of tumor MVD. Consistent with the vitro model, the CD31 staining data revealed that the AI group had a significantly lower tumor MVD than the other three groups (P\u2009<\u20090.01) Fig.\u00a0, which sWound-healing and migration assays were performed for evaluating the control effect of Apatinib on migration and invasiveness of HUVECs. Apatinib showed inhibition of HUVECs at a very low dose (1\u2009\u00b5M). Subsequently, after HUVECs treated with 10 and 50\u2009\u03bcM Apatinib for 24\u2009h, the invasion reduced by 43% and 56%, respectively, under normoxic conditions, and by 49% and 79%, respectively, under hypoxic conditions Fig.\u00a0. What coThe pathway downstream of VEGF-VEGFR, such as PI3K-Akt, RAF-MEK-ERK and P38 MAPK, also mediate metatasis and proliferation of endothelial cell. To confirm whether the pathway was involved in the antiangiogenic responsiveness of Apatinib, the phosphorylation of some kinases were inspected by Western blot analysis. The experimental results revealed that the phosphorylation levels of ERK, Akt, PI3K and P38 were decreased in HUVECs after treated with Apatinib Fig.\u00a0. ApatiniThe blockade of these kinases by Apatinib was so drastic that we further verified the reliability of the results by immunofluorescence. The immunofluorescence results indicated that Apatinib potentially exerted its inhibitory effects on HUVECs by decreasing activity in the PI3K-Akt, RAF-MEK-ERK and P38 MAPK signaling pathways Fig.\u00a0.26. But because of recurrence and metastasis due to the highly hypoxic microenvironment after TAE, the outcomes remain modest. 
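For reference, the elliptic tumor-volume and growth-rate formulas quoted in the methods above (V = D1 × D2²/2 and GR = V7/V0 × 100%) can be written as a small helper; the example diameters are hypothetical, not measurements from the study.

```python
# Minimal implementation of the volume and growth-rate formulas from the methods.
def tumor_volume(d1_mm: float, d2_mm: float) -> float:
    """Modified elliptic volume (mm^3) from maximum (d1) and transverse (d2) diameters."""
    return d1_mm * d2_mm ** 2 / 2.0

def growth_rate(v_day7: float, v_day0: float) -> float:
    """Tumor growth rate (%) between the day-0 and day-7 CT measurements."""
    return v_day7 / v_day0 * 100.0

v0 = tumor_volume(14.0, 11.0)   # hypothetical day-0 diameters (mm)
v7 = tumor_volume(19.0, 15.0)   # hypothetical day-7 diameters (mm)
print(f"V0 = {v0:.0f} mm^3, V7 = {v7:.0f} mm^3, GR = {growth_rate(v7, v0):.0f}%")
```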
It is necessary to further study the effect of interventional embolization on the angiogenesis of HCC and the corresponding intervention for the changes of angiogenesis after embolization. Up to now, there is no report on the use of adopted iodized oil containing VEGFR-2 inhibitor Apatinib in interventional therapy of hepatocellular carcinoma. Therefore, we propose a new concept of interventional embolization combined with adopted iodized oil containing Apatinib targeted therapy of liver cancer. The results show that this new administration protocol successfully interrupted cell migration and the recurrence of VX2 liver tumors.TAE, as mentioned above, has been widely used to treat advanced HCC, and there was no difference between the TAE with or without doxorubicin15. Realizing the limitations of TAE due to hypoxia, more and more studies have explored new ways of vascular normalization to treat HCC. Thus, the antiangiogenic impact of Apatinib was further validated in HUVECs by bioassays, including cell migration, invasion and apoptosis, tube formation, and wound healing assays. These results revealed that Apatinib inhibits HUVEC migration and invasion, especially in hypoxic environments. The results of vivo experiments also confirmed that there were significant differences between the Group A and Group NS in tumor growth rate and MVD. Comprehensively, our results indicate that Apatinib has great potential as a pharmaceutical treatment for human HCC by inhibiting cell migration, invasion and survival.Angiogenesis, a hallmark of tumor development, is essential for cancer invasion and migration28. Moreover, the interaction between VEGF and VEGFR2 supports HCC cell growth and migration through an angiogenesis-independent antiapoptotic pathway31. However, the drugs that block VEGFA, which is considered a major VEGF involved in angiogenesis, have not achieved tiptop results4; therefore, Apatinib, a potent VEGFR-2 inhibitor, is considered. In this study, we had simulated the changing of tumor microenvironment before and after TAE both in vitro and vivo models. We found out that overproduction of VEGF and HIF-1\u03b1 after embolization, and there are no significant difference between-group AI and I. But we demonstrated that Apatinib mixed with lipiodol to enter the tumor through the hepatic artery significantly reduced the MVD. In other words, Apatinib plays an antiangiogenic role after interventional embolization, blocking the regeneration of blood vessels. And this new administration protocol could significantly suppress the GR of VX2 liver tumors; this effect was further confirmed by the inhibition of HepG2 cell proliferation by Apatinib in the CCK-8 assays. Interestingly, we found that Apatinib was more potent against HepG2 cell proliferation in a hypoxic environment than in a normoxic environment, in a dose-dependent manner. Thus, we concluded that Apatinib has a direct toxic effect on HCC cells, that Apatinib can effectively inhibit tumor cell proliferation in a concentration-dependent manner, especially in the anoxic environment. And this is the reason why Apatinb can work efficient with TAE.Angiogenesis is critical for the proliferation and metastasis of HCC, and a high level of MVD, which is demonstrated a harmful prognosis marker for HCC patients and related to a reduction in patient survivalWhat is the major gap in the knowledge about TAE is that the incomplete understanding of the mechanism between antiangiogenesis and tumor growth. 
Insight into this mechanism represents a necessary foundation on which to increase the effectiveness of TAE treatment paradigms. HIF-1α is elevated after hypoxia and triggers the VEGF/VEGFR pathway, thereby inducing the expression of genes involved in cell proliferation and angiogenesis. Therefore, exploring the effect of Apatinib on these downstream pathways is very important. The PI3K/Akt signaling pathway, overactivated in many malignant tumors, plays a vital role in the cellular processes that drive angiogenesis in endothelial cells. In addition to the PI3K/Akt signaling pathway, the VEGF/VEGFR pathway activates endothelial cell proliferation and metastasis by inducing other downstream kinases, such as P38, ERK and FAK. Indeed, the Western blotting results demonstrated that Apatinib downregulated the protein levels of p-ERK, p-Akt, p-PI3K and p-P38 in a concentration-dependent manner, especially in a hypoxic environment, suggesting that the antiangiogenic effect of Apatinib arises from blocking the activation of these kinases. In summary, Apatinib can effectively block the VEGF-VEGFR pathway by downregulating the activity of the RAF-MEK-ERK, PI3K/Akt, and P38 MAPK signaling pathways, which contributes to inhibiting the growth, invasion and migration of the residual tumor in the anoxic microenvironment left after embolization. These findings make the combination of TAE with the VEGFR-2 inhibitor Apatinib a promising potential therapy for HCC. Our work also extends the study of interventional treatment of liver cancer into the field of the tumor microenvironment and will hopefully provide a new target and strategy for interventional therapy of liver cancer. Supplementary information."}
{"text": "Prospective studies investigating risk factors for low back pain (LBP) in youth athletes are limited. The aim of this prospective study was to investigate the association between hip-pelvic kinematics and vertical ground reaction force (vGRF) during landing tasks and LBP in youth floorball and basketball players. Three hundred and eighty-three Finnish youth female and male floorball and basketball players (mean age 15.7 ± 1.8 years) participated and were followed for 3 years. At the beginning of every study year the players were tested with a single-leg vertical drop jump (SLVDJ) and a vertical drop jump (VDJ). Hip-pelvic kinematics, measured as femur-pelvic angle (FPA) during SLVDJ landing, and peak vGRF and side-to-side asymmetry of vGRF during VDJ landing were the investigated risk factors. Individual exposure time and LBP resulting in time-loss were recorded during the follow-up. Cox's proportional hazard models with mixed effects and time-varying risk factors were used for analysis. We found an increase in the risk for LBP in players with decreased FPA during SLVDJ landing. There was a small increase in risk for LBP with a one-degree decrease in right leg FPA during SLVDJ landing. Our results showed no significant relationship between risk for LBP and left leg FPA, vGRF or vGRF side-to-side difference during landing tasks. Our results suggest that there is an association between hip-pelvic kinematics and future LBP. However, we did not find an association between LBP and vGRF. In the future, the association between hip-pelvic kinematics and LBP occurrence should be investigated further with cohort and intervention studies to verify the results from this investigation. Prognosis, level 1b.
Findings Based on the results of this study, peak vGRF is a poor risk factor for LBP in youth team sport players. Hip-pelvic kinematics are associated with increased risk for LBP; smaller angle between the femur and pelvis increases the risk for all LBP and non-traumatic gradual onset LBP.Implications One cannot discriminate players with future LBP based on the femur-pelvic angle during SLVDJ landing alone. The association between hip-pelvic kinematics and other movement patterns, such as trunk kinematics, and risk for LBP in athletes merits further investigations.Caution The data recording and statistical analyses in this study did not take into account the temporal nature of physical abilities during the follow-up nor did it include psychosocial factors. Statistical power might not have been enough to reveal small to moderate associations. The results should be verified by future cohort and intervention studies.Back pain is common among youth athletes . Our prePrevious studies investigating intrinsic risk factors for LBP in youth have focused mostly on lower extremity and trunk muscle strength and endurance, flexibility and anthropometric measures , 7. ProsIt has been stated that the trunk, including lumbo\u2013pelvic\u2013hip complex, is the central point of kinetic chains of most sports activities and essential in decreasing back injuries . FurtherBasketball and floorball are sports that include running, sudden direction changes and stops. In addition, basketball players perform lots of jumping and landing . These mThe aim of this exploratory prospective study was to investigate if hip-pelvic kinematics, measured as femur-pelvic angle (FPA), and peak vGRF during landing tasks, are associated with LBP incidence in a large cohort of youth basketball and floorball players. The prospective design and consideration of the individual training and game exposure hours adds to the novelty value of this study. The hypotheses were that decreaseThis study is part of the large Finnish PROFITS study (Predictors of Lower Extremity Injuries in Team Sports) carried out between 2011 and 2015 [Ten female and male basketball and 10 floorball teams were recruited from six sports clubs in Tampere, Finland. Players older than 21 and younger than 12 at baseline were excluded. Data were collected at baseline in April or May of 2011, 2012, or 2013 as the player entered the study, and at the beginning of each study year in which the player participated. The players were followed prospectively for up to 3\u2009years. Data from all players entering the follow-up were included in the analyses for the time they participated.The baseline questionnaire covered the following demographics: age, sex, dominant leg, nicotine use, family history of musculoskeletal disorders, and training and playing history during the previous 12\u2009months.The players\u2019 history of back pain was recorded using the Standardized Nordic questionnaire of musculoskeletal symptoms (modified version for athletes) , 22. HisThe baseline tests were performed at the UKK Institute over 1\u00a0day at the beginning of every follow-up year. The test procedures are outlined in more detail in previous reports , 24\u201329 aThe SLVDJ was used to investigate hip-pelvic kinematics. In the SLVDJ the player dropped off from a 10-cm box followed by a maximal vertical jump. 
Hip-pelvic angles were estimated from a still video image by an investigator using Java-based software , and FPA, outlined in Fig.\u00a0The VDJ was used to investigate the vGRF during landing. During a valid VDJ test the player stood on the 30-cm box, dropped off the box and immediately after landing the player performed a maximal vertical jump. Absolute and weight adjusted peak vGRF and side-to-side asymmetry were investigated as potential risk factors. The same methodology has been used previously by, for example, Nilstad et al., Mok et al. and Krosshaug et al. \u201329. TheyThe primary outcomes were traumatic and non-traumatic LBP. LBP was defined as pain in the lower back area that prevented the player from taking full part in team practices and games for at least 24\u2009h. LBP that resulted from a specific identifiable event, such as falling, was referred to as acute traumatic LBP. Non-traumatic LBP had gradual onset, without an identifiable event of trauma. Acute traumatic LBP events were categorised as \u201ccontact\u201d, \u201cindirect contact\u201d, and \u201cnon-contact\u201d . A contaOnce a week one of the two study physicians contacted the teams to interview the injured players. A structured injury questionnaire (Supplementary Table\u00a0IBM SPSS Statistics (v. 23\u201324.0) and Chi-square test and the t-test (Mann-Whitney test when appropriate) were used for descriptive statistical analyses and the results were reported as the mean and standard deviation (SD). Cox\u2019s proportional hazard models with mixed-effects were used to investigate the associations between potential risk factors and LBP (yes/no). This method accounts for the sports exposure and variance in follow-up time between the players. Mixed effects were used to account for the sports club as a random effect. Time-dependent variables were used, when possible, due to the tendency of changes in investigated variables over time. The individual game and practice hours from the start of the follow-up until the first event (LBP) or the end of follow-up (if no event) were included in analyses. For players reporting more than one LBP after the baseline, only the first was included. Data from all eligible players entering the follow-up were included in the analyses for the time they participated.R packageNine teams of both sports agreed to participate Fig.\u00a0, with a n\u00a0=\u2009205) reported no history of LBP at baseline. Of the 383 players, 13% (n\u00a0=\u200948) sustained LBP during the follow up, 35% of them (n\u00a0=\u200917) had not had back pain prior to the study. Half of the players developing LBP during the follow-up were females . Fifty-four percent of floorball players and 46% of basketball players had LBP during the follow-up. Most of the players who developed back pain during the follow up did so during their first follow-up year (81%) and only one player was followed for 3\u00a0years before developing LBP. LBP incidence was addressed in a previous publication [During the follow-up, altogether 566 athlete-years were recorded. Fifty-four percent of players , but no significant differences were observed between previous study years Table .Table 4The aim of this study was to investigate whether hip-pelvic kinematics and peak vGRF during landing tasks were associated with LBP incidence in youth floorball and basketball players. 
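As an illustrative sketch of the survival analysis described in the methods above (Cox proportional hazards with cumulative exposure hours as the time scale and time-loss LBP as the event), the following example uses simulated data and the Python lifelines package; the club-level random effect of the original mixed-effects model is not reproduced here, and all variable names, values and effect sizes are assumptions, not study data.

```python
# Hedged sketch only: a standard Cox model on simulated players, approximating
# the mixed-effects Cox analysis reported in the paper (random effect omitted).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 200
fpa = rng.normal(84, 5, n)                 # right-leg femur-pelvic angle (deg), assumed
history = rng.integers(0, 2, n)            # previous LBP yes/no, assumed
# Assumed hazard: risk rises as FPA decreases (~1.08 per degree) and with LBP history.
rate = 0.002 * np.exp(-0.08 * (fpa - 84) + 0.4 * history)
time_to_lbp = rng.exponential(1 / rate)    # latent time (exposure hours) to LBP
followup = rng.uniform(100, 600, n)        # exposure hours actually accrued

df = pd.DataFrame({
    "exposure_hours": np.minimum(time_to_lbp, followup),
    "lbp": (time_to_lbp <= followup).astype(int),
    "fpa_right_deg": fpa,
    "lbp_history": history,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="exposure_hours", event_col="lbp")
cph.print_summary()   # hazard ratio per degree of FPA, adjusted for LBP history

# Illustrative arithmetic only: a hazard ratio of 1.08 per one-degree decrease in
# FPA compounds to about 1.08 ** 10 ~= 2.2 over a ten-degree contrast.
```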
The first hypothesis was that the movement pattern, where the FPA is decreased during SLVDJ landing due to increased movement of the hip in the direction of adduction and contralateral pelvis drop might predispose for LBP. The second hypothesis was that players with higher or asymmetric peak vGRF during VDJ landing are at increased risk for LBP. Contrary to our second hypothesis, we did not find a statistically significant association between LBP and peak vGRF. However, our results suggested that there is an association between hip-pelvic kinematics and LBP.The lumbo-pelvic function is an essential part of successful athletic performance . AccordiOur results showed a small increase in risk (8%) for LBP with a one-degree decrease in the right leg FPA during the SLVDJ landing. This means a 2.2-fold increase in risk in players with less than 80\u00b0 FPA during the right leg landing, compared to the players with more than 80\u00b0 FPA. However, no association was detected between the left leg FPA and the risk of LBP. The difference between the right and left leg results might be due to the test procedure where the starting leg was not randomized, that is, the test was always started with the right leg. Another explanation may be the fact that in most players the right leg was their dominant (kicking) leg and the left leg was their supporting leg. This may explain why the left side was more stable during the SLVDJ. Our results are in line with previous studies suggesting that hip-pelvic kinematics are associated with injuries in athletes , 44, 45.Our second hypothesis was that vGRFs that affect the lumbar spine could poThe strengths of this study were the prospective design and the methods of LBP and playing exposure registrations. In addition, the sample size was relatively large. The length of follow-up varied across the sample and therefore we used Cox regression analysis. Cox regression analysis can adjust for variations in the amount of sport participation (follow-up time). Yet, due to the relatively low number of LBP events, we were unable to stratify the analyses by sex. However, it seemed that sex was not significantly associated with LBP in this sample.n\u00a0=\u200917) of the LBP recorded during follow-up was first-time LBP. We compensated for this by adjusting the risk factor analyses with a history of LBP.Risk factors can change over time and therefore we used time-varying variables in the Cox analysis, when possible. In addition, over half (54.5%) of the players had a history of LBP at the beginning of the study and 35% (n\u00a0=\u2009383) were not included in the risk factor analyses. In the SLVDJ 25% of the players and in the VDJ 19% of the players had incomplete baseline test data. The absence of these players might affect the results of this study. We are also unaware whether players refusing to participate differ from our sample. Another limitation is that we did not test the reliability of the selected tests during this study. However, the reliability of vGRF measurements has been demonstrated previously by Krosshaug and Mok and their colleagues [We should not overlook the fact that up to 25% of all players participating (lleagues , 29. Herlleagues . One limThe aetiology of LBP has been shown to be multifactorial , meaningOur results suggested that there is an association between hip-pelvic kinematics and LBP, as measured in this study. However, we did not find a statistically significant association between LBP peak vGRF or side-to-side asymmetry of vGRF during VDJ landing. 
In the future, the association between hip-pelvic kinematics and LBP incidence should be investigated further to verify the results from this study.Additional file 1 Supplementary Table\u00a01. Data collected in the structured injury questionnaire.Additional file 2 Supplementary Table\u00a02. Differences between players with and without baseline test result."} +{"text": "Spring migration phenology is shifting towards earlier dates as a response to climate change in many bird species. However, the patterns of change might not be the same for all species, populations, sex and age classes. In particular, patterns of change could differ between species with different ecology. We analyzed 18 years of standardized bird capture data at a spring stopover site on the island of Ponza, Italy, to determine species-specific rates of phenological change for 30 species following the crossing of the Mediterranean Sea. The advancement of spring passage was more pronounced in species wintering in Northern Africa (i.e. short-distance migrants) and in the Sahel zone. Only males from species wintering further South in the forests of central Africa advanced their passage, with no effect on the overall peak date of passage of the species. The migration window on Ponza broadened in many species, suggesting that early migrants within a species are advancing their migration more than late migrants. These data suggest that the cues available to the birds to adjust departure might be changing at different rates depending on wintering location and habitat, or that early migrants of different species might be responding differently to changing conditions along the route. However, more data on departure time from the wintering areas are required to understand the mechanisms underlying such phenological changes. Migration phenology in birds and other animals has been shifting in recent years, along with overall climate change \u20134. This https://www.ncdc.noaa.gov/teleconnections/nao/, last accessed on July 2nd, 2020). These environmental data suggest that species wintering in the Sahel and actively using stopover sites in Northern Africa might be more affected than others in their timing of passage, which should be reflected in an earlier arrival in Southern Europe.Some of the methods used for detecting switches in phenology have been object of debate , 47. StuHere, we aimed at identifying recent changes in migration phenology of migrants that cross the Mediterranean Sea, with a particular focus on within-species comparison between early and late migrants and on differences between species with different wintering areas. To this aim, we analyzed a large dataset of captures of migratory birds on spring migration from a small Italian island, where large numbers of individuals of several species are stopping over after crossing the Mediterranean Sea . We calc2) located about 50 km off Italy , where spring bird migration has been monitored since 2002 (www.inanellamentoponza.it). Ponza attracts large numbers of African-European migratory landbirds during spring migration as it is located along one of the main Mediterranean migratory routes, with daily peaks of over 1500 individual birds ringed occurring several times during the study period. Birds were caught using mist-nets from March (or April in some years) to May (exact start and end dates are shown in http://www.vbonardi.it/) and model constant throughout the entire study period. 
The birds were ringed, aged and sexed according to the available literature Reviewers' comments:Reviewer's Responses to QuestionsComments to the Author1. Is the manuscript technically sound, and do the data support the conclusions?The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: PartlyReviewer #2: Yes**********2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: NoReviewer #2: N/A**********3. Have the authors made all data underlying the findings in their manuscript fully available?PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified.The Reviewer #1: NoReviewer #2: No**********4. Is the manuscript presented in an intelligible fashion and written in standard English?PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.Reviewer #1: YesReviewer #2: Yes**********5. Review Comments to the AuthorPlease use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #1: The study entitled \u201cRecent phenology shifts of migratory birds at a Mediterranean spring stopover site: birds wintering in the Sahel advance passage more than tropical winterers\u201d reports the description of the temporal variation in the dates of passage during the pre-nuptial migration of 18 bird species in a small island of the Mediterranean sea (Ponza). The main result is that most of the species under investigation advanced their passage time, and therefore add to the hundreds of studies reporting similar findings elsewhere in the globe. It also shows that species that migrate longer distances show a smaller temporal variation in phenology (e.g. the slope of the passage date against year) than those migrating further, as well as that males generally show steeper trends than females. The study is merely descriptive but worthy for a least two reasons: 1) it is conducted at an important passage site of which no information on migration time was available to date, and 2) because it was conducted in a rather recent period (2002-2019), and it therefore shows that birds are still shifting their phenology . I am therefore generally positive about the possibility that the paper would be accepted for publication. However, I am some issues that I would like to raise below in order to improve, I hope, the quality and the readability of the manuscript. 
I think that they should be easily addressed by the authors.1) Statistical analyses and implications of the results. The conclusions are reasonable and compatible with the wide literature about the topic. However, I am not sure that the authors could claim that their study shows difference in the phenological variation between the species wintering in Northern Africa and in the Sahel zone and those wintering further South in the forests of central Africa without properly testing for it. I therefore encourage them adopt a proper statistical test in order to check for differences among groups of birds showing different migration strategies (and/or different wintering regions). This is also the case for the supposed differences between the phenology of males and females of some species. I am aware that the number of the species included in the analyses in not very large, but I think that there is the scope for splitting the species at least into two categories. However, if the authors think that this is not possible, in my opinion they should at least justify why they cannot analyse the data and add a caveat in the Discussion by acknowledging the reader about the limitations of their results.2) Main migration period. The study is focused on the start, end, duration and peak of the so-called \u2018main migration period\u2019. The procedure used to define this period seems accurate to me, and it is not my intention to question it. However, since the vast majority of the hundreds of studies on migration phenology published to date usually focus on mean/median and/or first (as well as last) passage dates, I am wondering why the authors opted for this different procedure, which make their results difficult to be compared with the previous ones. I would suggest to at least better justify this decision and to provide some solid reasons why this procedure is better, at least for the case under study, than that used in most of literature.3) Tables. I have also a concern about how tables have been completed. Frankly, I do not agree with reporting only significant relationships in the tables. I think it is important to report all the associations, including non-significant ones. This good practice is important, for example, to limit publication bias in meta-analytic studies. I therefore strongly encourage the authors to fully fill the tables. In addition, since the main message of the paper is a link between migration strategy and phenological change, I also think that it would be very helpful for the readers to have some additional information about the species in the first part of the manuscript (other than in the Discussion where they are now reported). A possibility is to add a column in Table 1 reporting the migratory behaviour and/or the main wintering region of each species.4) Introduction. On the whole, it is concise and well written. However, I was rather surprised not to read any clear mention about the effect of migratory behaviour on variation in migration phenology, also considering that the authors explicitly referred to short- and long-distance migrants in the Discussion. I understand that the authors mentioned where different species spend the wintering period, but information might not be enough for a generalist reader. According to literature, change in phenology depending on migratory behaviour is one of the most consolidated knowledge on the effect of climate change on birds. 
I would therefore encourage the authors to add a short paragraph on this (including relevant references), linked to that one focused on the effects of NAOI and wintering areas.5) Discussion. As briefly mentioned above, I think that one the main interesting point of the study is that migration data have been collected rather recently. I think that the paper would benefit from the inclusion of an additional paragraph where the authors emphasize that the phenological shifting is still ongoing and compare their recent trends with those obtained by previous studies performed on very long time-series .6) Literature cited. References quoted in the paper are generally relevant. However, I noted that only a few very recent papers (after 2015) have been cited, despite the vast literature produced in the last years. This is especially the case for Introduction, in which some very relevant and recent papers are missing. I therefore suggest to add some of them in order to better introduce the concepts, which will be developed through the manuscript. Please, find below some missing articles which are very relevant to the purpose of the study, and deserve to be cited .- Kluen, E., Nousiainen, R., & Lehikoinen, A. (2017). Breeding phenological response to spring weather conditions in common Finnish birds: resident species respond stronger than migratory species. Journal of Avian Biology, 48(5), 611-619.- Samplonius, J. M., Barto\u0161ov\u00e1, L., Burgess, M. D., Bushuev, A. V., Eeva, T., Ivankina, E. V., ... & M\u00e4nd, R. (2018). Phenological sensitivity to climate change is higher in resident than in migrant bird populations among European cavity breeders. Global change biology, 24(8), 3780-3790.- Ambrosini, R., Romano, A., & Saino, N. (2019). Changes in migration, carry-over effects, and migratory connectivity. Effects of Climate Change on Birds, 93.- Radchuk, V., Reed, T., Teplitsky, C., Van De Pol, M., Charmantier, A., Hassall, C., ... & Avil\u00e9s, J. M. (2019). Adaptive responses of animals to climate change are most likely insufficient. Nature communications, 10(1), 1-14.- Horton, K. G., La Sorte, F. A., Sheldon, D., Lin, T. Y., Winner, K., Bernstein, G., ... & Farnsworth, A. (2020). Phenology of nocturnal avian migration has shifted at the continental scale. Nature Climate Change, 10(1), 63-68.Minor comments:LL 62-65. Maybe add some references here.L 74. \u201cnear-passerine\u201d. I think this should be defined, especially in a very generalist journal like the present one.LL 89-92 and Table 1. It is ok to include information about the total number of individuals captured per species. However, because the authors declared that they selected the species to be included in the analyses based on the yearly number of captures, I think that an important missing information is also the range of individuals captured per species per year. In addition, it is not clearly described which criterion was used to decide whether including or not a species. For example, what is the minimum number of specimens per year? In addition, did the authors exclude some years within species because they could not collect information on a minimum number of individuals? In my opinion, all this information should be added to improve the understanding of the procedures used.LL 113-115. Interesting that all the species show a significant temporal variation (advance or delay) in the phenology of migration. This is a quite unexpected result, if we compare it with the available literature. How do the authors explain this result? 
Is possible that the procedure used to identify the main migration period has somehow affected this result?L 244. Publication number XX. Maybe there is an error here.Reviewer #2: This paper aims to analyse the spring migratory phenology of 30 species of birds in a Mediterranean island, for a period of 18 years. Authors found a broadening of the time of passage overall, due to an advancement of the passage by mostly short-distance migrants.The paper is well written, it is easy to follow and the concepts and literature used are within the expected current state of the art. A have, however, some comments to do:Main comments:The data set used in the study is impressive, and this is a robustness of the paper. However, the paper is totally descriptive, and the authors \u2018only\u2019 analyse a linear effect of year on the object, dependent variables . It would be really nice (though I acknowledge that this would entail much more work) if the authors may also look for the potential effects of conditions at wintering sites or during the passage across the Mediterranean. These analyses would allow a richer discussion around the observed changes/results. This is what I would also expect in a journal like PLOS ONE.Minor comments:L94-96. Quite often, mist nets must be closed due to very adverse weather or even logistical reasons . Even though the authors say that the fieldwork was done on a daily basis, I would like to know to what extent this protocol was done with a 100% of, most likely, whether there were some gaps. In these cases, many authors think that replacing raw data by theoretical distribution regressions is better, since these last allow to: deal with gaps in the data set and smooth the potential effects of the variables affecting capture rates . Furthermore, these theoretical distributions would also allow dealing with years when the campaign started after the onset of migration, or ended before its end. Authors should discuss/defend their statistical approach and explain why not dealing with the use of theoretical distribution curves.L103. A good reason to use theoretical curves?L106-109. For me, this would be an insufficient explanation. There can be other distribution curves that might fit reasonably well to your data.L139. Please, this should be mention in Methods section.L157. Overall, the discussion might be benefited if the authors would also consider to test for the effects of conditions in the wintering/passing areas on their dependent variables.Table 1. Please, add SE to the beta parameter estimtes (slope).Table 2. Too many NAs in the start and end of migration. Why? Maybe this might be improved by using theoretical curves? If not, please limit the analyses to the peak passage parameter.**********what does this mean?). If published, this will include your full peer review and any attached files.6. PLOS authors have the option to publish the peer review history of their article digital diagnostic tool,\u00a0 8 Jul 2020Please find our answers to the reviewers' and editor's comments in the attached file.AttachmentResponse to reviewers_final_final.docxSubmitted filename: Click here for additional data file. 11 Aug 2020PONE-D-20-09340R1Recent phenological shifts of migratory birds at a Mediterranean spring stopover site: species wintering in the Sahel advance passage more than tropical winterersPLOS ONEDear Dr. Maggini,Thank you for submitting your manuscript to PLOS ONE. 
After careful consideration, we feel that it has merit but does not fully meet PLOS ONE\u2019s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.This new version was revised by the Reviewer that raised the most serius concerns on the first version. He/she was rather satisfied by this new version, but he/she also asked for some further minor changes. We think you can easily modify the manuscript to acccount for these further suggestions. Please submit your revised manuscript by Sep 25 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at Please include the following items when submitting your revised manuscript:A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocolsIf applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see:\u00a0We look forward to receiving your revised manuscript.Kind regards,Roberto Ambrosini, Ph.D.Academic EditorPLOS ONE[Note: HTML markup is below. Please do not edit.]Reviewers' comments:Reviewer's Responses to QuestionsComments to the Author1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the \u201cComments to the Author\u201d section, enter your conflict of interest statement in the \u201cConfidential to Editor\u201d section, and submit your \"Accept\" recommendation.Reviewer #1:\u00a0All comments have been addressed**********2. Is the manuscript technically sound, and do the data support the conclusions?The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1:\u00a0Yes**********3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1:\u00a0Yes**********4. 
Have the authors made all data underlying the findings in their manuscript fully available?PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified.The Reviewer #1:\u00a0Yes**********5. Is the manuscript presented in an intelligible fashion and written in standard English?PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.Reviewer #1:\u00a0Yes**********6. Review Comments to the AuthorPlease use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #1:\u00a0I am rather satisfied by the revision made by the authors because they provided all the main suggested changes to the manuscript. In my opinion, the paper has considerably improved and I therefore suggest it for publication. I have only a few additional minor comments (lines numbers refer to the version including track changes):LL 151-152. Please add references used to define wintering areas and to associate species to them.L 211.5 rather than 22\u2026a huge difference!LL 211-237. The reader is lost until the last line of this long paragraph because the table and the figure are cited only at end. I suggest to mention them before.L 237. Fig. S2, not S2 Fig.Table 2. I am not sure that P values between 0.05 and 0.10 should be mentioned in a table, even considering that exact P-values are provided in the supplementary materials.LL 383-384. There is some evidence that in many species the advance in arrival date in the breeding area is less steeper than the passage date in the Mediterranean area. This point might deserve a short discussion and some references (please see below).Both, C. (2010). Flexibility of timing of avian migration to climate change masked by environmental constraints en route. Current Biology, 20(3), 243-248.Jonz\u00e9n, N., Lind\u00b4en, A., Ergon, T., Knudsen, E., Vik, J.O., Rubolini, D., Piacentini, D., Brinch, C., Spina, F., Karlsson, L., Stervander, M., Andersson, A., Waldenstr\u00a8om, J., Lehikoinen, A., Edvardsen, E., Solvang, R. & Stenseth, N. C. (2006). Rapid advance of spring arrival dates in long-distance migratory birds. Science 312, 1959\u20131961.Bitterlin, L. R., & Van Buskirk, J. (2014). Ecological and life history correlates of changes in avian migration timing in response to climate change. Climate Research, 61(2), 109-121.LL 390-391. A very important study to be cited here is: Dunn, P. O., & M\u00f8ller, A. P. (2014). Changes in breeding phenology and population size of birds. Journal of Animal Ecology, 729-739.**********what does this mean?). If published, this will include your full peer review and any attached files.7. 
PLOS authors have the option to publish the peer review history of their article digital diagnostic tool,\u00a0 12 Aug 2020Reviewer #1: I am rather satisfied by the revision made by the authors because they provided all the main suggested changes to the manuscript. In my opinion, the paper has considerably improved and I therefore suggest it for publication. We thank the reviewer for the renewed assessment of our manuscript, and especially for her/his comments on the previous version that helped us to largely improve this new version.I have only a few additional minor comments (lines numbers refer to the version including track changes):LL 151-152. Please add references used to define wintering areas and to associate species to them.The references are given in line 95 (new line numbering)L 211.5 rather than 22\u2026a huge difference!We agree. This was due to a mistake in the previous analysis that we corrected in the new version. In the previous version, the large number of significant slopes caught the attention of the reviewers and was highly unlikely.LL 211-237. The reader is lost until the last line of this long paragraph because the table and the figure are cited only at end. I suggest to mention them before.We moved the reference to table and figure at the beginning of the paragraph.L 237. Fig. S2, not S2 Fig.This was suggested in the information for authors. We will change this if requested.Table 2. I am not sure that P values between 0.05 and 0.10 should be mentioned in a table, even considering that exact P-values are provided in the supplementary materials.We removed the highlights from slopes with p between 0.05 and 0.10 (we did the same in Table 3).LL 383-384. There is some evidence that in many species the advance in arrival date in the breeding area is less steeper than the passage date in the Mediterranean area. This point might deserve a short discussion and some references (please see below).Both, C. (2010). Flexibility of timing of avian migration to climate change masked by environmental constraints en route. Current Biology, 20(3), 243-248.Jonz\u00e9n, N., Lind\u00b4en, A., Ergon, T., Knudsen, E., Vik, J.O., Rubolini, D., Piacentini, D., Brinch, C., Spina, F., Karlsson, L., Stervander, M., Andersson, A., Waldenstr\u00a8om, J., Lehikoinen, A., Edvardsen, E., Solvang, R. & Stenseth, N. C. (2006). Rapid advance of spring arrival dates in long-distance migratory birds. Science 312, 1959\u20131961.Bitterlin, L. R., & Van Buskirk, J. (2014). Ecological and life history correlates of changes in avian migration timing in response to climate change. Climate Research, 61(2), 109-121.We added a sentence to acknowledge this (l. 302-304).LL 390-391. A very important study to be cited here is: Dunn, P. O., & M\u00f8ller, A. P. (2014). Changes in breeding phenology and population size of birds. Journal of Animal Ecology, 729-739.We added this reference.See also the attached file.AttachmentResponse to reviewers revision.docxSubmitted filename: Click here for additional data file. 8 Sep 2020Recent phenological shifts of migratory birds at a Mediterranean spring stopover site: species wintering in the Sahel advance passage more than tropical winterersPONE-D-20-09340R2Dear Dr. Maggini,We\u2019re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.Within one week, you\u2019ll receive an e-mail detailing the required amendments. 
When these have been addressed, you\u2019ll receive a formal acceptance letter and your manuscript will be scheduled for publication.http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at onepress@plos.org.If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they\u2019ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact Kind regards,Roberto Ambrosini, Ph.D.Academic EditorPLOS ONEAdditional Editor Comments :Reviewers' comments: 10 Sep 2020PONE-D-20-09340R2 Recent phenological shifts of migratory birds at a Mediterranean spring stopover site: species wintering in the Sahel advance passage more than tropical winterers Dear Dr. Maggini:I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. onepress@plos.org.If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact plosone@plos.org. If we can help with anything else, please email us at Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staffon behalf ofDr. Roberto Ambrosini Academic EditorPLOS ONE"} +{"text": "Bacillus thuringiensis (Bt) Cry toxins have transformed insect management in maize and cotton, reducing insecticide use and associated off-target effects. To mitigate the risk that pests evolve resistance to Bt crops, the US Environmental Protection Agency requires resistance management measures. The approved resistance management plan for Bt maize in cotton production regions requires a structured refuge of non-Bt maize equal to 20% of the maize planted; that for Bt cotton relies on the presence of an unstructured natural refuge comprising both non-Bt crop and non-crop hosts. We examined how abundance of Bt crops (cotton and maize) and an important non-Bt crop (soybean) component of the natural refuge affect resistance to Bt Cry1Ac toxin in local populations of Helicoverpa zea, an important lepidopteran pest impacted by Bt cotton and maize. We show refuge effectiveness is responsive to local abundances of maize and cotton and non-Bt soybean, and maize, in its role as a source of H. zea infesting cotton and non-Bt hosts, influences refuge effectiveness. These findings have important implications for commercial and regulatory decisions regarding deployment of Bt toxins targeting H. 
zea in maize, cotton, and other crops and for assumptions regarding efficacy of natural refuges.Genetically engineered crops expressing Bacillus thuringiensis (Bt) has transformed insect management, especially in cotton (Gossypium hirsutum L.) and maize (Zea mays L.), resulting in reduced insecticide use and associated off-target effects, enhanced yields, and increased farmer profits6. In 2020, Bt varieties accounted for 88 and 82% of the cotton and maize, respectively, planted in the US7. This widespread adoption has resulted in area-wide suppression of some targeted pest populations and benefits that transcend Bt crops10. Resistance management measures are mandated by the US Environmental Protection Agency [EPA] as a condition for registration of Bt crops to preserve their associated crop protection and broader environmental benefits. Nonetheless, the benefits of Bt technology in maize and cotton are threatened by the evolution of resistance in several key pest species16. Principal among these in the southern US is Helicoverpa zea (Boddie) (Lepidoptera: Noctuidae), a consistent pest of both cotton and maize, commonly referred to as bollworm or corn earworm. Field-evolved H. zea resistance to Bt crystal-forming (Cry protein) toxins in maize and cotton has resulted in increasing damage in both crops and an increase in insecticide use in cotton and sweet corn14.The widespread adoption of genetically engineered crops expressing insecticidal toxins from Bt crops relies on the presence of a refuge from selection for resistance comprising non-Bt host plants. To function effectively, the refuge should be spatially close to the Bt crop and concurrently produce enough Bt-susceptible insects to mate with Bt-resistant individuals emerging from the Bt crop, thereby minimizing the frequency of resistance alleles in the population12. This strategy is most effective when Bt-plants express the toxin at a concentration sufficient to kill individuals heterozygous for a recessive resistance allele; generally referred to as high dose17. Although Bt maize hybrids expressing a single Bt toxin continue to be planted in limited areas, Bt maize and cotton varieties expressing multiple Bt toxins (pyramids) have replaced single-toxin varieties. Insects resistant to one of the toxins in a pyramid can be killed by the other(s). Pyramids are most effective when each toxin is expressed at a high dose and there is no cross-resistance between toxins12. Concentrations of Cry toxins expressed in Bt maize and cotton are high dose for some targeted lepidopteran pests but not for H. zea, increasing the speed with which resistance is expected to develop12.Resistance management for Bt maize in cotton production regions requires planting a structured refuge of non-Bt maize equal to 20% of the total maize planted. In contrast, the refuge portion of the resistance management plan for Bt cotton relies on an unstructured natural refuge comprising non-Bt crop and non-crop host plants present in the refuge landscape18. Because the host range of H. zea encompasses many of the crops and non-crop plant species that are abundant in cotton production systems of the southeastern USA, it is assumed that the diversity and abundance non-Bt host plants in the natural refuge will produce enough Bt susceptible H. zea moths at the appropriate time to function effectively as a refuge. This assumption was critically examined by a Scientific Advisory Panel in 200619 and supported by studies that have shown H. 
zea populations developing on cotton were a relatively small proportion of the total population21. In 2018, a Scientific Advisory Panel18 recommended continued use of the natural refuge strategy for managing H. zea resistance to Bt toxins expressed in cotton.The US EPA-approved resistance management plan for 22, understanding factors influencing how the natural refuge functions for H. zea in cotton production systems is critical to inform development of resistance management strategies and regulatory policies relating to their implementation. Herein, we examine effects of varying abundance of two Bt crops (cotton and maize) and non-Bt soybean (Glycine max L.) within local landscapes in commercial field crop production systems on effectiveness of the natural refuge in suppressing resistance in H. zea populations to a Bt toxin (Cry1Ac). Soybean is an important non-Bt crop host of H. zea that varies greatly in abundance among locations.Because the speed of resistance evolution is inversely related to the amount of refugeHelicoverpa zea is a highly polyphagous, multivoltine pest that has numerous crop and non-crop hosts24. In the southeastern US, H. zea can complete at least four generations per year, with each generation potentially feeding on different crops at various phenological stages25. Adult moths that developed on maize ears disperse to infest other suitable crop and non-crop host plants. Included among these are bloom-stage cotton, soybean, peanut, and sorghum26. The importance of these crops as hosts following dispersal from maize varies greatly across the cotton production region, with soybean being particularly important in North Carolina26. Selection for resistance to Bt toxins occurs almost exclusively on Bt maize and cotton. However, maize also serves as an important source of H. zea that subsequently infest cotton, soybean and other non-Bt host plants comprising the natural refuge26. In cotton production areas, the non-Bt structured refuge required for Bt maize21 is a potentially important source of susceptible H. zea that subsequently infest cotton and the natural refuge for Bt cotton. However, compliance with the structured refuge requirement for maize by growers has been problematic27.Bt toxins, is currently expressed in all Bt cotton varieties; hence, populations developing on cotton are selected for resistance to Cry1Ac. Cry1Ac is not expressed in maize but the closely related Cry1Ab and Cry1A.105 toxins, also active against H. zea, are found in combination with other Cry toxins in commonly grown maize varieties. Cross resistance between Cry1Ac and Cry1Ab has been documented, so it is expected that indirect selection for resistance to Cry1Ac occurs in maize that expresses Cry1Ab29. Genetically modified soybean expressing Bt toxins is not registered for use in the US.Cry1Ac, in combination with one or more other H. zea larval populations that complete development in cotton and soybean subsequently overwinter as pupae in the soil. Previous research documented the importance of soybean as a late-season host for H. zea in North Carolina26. Based on this biology, we tested the assumption that underlies the natural refuge strategy for cotton; namely that the abundance and diversity of non-Bt crop and non-crop host plants in the local landscape are sufficient in practice to ensure the presence of a functional natural refuge. Specifically, we hypothesized that effectiveness of the natural refuge in suppressing Cry1Ac resistance in H. 
zea is dependent on the relative abundances of cotton and soybean in the local landscape. Because maize acts as a selection site for Cry-toxin resistance and a source of selected and non-selected populations infesting both cotton and the natural refuge, we hypothesized that effects of the relative abundances of cotton and soybean on resistance of local H. zea populations are also dependent on the relative abundance of maize in the local landscape. To investigate these hypotheses, we measured survival of larval offspring of H. zea collected from non-Bt maize at 59 field locations in North and South Carolina following exposure to a diagnostic concentration of the Cry1Ac toxin. Because resistance levels of H. zea to Cry1Ac have been shown previously to vary greatly among local populations of H. zea14, we expected that selection occurring locally and in the most recent past would have a strong influence on larval survival in the bioassay. We examined the relationships between the abundances of maize, cotton, and soybean within a 1-km radius of each collection site during the preceding year and effectiveness of the natural refuge as measured by variation in larval survival in the bioassay. Larval survival was fit to a binomial distribution with random effects intercepts for sample year using a generalized linear mixed model. Independent variables included proportional areas of each crop and their respective two-way interactions. During 2017 and 2018, larval offspring of insects collected from non-Bt maize were subjected to a diet-overlay bioassay containing a diagnostic concentration of Cry1Ac (29 µg/cm2) corresponding to the mean LC95 of four Cry1Ac-susceptible H. zea populations. Overall, larval survivorship varied significantly among years and among locations, revealing high levels of spatial variation in resistance of local populations to the Cry1Ac toxin. Selection for resistance to Bt toxins in H. zea occurs almost exclusively in maize and cotton, and selection is reduced at higher levels of relative abundance of non-Bt hosts in the landscape. The relationships are complex, reflecting how differences in the relative abundance of each of these crops affect H. zea populations and their associated inter-crop source-sink dynamics. These dynamics, in turn, influence the intensity of selection on the local populations. In our bioassay, higher survival indicates higher resistance to Cry1Ac. The main effects of maize and cotton on H. zea survival in the bioassay are highly significant, but neither the main effect of soybean nor the cotton * soybean interaction effect is significant. Importantly, the cotton * maize and the maize * soybean interaction effects are both highly significant. Proportional areas of the three crops around the collection sites showed considerable variation in abundances among years as well as among locations. In 2017 and 2018, larvae were collected from plots that were tested to verify they were non-Bt maize. The H. zea rearing diet was modified by adding casein to achieve a protein:carbohydrate ratio of 1.6:1 and supplemented with agar, anti-microbials and cellulose33. Pupae of each population were surface sterilized in a 1.3% bleach solution and placed in 1.8 L containers at a 3:1 female to male ratio (maximum 28 pupae per container). Containers were covered with cheesecloth to provide a substrate for oviposition and maintained at 25 °C, 50% RH, and natural photoperiod. Upon eclosion, moths were provided with 10% sucrose solution ad libitum. Eggs collected from the cheesecloth were transferred to 0.5-L containers where they hatched. 
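Before continuing with the bioassay details, here is a minimal sketch of the survival analysis described above (a binomial model of bioassay survival with crop proportions and their two-way interactions). It is an illustrative Python approximation, not the authors' SAS GLIMMIX or R code: year is entered as a fixed covariate instead of a random intercept, Abbott's correction for control mortality (described in the methods below) is applied beforehand, and the file and column names (bioassay_sites.csv, n_alive, n_treated, control_alive, n_control, p_maize, p_cotton, p_soybean, year) are hypothetical placeholders.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

d = pd.read_csv("bioassay_sites.csv")  # one row per collection site and year

# Abbott's correction: corrected survival = treated survival / control survival
d["surv_corr"] = (d["n_alive"] / d["n_treated"]) / (d["control_alive"] / d["n_control"])
d["alive_corr"] = (d["surv_corr"].clip(upper=1.0) * d["n_treated"]).round()
d["dead_corr"] = d["n_treated"] - d["alive_corr"]

# Binomial GLM (logit link) with the crop-proportion main effects and the
# two-way interactions named in the text; year is a fixed effect here, a
# simplification of the random-intercept GLMM used in the study.
model = smf.glm(
    "alive_corr + dead_corr ~ p_maize * p_cotton + p_maize * p_soybean"
    " + p_cotton * p_soybean + C(year)",
    data=d,
    family=sm.families.Binomial(),
).fit()
print(model.summary())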
Neonates less than 24 h old from each collection were used in a diet-based diagnostic-dose bioassay.Fifty to 120 larvae were collected from ears randomly selected from plants at least 5 m from the ends of the middle rows of each plot. Larvae were immediately placed on artificial diet in 30 mL plastic cups sealed with cardboard lids and held in coolers during transport to the laboratory where they were reared to pupation. The commercial 2, corresponding to the mean LC95 of four Cry1Ac susceptible H. zea colonies. Three colonies were started from field collections from Mississippi, North Carolina, and Louisiana in 2017, 2016, and 2016, respectively, and subsequently maintained in the laboratory at NC State University. The fourth was a laboratory colony obtained from Benzon Research, Inc. . In multiple bioassays of the Louisiana colony, this concentration resulted in a mean survivorship of 7% 34. Diet overlaid with 100 \u00b5L of aqueous Triton X-100 buffer in each of the remaining 16 cells per tray served as a control. Once the Cry1Ac solution dried, one neonate per well was added using a fine-tipped brush. The trays were then covered with a self-adhesive plate. Because colony size varied, not all assays had 112 larvae in the treated wells, but all had 16 larvae in the control wells. In 2017, bioassays were incubated in a growth chamber . In 2018, they were incubated at 25\u00a0\u00b0C and 50% RH to avoid condensation associated with a manufacturing change in cover plates. Bioassays were held for 7 days, after which mortality was assessed. Larvae that did not move after prodding with a brush were scored as dead. Proportion survival was corrected for control mortality using Abbott\u2019s method35. Although our mortality measure did not include \u201cfunctionally dead\u201d larvae, we believe it provides a meaningful measure of variation in sensitivity to Cry1Ac among locations included in the study. We base this on the consistency of mortality among our reference populations and the large variation in survivorship we observed among the field collected populations in our study . This belief is further strengthened by our finding that variation in bioassay response is related to variation in relative abundances of maize, cotton, and soybean in a biologically meaningful way.Bioassays were prepared by adding 0.75 mL of diet to each well of a 128-well plastic tray, which was then covered and refrigerated until used. Diet trays were warmed to room temperature prior to overlaying the diet in each of 112 wells per tray. A 40 \u00b5L aliquot of a solution of Cry1Ac protein dissolved in Triton X-100 (0.1%) buffer was applied to the diet surface to produce a Cry1Ac dose of 29 \u00b5g/cmBt maize plots to avoid confounding effects of resistance selection during the current year on survival. This allowed us to test for effects of maize, cotton and soybean abundance on resistance selection occurring during the prior year. Because no estimates of actual acreages of Bt maize and Bt cotton in landscapes surrounding each of the sample locations were available, our analysis was based on acreages of total maize, cotton, and soybean. Nationally, Bt varieties accounted for 88 and 82% of the cotton and maize, respectively, planted in the US during 20207. Landscape composition surrounding each sample site was determined using remotely sensed data from the USDA National Agricultural Statistics Service-Cropland Data Layer (CDL)36. Based on findings that a majority of marked H. 
zea moths emerging from maize fields were captured within 0.8 km of the source field37, we assumed that selection for resistance occurred locally on maize and cotton, and the abundance of these two crops would have the strongest effect on resistance levels observed in our samples. Therefore, we calculated proportional areas of maize, cotton, and soybean during the prior growing season within a 1-km buffer surrounding each collection location using ArcGIS .Larvae were collected from non-38. Proportional areas of cotton and soybean, and of cotton and maize, were not related. The proportional areas of maize and soybean were weakly but positively related; the regression accounted for only 7.2% of total variation . This model was selected over others that included the 3-way interaction because it had the lowest AIC value and allowed us to generate confidence intervals for predicted probability of survival. A Moran\u2019s I test was conducted in R version 3.6.039 to test for spatial autocorrelation among sample sites40. Using survival in bioassays as the predictor, there was no evidence of autocorrelation among sample sites (P = 0.29), indicating that bioassay survival among sample sites was independent.To examine the relationships between the abundances of maize, cotton and soybean in the local landscape and effectiveness of the natural refuge as measured by variation in larval survival in the bioassay, we used a generalized linear mixed model with a binomial distribution and a logit link function in the GLIMMIX procedure of SAS version 9.4Supplementary Information."} +{"text": "Helicoverpa zea (Boddie) (Lepidoptera: Noctuidae), is an important agricultural pest in U.S. cotton and is managed using transgenic hybrids that produce insecticidal proteins from the bacterium, Bacillus thuringiensis (Bt). The reduced efficacy against H. zea caterpillars of Bt plants expressing Cry toxins is increasing in the field. In a first step towards understanding Bt cotton\u2013bollworm\u2013microbiota interactions, we investigated the internal bacterial microbiota of second\u2013third stadium H. zea collected in the field from non-Bt versus Bt (WideStrike) cotton in close proximity . The bacterial populations were analyzed using culture-dependent and -independent molecular approaches. We found that WideStrike samples had a higher bacterial density and diversity per larva than insects collected from non-Bt cotton over two field seasons: 8.42 \u00b1 0.23 and 5.36 \u00b1 0.75 (log10 colony forming units per insect) for WideStrike compared to 6.82 \u00b1 0.20 and 4.30 \u00b1 0.56 for non-Bt cotton for seasons 1 and 2, respectively. Fifteen phyla, 103 families, and 229 genera were identified after performing Illumina sequencing of the 16S rRNA. At the family level, Enterobacteriaceae and Enterococcaceae were the most abundant taxa. The Enterococcaceae family was comprised mostly of Enterococcus species (E. casseliflavus and another Enterococcus sp.). Members of the Enterococcus genus can acidify their environment and can potentially reduce the alkaline activation of some Bt toxins. These findings argue for more research to better understand the role of cotton\u2013bollworm\u2013bacteria interactions and the impact on Bt toxin caterpillar susceptibility.The bollworm, Gossypium hirsutum L.) 
is a fiber, feed, and food crop of global significance. Transgenic cotton varieties express insecticidal toxins from the bacterium Bacillus thuringiensis (Bt). Caterpillars of the pink bollworm, Pectinophora gossypiella (Saunders) (Lepidoptera: Gelechiidae), and the American cotton bollworm, Helicoverpa zea (Boddie) (Lepidoptera: Noctuidae), are important cotton pests in the USA. Bt cotton has been in use for over two decades and has provided significant benefits, e.g., reducing the need for chemical insecticides. For example, the number of insecticide applications was reduced by at least 50% compared to non-transgenic cotton in Arizona. Field-evolved resistance of H. zea to Cry toxins in Bt cotton was documented recently by Reisig et al., and pest resistance to the Cry toxins is now well established [12]. In Lepidoptera in general, and in H. zea in particular, Cry resistance has been attributed to several factors. Caccia et al. detected differences in resistant H. zea larvae compared to a susceptible strain (LC); they proposed this as the mechanism of resistance to Cry1Ac because of reduced toxin binding and increased degradation by proteolysis. Zhang et al. found no such differences between H. zea resistant and susceptible strains but instead came to the conclusion that a decrease in Cry1Ac activation by midgut proteases partially contributed to Cry1Ac resistance in their GA (a field-selected population) and GA-R (derived from GA and further selected in the laboratory for increased resistance) strains. Lawrie et al. also suggested that multiple mechanisms operate in H. zea: their RNAseq analyses revealed several additional differences associated with resistance and included the already recognized mechanisms of Bt resistance in the same bollworm population. Another possible contributing factor to tolerance or resistance to Bt could be gut symbionts. In fact, there is evidence that bacteria can affect food digestion and host nutrition, provide protection from pathogens and parasitoids, and degrade toxic compounds [20,21,22]. For example, Burkholderia confers insecticide resistance in the bean bug, Riptortus pedestris (Fabricius) (Hemiptera: Alydidae), to fenitrothion, a widely used organophosphate. Cheng et al. showed that a Citrobacter sp. (CF-BD), isolated from the oriental fruit fly, Bactrocera dorsalis, conferred resistance to trichlorphon by increased degradation. When it comes to the relationship between Bt and the insect gut microbiota, the majority of studies have focused on the mode of action and whether the septicemia caused by enteric bacteria is mandatory [29] or not [31] for Bt insecticidal activity. Little is known, however, about the microbial communities associated with H. zea in cotton fields. Wang et al. studied bacteria associated with H. zea larvae, but the insects were collected from sweet corn. Additionally, they used culture-dependent techniques and focused only on bacteria found in fresh oral secretions. The aim of the current study was to determine the bacterial community composition of cotton bollworms from insects (2nd to 3rd stadium) collected from non-Bt versus Bt (WideStrike) cotton grown in the same field in North Carolina. Cultivable bacteria were enumerated, and the total H. zea-associated bacterial community structure was analyzed using culture-independent DGGE and next-generation sequencing of 16S rRNA gene amplicons. Helicoverpa zea larvae were collected in August 2016 from non-Bt (PHY425RF) and Bt cotton fields located at the Upper Coastal Plain Research Station, Rocky Mount, NC, USA. 
The locations of the collections are shown in the accompanying figure. The number of cultivable bacteria per caterpillar from non-Bt and Bt cotton was estimated using a Tryptic Soy Agar (TSA) medium (Becton, Dickinson and Company, Sparks, MD, USA), commonly used to assess total cultivable heterotrophic bacterial growth. Fourteen larvae (7 per treatment) were separately surface sterilized with 95% ethanol (30 s) followed by 1% bleach (30 s) and finally washed 5 times with sterile water. The final washes were pooled together per treatment for subsequent verification of sterility through bacterial culture. The larvae were transferred to sterile 2 mL microcentrifuge tubes, each containing 10 sterilized 3 mm solid glass beads, and were homogenized in 300 µL of sterile phosphate-buffered saline in the FastPrep™ FP120 system for 45 s. To enumerate total cultivable bacteria, homogenate from each sample was serially diluted up to 10−7 and then drop plated (25 µL per dilution) on the surface of the medium using the drop plating method. Counts from the two treatments were compared with a Mann–Whitney U-test using the R statistical software. Denaturing gradient gel electrophoresis (DGGE) was performed to investigate the bacterial community diversity of the samples collected during season 1. Total DNA was extracted from three samples per treatment using the method previously described by Ponnusamy et al. Since differences were found in season 1 both in cultivable bacteria per insect and bacterial diversity between treatments (see Results and Discussion for more detail), this justified repeating the research in season 2 and using a higher-resolution method to determine bacterial diversity. H. zea were collected in August 2018 from non-Bt and Bt cotton located at the Upper Coastal Plain Research Station, Rocky Mount, NC, USA, from adjacent field plots. Larvae were separately surface sterilized the same day they were transferred to the laboratory and homogenized in sterile PBS as described in season 1. Homogenates were then used subsequently for quantification of cultivable bacteria and DNA extraction. For cultivable bacteria, homogenates from the different samples were drop plated on the surface of Petri dishes (as described earlier) containing Plate Count Agar. DNA was extracted from two hundred microliters of each larval homogenate per the manufacturer's protocol and eluted in 60 µL of elution buffer (AE buffer). The DNA samples were further purified using the Wizard DNA cleanup system. DNA quality and quantity were assessed using a NanoDrop 1000 spectrophotometer. The genomic DNA samples were normalized to 50–100 ng/µL and stored at −40 °C until further use. Bacterial 16S rRNA gene fragments were amplified using universal V3–V4 hypervariable region-specific primers. Quality filtering and analyses of the 16S rRNA gene sequence data were performed as described below, and cultivable counts were again compared between treatments with a Mann–Whitney U-test using the R statistical software. Statistical analyses for alpha and beta diversities were conducted in QIIME2. The statistical significance of alpha diversity between groups (Bt and non-Bt) was inferred using pairwise Kruskal–Wallis H-tests. To estimate β-diversity between the two groups (Bt and non-Bt), we used the weighted UniFrac distance metric, and group differences were assessed with a PERMANOVA test. In the first season, the estimated total number of cultivable bacteria in the bollworms varied between 6 and 9 log10 CFUs/insect, with higher values for larvae collected from Bt (WideStrike) than from non-Bt cotton. In the second season, the bacterial count in the larvae varied between 2.78 and 8.1 log10 CFUs/insect. 
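As a worked illustration of the colony-count arithmetic and the between-treatment comparison described above, here is a minimal Python sketch; it is not the authors' script, and the file name drop_plate_counts.csv and the columns colonies, dilution and treatment are hypothetical placeholders (the 25 µL drop volume and 300 µL homogenate volume follow the methods above).

import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu

plates = pd.read_csv("drop_plate_counts.csv")  # one row per countable drop
# 'dilution' holds the dilution factor plated, e.g. 1e-3 for a 10^-3 dilution.

DROP_VOL_ML = 0.025        # 25 µL plated per dilution
HOMOGENATE_VOL_ML = 0.30   # 300 µL PBS homogenate per larva

# CFU per insect = colonies / (plated volume x dilution factor) x homogenate volume
plates["cfu_per_insect"] = (
    plates["colonies"] / (DROP_VOL_ML * plates["dilution"]) * HOMOGENATE_VOL_ML
)
plates["log10_cfu"] = np.log10(plates["cfu_per_insect"])

bt = plates.loc[plates["treatment"] == "WideStrike", "log10_cfu"]
non_bt = plates.loc[plates["treatment"] == "non-Bt", "log10_cfu"]
u_stat, p_val = mannwhitneyu(bt, non_bt, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_val:.3f}")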
Bollworms collected from Bt (WideStrike) cotton yielded higher counts than those from non-Bt cotton, with mean values of 8.42 ± 0.23 versus 6.82 ± 0.20 log10 CFUs/insect in season 1. DGGE was used to compare the bacterial community profiles of the bollworms collected from Bt versus non-Bt cotton in season 1; each band shown on the gel image represents a distinct bacterial operational taxonomic unit (OTU). After quality filtering, low-sequence samples were removed from the data and not processed during the subsequent analyses. These samples were defined as samples with <5000 sequence reads. Thus, a total of 1,667,663 16S rRNA gene sequences were obtained from 18 16S rRNA gene libraries (6 for non-Bt and 12 for Bt) after quality filtering using DADA2, with an average of 92,648 reads. The reads obtained after quality filtering were clustered into 2301 sequence variants and assigned to 15 phyla, 103 families, and 229 genera. H. zea larvae collected from non-Bt cotton harbored more ASVs (OTUs) compared to Bt (WideStrike) samples (p = 0.05), whereas Shannon diversity did not differ significantly between the treatments (p = 0.07). The shape of the rarefaction curves based on the observed OTUs (ASVs) suggests that the sequencing depth of 5000 used in this study allowed us to capture the majority of taxa present in the bollworm samples. At the family level, Enterobacteriaceae and Enterococcaceae were the most abundant taxa, and the overall difference in Enterococcaceae abundance between treatments was not statistically significant (Mann–Whitney U-test, p = 0.553). In fact, 30% of the reads in the WideStrike (Bt) samples belonged to Enterococcaceae compared to 15% of the reads in non-Bt cotton. The most abundant genera detected were Enterococcus, Klebsiella (with three species), Enterobacter, and Erwinia. Enterococcus casseliflavus (20% in Bt versus 15% in non-Bt) and a second non-identified Enterococcus species (9.5% in Bt versus 0.58% in non-Bt) were more abundant in WideStrike samples. Klebsiella oxytoca (the most abundant Klebsiella species identified) was also highly present in WideStrike samples (12% of the reads vs. 2% for non-Bt cotton). Enterobacter was the most abundant genus detected in our samples and comprised 19% of the reads in non-Bt cotton and 28% of the reads in WideStrike (Bt) samples. The dominant taxa at the phylum level were Proteobacteria (67.80%) and Firmicutes (27.26%). In this study, we investigated the density and diversity of bacterial communities associated with the internal body of second and third stadium larvae of the American cotton bollworm, H. zea, as impacted by the cotton variety. Significantly higher numbers of bacteria were cultured from bollworms collected from Bt (WideStrike) compared to those from non-Bt cotton during season 1. It is well-known that diet is an important factor in shaping an insect's microbiome. In the first year, we used DGGE to quantify the number of DNA bands and band relative intensities to estimate overall OTU diversity. In the second year, we used high-throughput 16S rRNA gene amplicon sequencing, which has a much greater resolution in measuring bacterial diversity than DGGE. The sequencing results showed that samples from non-Bt cotton harbored a significantly higher number of OTUs, but the Shannon diversity analysis revealed no significant difference between the two treatments. The principal coordinates analysis (PCoA) plot revealed no distinct clustering of WideStrike and non-Bt cotton collected caterpillars, confirmed by the PERMANOVA test, which was not significant. Even though these observations could, at first, suggest that the bacterial communities present in the two treatments are the same, the presence of several taxa that are selectively more abundant in one treatment over the other is worth noticing and examining. 
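The alpha-diversity comparison described above was run in QIIME2; as a standalone illustration of the same kind of calculation, here is a minimal Python sketch computing per-sample Shannon diversity from an ASV count table and comparing the two groups with a Kruskal–Wallis test. It is not the study's workflow, and the file names asv_counts.csv and metadata.csv and the treatment labels are hypothetical placeholders (the two tables are assumed to share sample identifiers as their index).

import numpy as np
import pandas as pd
from scipy.stats import kruskal

table = pd.read_csv("asv_counts.csv", index_col=0)   # rows = samples, cols = ASVs
groups = pd.read_csv("metadata.csv", index_col=0)["treatment"]  # 'Bt' / 'non-Bt'

def shannon(counts):
    # Shannon index H' = -sum(p * ln p) over nonzero ASV proportions.
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

alpha = table.apply(shannon, axis=1)
h_stat, p_value = kruskal(alpha[groups == "Bt"], alpha[groups == "non-Bt"])
print(f"Shannon H': Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3f}")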
This is discussed in more detail later. Overall, Proteobacteria and Firmicutes were the predominant phyla in our samples. Proteobacteria accounted for about 68% of the reads. This result is not surprising and is in agreement with findings in other insects, including Lepidoptera [53,54]. Klebsiella oxytoca and Enterobacter spp., highly present in our samples, are diazotrophic bacteria and are known to help insects fix and use nitrogen. In addition, Enterobacter spp. have been shown to promote herbivory on chemically defended plants, likely because of their ability to detoxify plant xenobiotics. Whether they play such roles in H. zea is yet to be demonstrated. The most striking observation in our data is the differential abundance of Enterococcaceae between Bt (WideStrike) and non-Bt cotton. Thirty percent (30%) of the reads in WideStrike belonged to this family, essentially composed of two Enterococci (Enterococcus casseliflavus and another Enterococcus sp.). Enterococci are commonly found in the gut of larval Lepidoptera, and E. casseliflavus has been isolated from other phytophagous lepidopterans such as Manduca sexta, Spodoptera litura, Hyles euphorbiae, and Brithys crini. The gut communities of H. euphorbiae and B. crini were dominated by E. casseliflavus, suggesting that this bacterium is involved in the tolerance developed by these insects to the toxic compounds found in their host plants. It has also been found to be associated with Spodoptera litura feeding on lima beans, which are rich in toxic terpenes. Enterococcus sp. were found to enhance resistance in the diamondback moth, Plutella xylostella. Some members of this genus (e.g., E. faecalis) are able to acidify their local environment [59]. Acidification of H. zea larval guts might protect the insect against the insecticidal activity of Bt toxins, which require an alkaline environment to be processed. The presence of these bacteria might result in Bt toxin tolerance or resistance. Moreover, some studies have previously suggested that gut bacteria have the potential to inhibit Bt growth (not applicable in our studies) and degrade Bt toxins [61]. At this stage, it remains to be determined whether the Enterococci highly present in the WideStrike samples in this study play a role in changing the caterpillar's susceptibility to Bt. Additional studies are needed to clarify the role of H. zea gut bacteria in Bt susceptibility, especially given the fact that high levels of resistance to Bt have been found in cotton fields in North Carolina where the insects were collected. In summary, we characterized for the first time the microbiota associated with field-collected (Bt and non-Bt cotton) second and third stadium cotton bollworms, H. zea, using culture techniques, DGGE and Illumina MiSeq next-generation sequencing (NGS). We found higher levels of internal cultivable bacteria in bollworms from WideStrike in seasons 1 and 2, but the differences were only statistically significant for season 1. The results also showed that a few taxa dominated the microbiota of the caterpillars at each taxonomical classification level. The most intriguing result was the presence, in high abundances, of Enterococcaceae, especially in Bt (WideStrike) samples. Enterococcus spp. have been shown to enhance resistance to conventional insecticides, and some members of this genus can acidify their environment, which could increase tolerance towards Bt by decreasing its activation. Therefore, studies with larger sample sizes and with varying collection sites should be conducted to characterize the role that these bacteria may play in H. 
zea larvae feeding on Bt cotton.In summary, we characterized for the first time the microbiota associated with field-collected (Bt and non-Bt cotton) second and third stadium cotton bollworms,"} +{"text": "Colorectal cancer is an increasingly prevalent disease that accounts for substantial mortality and morbidity and is responsible for an impaired quality of life. This scenario highlights the urgent need to better understand the biological mechanisms underlying colorectal cancer onset, progression and spread to improve diagnosis and establish tailored therapeutic strategies. Therefore, understanding tumor microenvironment dynamics could be crucial, since it is where the tumorigenic process begins and evolves under the heavy influence of the complex crosstalk between all elements: the cellular component , the non-cellular component (extracellular matrix) and the interstitial fluids. Bioengineered models that can accurately mimic the tumor microenvironment are the golden key to comprehending disease biology. Therefore, the focus of this review addresses the advanced 3D-based models of the decellularized extracellular matrix as high-throughput strategies in colorectal cancer research that potentially fill some of the gaps between in vitro two-dimensional and in vivo models.More than a physical structure providing support to tissues, the extracellular matrix (ECM) is a complex and dynamic network of macromolecules that modulates the behavior of both cancer cells and associated stromal cells of the tumor microenvironment (TME). Over the last few years, several efforts have been made to develop new models that accurately mimic the interconnections within the TME and specifically the biomechanical and biomolecular complexity of the tumor ECM. Particularly in colorectal cancer, the ECM is highly remodeled and disorganized and constitutes a key component that affects cancer hallmarks, such as cell differentiation, proliferation, angiogenesis, invasion and metastasis. Therefore, several scaffolds produced from natural and/or synthetic polymers and ceramics have been used in 3D biomimetic strategies for colorectal cancer research. Nevertheless, decellularized ECM from colorectal tumors is a unique model that offers the maintenance of native ECM architecture and molecular composition. This review will focus on innovative and advanced 3D-based models of decellularized ECM as high-throughput strategies in colorectal cancer research that potentially fill some of the gaps between in vitro 2D and in vivo models. Our aim is to highlight the need for strategies that accurately mimic the TME for precision medicine and for studying the pathophysiology of the disease. Colorectal cancer (CRC) is an increasingly prevalent disease that accounts for substantial mortality and morbidity and is responsible for an impaired quality of life and high financial resource consumption . DespiteOver the last few years, the ECM has become a hot topic of research since this complex network of macromolecules is much more than a physical and stable structure providing support to tissues. 
The ECM is an extremely dynamic component of the TME that modConsidering the relevant role of this cellular\u2013acellular communication, several efforts have been made to develop new CRC models that accurately mimic the interconnections within the TME to understand the disease ,21,22,23Therefore, the focus of this review is to summarize the innovative and advanced 3D-based models of CRC, with a special highlight on the decellularization-based models, which offer the intrinsic native properties of the ECM to accurately resemble and reconstruct the TME to study CRC biology and drug discovery.CRC is the most frequently diagnosed gastrointestinal neoplasia, affecting the colon and rectum . It is rCRC is a highly complex and heterogeneous disease from both histopathologic and molecular standpoints . DiseaseCRC complexity and evolution are not solely dependent on an accumulation of genetic modifications in malignant cells. Currently, it is widely accepted that the microenvironment plays fundamental roles, not only in tumorigenesis, but also in controlling progression and dictating CRC prognosis . In factDue to the high heterogeneity of these types of tumors, a new classification system was recently proposed based on consensus molecular subtypes (CMS) that reflect significant biological differences: CMS1 (MSI Immune), CMS2 , CMS3 (Metabolic) and CMS4 . AcknowlThese features highlight the complexity of the disease and have to be considered in strategies aiming to model and study CRC.Cells composing the TME are within an elaborate and active network of ECM proteins, which provides a scaffold structure in which cells communicate and proliferate . This ECIn general, ECM proteins are common to both normal and malignant tissues. However, while these proteins are homogeneously distributed in normal tissues, they present an extremely irregular and heterogeneous distribution in tumors . Among oType 1 collagen overexpression in tumor tissues has been implicated in the promotion of tumor growth, epithelial to mesenchymal transition (EMT), distant metastasis and increased stemness properties of CRC cells, through integrin \u03b12\u03b21 and the activation of PI3K/AKT/Snail and WNT/PCP signaling pathways ,55,56. FLaminin also displays an altered expression in tumors and is iHyaluronic acid, a non-sulfated glycosaminoglycan (GAG), is highly represented in tumors and constitutes an important component in promoting tumorigenesis, cell proliferation, migration and blocking apoptosis, namely by binding to CD44 and TLR4 ,63.+/CD44+ colon cancer stem cells (CSCs) require EDA-FN binding to integrin \u03b19\u03b21 for sphere formation, and tumorigenic capacity by triggering the FAK/ERK/\u03b2-catenin signaling pathway [Fibronectin is another vital component of the ECM that is upregulated in CRC and promotes cell proliferation through the NF-kB/p53 signaling pathway . The ext pathway .A proteomic analysis of colorectal normal and tumor ECM revealed that collagen IV, V and XIV, fibrilin, emilin, vitronectin, laminin and endomucin have increased expression in tumor ECM, and that periostin, versican, thrombospondin-2 and tenascin were exclusively present in tumor tissue. Interestingly, when compared with available clinical gene expression array data, these signatures correlated with tumor progression and metastasis . For exaThe thrombospondin (THBS) family has been associated with the regulation of angiogenesis and cancer progression by controlling multiple physiological processes . 
THBS-1 Each malignant tumor exhibits a specific proteoglycan molecular signature, which is closely associated with tumor differentiation and biological behavior . In the Altogether, these studies highlight the influence of ECM composition in cellular modulation and disease progression. However, not only the biochemical composition of the ECM but also its biomechanical properties must be considered in terms of TME dynamics.Besides alterations in composition, the ECM also suffers a structural rearrangement in the TME, with the alignment of fibers that ultimately contribute to an anisotropic configuration ,81. WhilCell mechano-sensing translates biophysical forces into cellular responses, impacting several biological pathways, mechanisms and cell behavior. Solid tumors are constantly affected by mechanical stimuli, such as compression, matrix stiffness and fluid mechanics ,84. TissLysyl oxidase (LOX) plays a vital role in this context by crosslinking collagen fibers, enhancing matrix compressing and stiffening in CRC, which then activates cell migration pathways . SpecifiThe protein cross-linking enzyme transglutaminase-2 (TG2) also has a role in modulating biomechanical properties through the formation of cross-links between glutamine and lysine sidechains of target proteins that are resistant to proteolytic degradation, exhibiting important pathophysiological functions . In cancMechanical strains produced by external forces can initiate the expression of tumor-associated genes in CRC preneoplastic tissues . RemarkaMatrix stiffness can also regulate the metastasis of CRC cells by SFK and MLCK through receptor-type tyrosine-protein phosphatase alpha (RPTP\u03b1) that senses mechanical stimulation . HCT-8 cNotably, contrary to normal ECM from the same patient, tumor ECM was able to polarize human macrophages into an M2-like phenotype, anti-inflammatory, and pro-tumor, with the expression of specific markers and the secretion of anti-inflammatory cytokines, such as CCL18 . These mRecently, the influence of matrix stiffness in treatment resistance has also gained attention. During chemotherapy, the dense and heterogenous structure of ECM in solid tumors are critical determinants for blood perfusion and interstitial transportation of the drug . The expMatrix stiffness also plays a role in radiation resistance. Ionizing radiation upregulates \u03b21 integrins activating its downstream signals and increases the adhesion of CRC cells to collagen and fibronectin, contributing to the survival of cancer cells after treatment . These eStiffening of the ECM progressively increases from early to later stages of CRC , emphasiCRC 3D models have been recently reviewed, including hydrogels, patient-derived scaffolds, spheroids, organoids, microfluidic devices, and tumor and organ on-chip devices ,118,119.Biomaterial-based 3D organotypic models can be subdivided into scaffold-free or scaffold-based systems . ScaffolThe most common 3D organotypic models for drug delivery and in situ tissue engineering are based on hydrogel-based scaffolds, frequently using natural polymers, such as collagen and alginate. These platforms are the most well-studied due to their low cost, low immunogenicity, versatility, biocompatibility and similarity to natural ECM . These sThree-dimensional bioprinting technology has gained increased relevance by allowing the standardization of the scaffold model between experiments . 
This teEven though hydrogel-based scaffolds and 3D bioprinting organotypic models are interesting tools to recreate the dynamics between major key players of CRC TME, they fail to incorporate fluidic dynamics between these components . ScaffolAltogether, the previously mentioned scaffold-based 3D systems represent suitable options for studying CRC TME interactions. Still, the exact native composition and structure of the ECM remains difficult to recreate reproducibly in vitro and, consequently, the recreation of the TME is highly restricted .Decellularized ECM from malignant tissues is gaining attention in the field of organotypic modeling of tumor-stroma interactions by successfully incorporating key biochemical and biophysical characteristics of the native TME ,134,135.\u00ae, proprietary knowhow Fraunhofer IGB) [TM. Moreover, HCT-116 cells displayed a higher rate of cell invasion when cultured in mouse colon cancer decellularized matrices than in cells cultured in wild-type mouse colon scaffolds or MatrigelTM [Overall, decellularization protocols aim to eliminate all cell material while maintaining ECM architecture and biochemical components and are mainly a combination of physical, chemical and enzymatic methodologies . The remfer IGB) , is an ifer IGB) . AnothertrigelTM .Three-dimensional ECM-hydrogels from decellularized human normal and tumor colon tissues have already been prepared through lyophilization, powdering and solubilization techniques and were useful for showing that tumor ECM components induced faster growth of HT-29 cells and their shift toward a glycolytic metabolism . In an eDespite the undeniable utility and potential of these works that clearly show the impact of the matrix in CRC progression and metastasis, these approaches lack the native ECM architecture and mechanical properties that exhibit an active role in cell behavior ,19.In human CRC, strategies using decellularized ECM to study TME dynamics have been broadly focused on patient-derived scaffolds. Several protocols have been developed with the aim of efficiently removing cellular components from intestinal tissue while maintaining the architecture and biomechanical/biochemical features . These sIn this field, approaches that consider paired CRC and normal adjacent tissue benefit from the direct comparison of samples from the same individual and allow the consideration of the role of tumor versus normal ECM on various cancer-associated activities and interactions with other TME components. Nevertheless, studies with access to this type of exceptionally valuable sample have been mainly restricted to recellularization with only one type of cell and require a further complex to create a structure that most trustworthily resembles the TME. Beyond proteomic and structural characterization of the decellularized ECM, reports showed that tumor ECM modulates IL-8 expression by HT29 cells and thatFrom a different perspective, Pinto et al. implemenDecellularized tissues have also been applied in the study of the CRC metastatic process. D\u2019Angelo and colleagues created a model with decellularized normal and primary tumor CRC tissue, as well as matched CRC liver metastasis with the aim of recapitulating this specific microenvironment in vitro . 
This syOne of the major drawbacks of CRC patient-derived scaffolds is the limited amount of tumor tissue available from each individual, since it derives from biopsies or surgical resections, from which most tissue is required for further diagnostic molecular and histological characterization. Another question to keep in mind is that normal mucosa adjacent to the tumor, while often considered a healthy control from the same individual, in fact represents an intermediate state between normal and tumor tissues . DespiteThe recellularization of decellularized ECM also presents a few challenges, namely the choice of cell(s), the cells\u2019 distribution within the scaffold and the reproducibility of recellularization efficiencies, even in samples from the same patient . ConcernThese are relevant topics to be considered when establishing an organotypic 3D model for CRC cancer with decellularized tissues, as well as for previously determining if there was previous neoadjuvant therapy. Still, the possibilities of these kinds of systems to incorporate ECM, cancer, stromal and immune cells will allow the study of the dynamic and complex crosstalks between the different components and recapitulate more closely the TME and, eventually, design strategies with potential for predicting clinical outcomes.The future of cancer research relies on the implementation of translational 3D in vitro models that accurately mimic human tissues. Such models will foster an improved knowledge of cancer physiological and pathological processes, as well as facilitate drug discovery and screening. To scrutinize the molecular and cellular mechanisms in CRC, it is imperative that the approach to complex TME is recreated, namely the genetic, cell-to-cell, and cell\u2013ECM cues that instruct cancer development and progression. To move forward on the study of TME interactions, decellularized colorectal matrices are attractive bioactive scaffolds, as they may be repopulated with different cell types and submitted to several soluble factors, pharmacological agents and/or radiation therapy.We believe it is essential to standardize tissue-specific decellularization protocols according to tissue fragment size and to the intended final application. Additionally, a consensus on the methods to assess the decellularization efficiency, as well as the structural characterization of the decellularized ECM is also required. Until now, there is still limited information about effective long-term storage methodologies for these scaffolds, but some reports indicate that slow-freezing could provide an interesting solution ,157. We In conclusion, it is widely accepted that accurate 3D cell culture models that consider interactions between CRC cells\u2013TME\u2013ECM are required for understanding disease biology and developing more advanced therapies regarding precision cancer medicine."} +{"text": "Youth is characterized by testing and crossing natural boundaries, sometimes with the help of performance-enhancing substances. In this context, doping prevention measures play a crucial role to protect individuals both within and outside the context of elite sport. Based on the PRISMA guidelines, a systematic literature search was conducted in the databases ProQuest (ERIC), Scopus, PSYNDEX/PsychInfo, PubMed, and Web of Science Core Collection to provide an overview of the impact of doping prevention measures, with particular attention to the underlying understanding of learning. 
As a result of the screening process, 30 of the initial 5,591 articles met the previously defined and recorded eligibility criteria. The analysis led to heterogeneous results regarding content, implementation, target group, or outcome variables considered relevant. Two-thirds of the studies related to the competitive sports context. Nevertheless, there has been a growing interest in studying doping prevention and its effects on non-elite athlete target groups in recent years. In terms of effectiveness, many measures did not achieve long-term changes or did not collect any follow-up data. This contrasts with understanding learning as sustained change and reduces the intended long-term protection of prevention measures, especially for adolescent target groups. Even young age groups from 10 years upwards benefited from doping prevention measures, and almost all doping prevention measures enabled their participants to increase their physical and health literacy. No conclusion can be drawn as to whether doping prevention measures based on constructivist ideas are superior to cognitivist approaches or a combination of both. Nevertheless, programs that actively engage their participants appear superior to lecture-based knowledge transfer. Most of the prevention measures offered a benefit-orientation so that participants can achieve added value, besides trying to initiate health-promoting change through rejection. Because of the lack of sustained changes, a further modification in doping prevention seems necessary. The review results support the value of primary prevention. Doping prevention measures should enable tailored learning and development options in the sense of more meaningful differentiation to individual needs. The implementation in a school context or an online setting is promising and sees doping as a problem for society. The review highlights the importance of accompanying evaluation measures to identify efficient prevention components that promote health and protect young people. Doping prevention is a matter for society as a whole and not an exclusive concern of elite sport. This statement is the consequence of considering the desire for performance-enhancement as a societal phenomenon and acknowledging the association of athletic success and appearance with strength, competence, social ability, or beauty use of performance-enhancing substances in the sense of education . Furthermore, anti-doping agencies offer targeted doping prevention measures for schools offer a favorable opportunity because they appeal to young target groups regardless of their athletic performance. Unlike in elite sport, they do not need to involve personally addressed repressive information components such as punishments and suspensions. Taking Germany as an example, anti-doping education is not a compulsory part of the school's general physical education curriculum. It is only offered to students who wish to gain university entrance and have chosen physical education as an examination subject concerning their high school diploma .In addition to previous recommendations, further practical implications are derived from the results of this review.The systematic review was conducted following the PRISMA recommendations , Scopus, PSYNDEX/PsychInfo, PubMed, and Web of Science Core Collection databases. 
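Before the screening just described can start, records exported from the five databases have to be pooled and de-duplicated. The short Python sketch below is only an illustration of that bookkeeping step; the records, column names and screening flags are fabricated, and the review's actual procedure followed the PRISMA recommendations rather than this code.

```python
# Sketch: pool records from several database exports and tally a PRISMA-style flow.
# The records below are fabricated placeholders standing in for real exports.
import pandas as pd

exports = {
    "pubmed": pd.DataFrame({"title": ["Doping prevention in schools", "ATLAS revisited"],
                            "year": [2015, 2012]}),
    "scopus": pd.DataFrame({"title": ["Doping Prevention in Schools!", "A new e-learning tool"],
                            "year": [2015, 2019]}),
}
records = pd.concat(exports.values(), ignore_index=True)

# Crude de-duplication on normalised title + year (real reviews also check DOIs).
records["key"] = (records["title"].str.lower()
                  .str.replace(r"[^a-z0-9 ]", "", regex=True).str.strip()
                  + "_" + records["year"].astype(str))
deduped = records.drop_duplicates(subset="key")

# Eligibility screening (e.g., PICOS criteria) would set a boolean flag per record.
deduped = deduped.assign(meets_picos=[True, False, True])
included = deduped[deduped["meets_picos"]]

print(f"identified: {len(records)}, after de-duplication: {len(deduped)}, "
      f"included after screening: {len(included)}")
```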
The search strategy was based on a specification of the thematic content of relevant studies, the population studied, the label of the intervention, including a description of possible outcomes, and references relating to the studies' evaluation. The systematic approach and the search terms used are presented in The definition of the eligibility criteria was based on the PICOS approach to specifying the participants, interventions, comparisons, outcomes, and study designs to be considered in advance . The studies were mainly conducted in western societies and span a performance spectrum from international elite athletes . One of the notable features here is that only a few studies have previously conducted a power analysis to determine the optimal sample size or associated variables, such as nutritional supplements or diet. Less frequently, norms, behavioral control, or values were evaluated. Concerning self-reported doping behavior, it is noticeable that studies that surveyed both behavior and doping intentions and attitudes showed short-term but rarely long-term changes in doping intentions or attitudes . Programs that provided a personal benefit through the topics covered appeared to be more effective. Meanwhile, measures that focused purely on deterrence . Besides, a large part of the prevention measures also offered a benefit character. For example, the ATLAS and ATHENA programs imparted knowledge about healthy eating or efficient training Doping prevention measures should be scientifically monitored and evaluated in longitudinal or experimental designs. This approach implies the need for long-term research funding opportunities that also include consideration of follow-ups More international and transdisciplinary collaborative doping prevention networks composed of researchers and individuals from elite sport should be established In terms of tailored doping prevention measures, developers of anti-doping interventions should consider a modular system that offers participants opportunities for differentiation within an overarching theme. This differentiation should increase interest and enable more efficient learning.(d)In addition to self-reporting, alternative methods like implicit or indirect procedures should be used to consider the effects of doping prevention measures Online-based prevention interventions offer benefits of increased individualization but should be evaluated in terms of learning success.(f) Prevention measures should be integrated into school curricula at an early stage and with a positive connotation so that a constructive atmosphere can be created.(G) The perspective on clean athletes and their empowerment should be expanded can be found in the article/supplementary material.The author confirms being the sole contributor of this work and has approved it for publication.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "Emergency Department (ED) attendances with chest pain reduced during the COVID-19 lockdown. 
We performed a service evaluation project in NHS Lothian to explore how and why the COVID-19 pandemic and public health advice had affected chest pain presentations and help-seeking behaviour at an individual patient level using a qualitative interview approach.We carried out 28 semi-structured telephone interviews with a convenience sample of patients who presented with chest pain during lockdown and in patients with known coronary heart disease under the outpatient care of a cardiologist in April and May 2020. Interviews were audio recorded and voice files listened to while making detailed notes. Salient themes and issues were documented as verbatim extracts. Interviews were analysed thematically.Patient interviews revealed three main themes. 1) pandemic help-seeking behaviour; describing how participants made the decision to seek professional healthcare assessment. 2) COVID-19 exposure concerns; describing how the subthemes of perceived vulnerability, wishing to protect others and adding pressure to the health service shaped their decision making for an episode of acute chest pain. 3) hospital experience; describing the difference between the imagined and actual experience in hospital.Qualitative interviews revealed how the pandemic shaped help-seeking practices, how patients interpreted their personal vulnerability to the virus, and described patient experience of attending hospital for assessment during this time. As patient numbers presenting to hospital appeared to mirror public health messaging, dynamic monitoring of this messaging should evaluate public response to healthcare campaigns to ensure the net impact on health, pandemic and non-pandemic related, is optimised. Symptoms suggestive of acute coronary syndrome are one of the most common reasons for Emergency Department presentation . Reportsrd March 2020 with advice to \u2018stay home, protect our NHS, save lives\u2019. Patients admitted to acute medical units in NHS Lothian, Scotland, during the first 31 days after lockdown were of higher medical acuity and had a higher risk of inpatient mortality when compared to patients in the same period in the preceding 5 years [Scotland entered lockdown on 23 5 years . This suPrevious research on decision making in response to chest pain has revealed a complex series of actions. Patients perform a process of symptom interpretation and self-evaluation of coronary candidacy to assess personal risk , they ofInternal audit data from our own centre revealed the average weekly number of Emergency Department attendances with suspected acute coronary syndrome fell from 287/week between January and May 2019 to 233/week in 2020. The lowest number of attendances per week (128) was seen in the last week of March 2020 as lockdown was announced (unpublished data) . Google These data highlighted the need to explore how and why the COVID-19 pandemic and public health advice had affected chest pain presentations and help-seeking behaviour at an individual patient level using a qualitative interview approach.Single semi-structured telephone interviews were conducted with patients attending hospital for the assessment of suspected acute coronary syndrome between the 17 April 2020 and 08 May 2020. Participants were identified using an order request system for cardiac troponin which identifies all patients with suspected acute coronary syndrome in our centre and permits review of the electronic patient record . 
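As a quick check of the audit figures quoted above (average weekly suspected-ACS attendances falling from 287 to 233, with a trough of 128 in the last week of March 2020), the relative reductions work out roughly as follows; this is simple arithmetic on the reported numbers, not additional data.

```python
# Relative reduction in weekly suspected-ACS attendances (figures taken from the text).
baseline, lockdown_avg, trough = 287, 233, 128
print(f"average weekly reduction: {(baseline - lockdown_avg) / baseline:.1%}")  # ~18.8%
print(f"trough week vs baseline:  {(baseline - trough) / baseline:.1%}")        # ~55.4%
```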
ConveniThis project was reviewed by NHS Lothian Research and Development Office and the South East Scotland Research Ethics Service. These bodies advised the project was service evaluation therefore research ethics committee approval was not required. It was registered with and approved by the local cardiology Quality Improvement Team according to local practice.This study was guided by the principles of phenomenology aiming tThe project was conceptualised through discussion with patients admitted to a cardiology ward during the COVID-19 pandemic. Key ideas for interview questions were developed by consultation with the patient group.Interview participants were aged between 39 and 88 years and 54% were female. 14 participants had an admission troponin greater than the diagnostic threshold for myocardial infarction, 7 had an admission troponin less than the diagnostic threshold for myocardial infarction and 7 were under the care of a cardiologist as an outpatient.Telephone conversations revealed three main themes; pandemic shaping of help seeking behaviour, COVID exposure concern, and hospital experience. These are summarised in Table 1.This theme describes how participants made the decision to seek professional healthcare assessment. It is divided into subthemes describing a staged response to help seeking; 1) symptom appraisal, 2) consultation of lay members for advice, and 3) accessing professional healthcare assessment. Quotations illustrating these interpretations are given in Firstly, participants reported performing a symptom appraisal which could lead to symptoms being attributed to other causes, for example indigestion, due to the transient nature of their symptoms or lack of severity. Some participants had experienced myocardial infarction previously and performed a comparison between prior experience and their current symptoms. Based on their self assessment of symptoms, participants then progressed to the next stage.Persistence of symptoms triggered discussion with local neighbourhood networks and family to decide on the next course of action. It was noted that for the participants in this study, these networks often included a healthcare professional who happened to live closeby. The subsequent outcome of this discussion typically involved contacting a GP or the NHS out of hours service (NHS24) for preliminary assessment. No barriers to accessing these services were reported. Very few participants telephoned the Scottish Ambulance Service directly without additional assessment, with only a few self-presenting to a hospital Emergency Department.Some participants reported that gaining access to physical examination by a healthcare professional was limited. Consultations tended to be telephone based which were sometimes viewed as inadequate due to difficulty in describing symptoms over the telephone to a doctor with whom they did not have a relationship. Other participants did have access to face-to-face primary care appointments but similarly these were not viewed positively due to lack of physical examination.One patient described having three GP phone consultations and an out of hours appointment where she had blood pressure and oxygen saturations recorded but no ECG performed despite describing chest, arm and jaw pain. This patient subsequently self-presented to the Emergency Department with a confirmed acute myocardial infarction.Table 2.Participants were asked whether coronavirus, and the societal-response to it, had affected their decision making to attend hospital. 
Participants differed in their response with some stating their decision making was completely unaffected. Some participants were very concerned about presentation to hospital due to COVID-19, however, it did not stop them attending. Concerns could largely be attributed to three subthemes; perceived vulnerability to the virus, wishing to protect others, and avoidance of adding pressure to busy health services. These will be considered separately with quotations used to illustrate interpretations given in Participants spoke about how they believed their personal vulnerability to the virus and access to treatment would be influenced by their increased age. Some participants believed attending hospital would increase their exposure to coronavirus and discussed the possible repercussions of this with reduced access to ventilators with increased age. One participant felt at increased risk of contracting COVID-19 after sharing a hospital room with three elderly patients. He cited media reports that elderly people were more at risk of severe COVID-19 disease and interpreted that to mean a greater risk of contracting COVID-19 by sharing a room with elderly people.Participants also expressed their vulnerability when talking about pre-existing health conditions. One participant felt vulnerable to COVID-19 due to an impaired immune system and a previous myocardial infarction. A family decision was made to limit her potential exposure to coronavirus from her daughter, a nurse, by leaving the family home to live in temporary accommodation. Whilst this participant did attend hospital for assessment, she stated that she was frightened. Another patient who attended hospital with myocardial infarction stated that suffering an acute cardiac event made him more vulnerable to coronavirus and he now considered himself to be in a more vulnerable category.A further participant had two ED chest pain presentations during lockdown. Initially she was assessed and discharged then represented two weeks later with acute myocardial infarction. She knew attending was the most appropriate course of action but described an internal conversation aiming to weigh up the risks of exposing herself to the virus balanced against the risk of not seeking assessment for chest pain.Protecting others was another consideration for participants. One participant actively wanted to attend hospital to access a test for COVID-19 so she knew she was not putting her carers at risk. Other examples included considering the exposure risk to grandchildren at home, and inadvertently transmitting the virus from the hospital environment to a vulnerable adult in the community by choosing to attend hospital.Some participants explicitly stated they were not concerned about adding pressure to health services by attending the ED. They described feeling so unwell they knew they had to attend hospital. Others had learnt that Emergency Departments were quiet through discussions with their GP or local networks which included health professionals. Participants also stated they knew that hospitals were fully open.For others, media images such as those being reported in Italy were a factor in their reluctance to attend hospital. Daily news reports detailing the number of new cases and deaths, the building of new emergency hospitals with large capacity and images of staff wearing protective suits all contributed to the message of \u2018Stay Away\u2019 at the beginning of the pandemic. 
Participants stated how their perception of this message had changed over time. Publicity about decreased hospital demand was cited as a reason why some participants who would have been reluctant to use services in the beginning were now less concerned about doing so.Table 3.Participants reported a much more positive hospital experience than they had anticipated. They stated the ED assessment areas were quiet and that they were seen quickly. Some were not aware that patients with COVID-19 symptoms entered the hospital and were assessed through a different point of access. Once in hospital they could see they were separated from suspected COVID-19 patients. Some participants were informed by their GP that this would be the case, others said they assumed the NHS would take this action. Quotations used to illustrate these interpretations are given in Many participants reported feeling safe while in hospital due to regular changing of personal protective equipment and hand washing by staff, in addition to highly visible cleaning taking place.One participant commented that nurses in the hospital ward were not social distancing. It was also mentioned that not being able to have anyone accompany you to the ED or visit you in hospital made an already worrying time even more difficult.Fig 1) suggesting that these may have impacted help seeking behaviour for chest pain during the early stages of the COVID-19 pandemic.Key public health strategies were targeted at decreasing community transmission of SARS-CoV-2 included hand washing, social distancing and self-isolation. Mass media campaigns have previously been shown to elicit potentially beneficial behaviour change in response to the SARS and H1N1 epidemics regarding hand washing and social distancing . GovernmThe majority of participants first sought an assessment of chest pain through primary care services. Reluctance to use the emergency services has been seen prior to the COVID-19 pandemic due to concern about appropriate use of the NHS and resources , 11. It As the actual hospital experience was often very different to the imagined experience, and usually positive, it may be useful for future media and government message campaigns to outline clearly a step by step mechanism by which people can access emergency services and to clearly describe safety measures adopted in Emergency Departments to minimise risk for patients that need to attend hospital urgently during future pandemic events. Commercial sectors of society, for example supermarkets, have done this with television campaigns. A \u2018Ways we are keeping you safe\u2019 campaign highlighting that hospitals have taken steps to create separate emergency assessment areas into COVID-19-free zones may make patients feel more comfortable attending the hospital.Patient concerns regarding \u2018vulnerability to the virus\u2019 has emerged as an important discourse during the pandemic possibly due to a lack of clarity on which categories of patients were and were not included in government-defined vulnerable groups and the coverage of this topic in the media. For example, a government spokesperson gave confusing messages regarding the inclusion of people over 70 years of age in the vulnerable category . 
ParticiWhile we aimed to capture the experience of patients who chose not to come to hospital with symptoms of chest pain by targeting those with known coronary heart disease from a community setting, none of the participants in the sample had experienced chest pain during the study period for which they would have normally sought hospital assessment. However, this service evaluation project has revealed valuable insights into how the decision to attend hospital was shaped by the pandemic. The first interviews were carried out at or just after the time of the release of a public health messaging campaign to promote attendance to the ED for urgent conditions. While some interviews included participants who had experienced symptoms two weeks earlier, perceived decision making may have changed in response to evolving media and news campaign. We did not explore factors influencing help seeking behaviour and hospital attendance during the early media campaigns advising patients to stay at home and protect the NHS.Future media and public health campaigns associated with subsequent waves of COVID-19 infection should seek to strike a balance between appropriate care-seeking and avoidance behaviour. Such campaigns should be designed to include dynamic monitoring of the public response to healthcare messaging in a way that permits rapid adjustment to ensure that the net impact on health, pandemic and non-pandemic related, is optimised.S1 Appendix(DOCX)Click here for additional data file."} +{"text": "Muscle and bone interactions might be associated with osteoporosis and sarcopenia. Urinary pentosidine and serum 25-hydroxyvitamin D (25(OH)D) might affect muscle and bone interactions. It is unclear whether these biomarkers are affected by age and sex or play a role in muscle and physical functions. We aimed to investigate the association between urinary pentosidine and serum 25(OH)D levels with muscle mass, muscle strength, and physical performance in community-dwelling adults.Two-hundred and fifty-four middle-aged and elderly adults were enrolled. There was no significant difference in age between 97 men (75.0\u2009\u00b1\u20098.9\u2009years) and 157 women (73.6\u2009\u00b1\u20098.1\u2009years). The skeletal muscle mass index (SMI), grip strength, and gait speed were assessed. The urinary pentosidine level was measured. We evaluated the association of urinary pentosidine and serum 25(OH)D levels with age and sex (student\u2019s t-test) and correlations between biomarker and each variable (Pearson\u2019s correlation coefficients). Multiple regression analysis was performed with grip strength and gait speed as dependent variables and with age, height, weight, body mass index (BMI), speed of sound (SOS), SMI, glycated hemoglobin (HbA1c), estimated glomerular filtration rate (eGFR), 25(OH)D, and pentosidine as independent variables using the stepwise method.The urinary pentosidine level was negatively correlated with grip strength, gait speed, eGFR, and insulin-like growth factor-1 (IGF-1) in men and with SOS, grip strength, and gait speed in women. The serum 25(OH)D level was positively correlated with IGF-1 in women and grip strength in men. Grip strength was associated with age, height, and pentosidine in men and height and pentosidine in women. Gait speed was associated with age, BMI, and pentosidine in men and age, height, and pentosidine in women.Urinary pentosidine levels are significantly associated with grip strength and gait speed and may serve as a biomarker of muscle and bone interactions. 
In recent years, there has been a considerable focus on the relationship between osteoporosis and sarcopenia. In previous studies, osteoporosis patients with fragility fractures were found to have a high prevalence of sarcopenia , 2. MuscPentosidine is a representative cross-linked structure of advanced glycation end products (AGEs), which are induced by the oxidation of bone collagen crosslinks . PentosiErgocalciferol and cholecalciferol (D3) are incorporated into the diet; vitamin D3 is also synthesized in the skin and undergoes hydroxylation in the liver to become 25-hydroxyvitamin D (25(OH)D). 25(OH)D is stable in the blood, and its level has recently been reported to be useful for assessing vitamin D sufficiency . 25(OH)DWe found that insulin-like growth factor-1 (IGF-1) is an important biomarker not only of muscle tissues but also of bones in community-dwelling middle-aged and elderly adults . We simiThis cross-sectional study used data of participants enrolled in the Good Aging and Intervention Against Nursing Care and Activity Decline (GAINA) study in the town of Hino, Tottori Prefecture, Japan \u201318. A toBlood samples were taken before the assessment of body structure and physical function parameters. Serum creatinine and glycated hemoglobin (HbA1c) levels were measured. The estimated glomerular filtration rate (eGFR) was calculated using the following formula:The serum IGF-1 and parathyroid hormone (PTH) levels were measured using a radioimmunoassay and an electrochemiluminescence immunoassay (ECLIA) kit , respectively. Serum 25(OH)D levels were measured using an ECLIA kit . The limit of quantification is the lowest analyte level that can be reproducibly measured with an intermediate precision coefficient of variance of \u226420%.The urinary pentosidine levels were measured using an enzyme-linked immunosorbent assay (ELISA) kit . The ELISA kit consisted of polyclonal anti-pentosidine IgG and a secondary antibody, and the accuracy, precision, and reliability of this kit were evaluated. In brief, the limit of blank and the limit of detection were 4.25 and 6.24\u2009pmol/mL, respectively. The intra-assay and inter-assay coefficients of variation were\u2009<\u20095%. The spiking and dilution recoveries were 101.4 and 100.5%, respectively. An analysis of cross-reactivity against seven compounds representative of AGEs and with structures close to pentosidine revealed no significant cross-reactivity. The comparability between the values obtained from high-performance liquid chromatography (HPLC) and ELISA (in the same urine samples) was r\u2009=\u20090.815 . Serum cp-value of <\u20090.05 was considered statistically significant.We assessed the association of urinary pentosidine and serum 25(OH)D levels with age and sexes using the student\u2019s t-test and correlations between biomarker and each variable on Pearson\u2019s correlation coefficients . Finally, multiple regression analysis was performed with grip strength and gait speed as dependent variables and with age, height, weight, body mass index (BMI), SOS, SMI, HbA1c, eGFR, 25(OH)D, and pentosidine as independent variables in men and women using the stepwise method. We judged multicollinearity using a variance inflation factor. All statistical analyses were performed by SPSS statistical software . A p\u2009=\u20090.198). BMI was significantly higher in men (23.0\u2009\u00b1\u20092.5\u2009kg/m2) than in women (22.0\u2009\u00b1\u20093.1\u2009kg/m2) (p\u2009=\u20090.014). 
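The analysis plan described above (Pearson correlations, then stepwise multiple regression for grip strength and gait speed with a variance-inflation-factor check) can be illustrated with an equivalent workflow in Python rather than SPSS. In the sketch below the predictor names mirror the text, but the data frame, the p-value-based backward-elimination rule and the VIF cut-off of 10 are assumptions for illustration only, not the study's exact procedure.

```python
# Sketch: Pearson correlations, VIF screening and a simple backward-stepwise
# multiple regression for grip strength. Data and thresholds are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

predictors = ["age", "height", "weight", "BMI", "SOS", "SMI",
              "HbA1c", "eGFR", "vitD_25OH", "pentosidine"]

def backward_stepwise(df: pd.DataFrame, outcome: str, alpha: float = 0.05):
    """Drop the least significant predictor until all remaining p-values are < alpha."""
    kept = list(predictors)
    while kept:
        X = sm.add_constant(df[kept])
        fit = sm.OLS(df[outcome], X).fit()
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return fit
        kept.remove(worst)
    return None

def vif_table(df: pd.DataFrame, cols) -> pd.Series:
    """Variance inflation factors; values above ~10 would flag multicollinearity."""
    X = sm.add_constant(df[cols]).values
    return pd.Series(
        [variance_inflation_factor(X, i + 1) for i in range(len(cols))], index=cols
    )

# Hypothetical usage on a per-sex data frame:
# df = pd.read_csv("gaina_men.csv")
# print(df[predictors + ["grip_strength"]].corr(method="pearson"))
# print(vif_table(df, predictors))
# print(backward_stepwise(df, "grip_strength").summary())
```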
The right calcaneal SOS (1503.6\u2009\u00b1\u200928.3 vs 1483.4\u2009\u00b1\u200920.2\u2009m/s), SMI (7.5\u2009\u00b1\u20090.8 vs 6.1\u2009\u00b1\u20090.8\u2009kg/m2), and grip strength (34.9\u2009\u00b1\u20097.4 vs 23.0\u2009\u00b1\u20094.5\u2009kg) were significantly higher in men than in women (p\u2009<\u20090.001). Gait speed and IGF-1 were not significantly different between sex [There was no significant difference in the mean age between men (75.0\u2009\u00b1\u20098.9\u2009years) and women (73.6\u2009\u00b1\u20098.1\u2009years) (p\u2009<\u20090.001), whereas urinary pentosidine levels were significantly higher in women than in men (p\u2009=\u20090.010). Serum 25(OH)D and urinary pentosidine levels are shown in scatter plots by age in Fig.\u00a0r\u2009=\u2009\u2212\u20090.116, p\u2009=\u20090.258) or women . On the other hand, pentosidine was positively correlated with age in both men and women .The results of the statistical comparison of serum eGFR, HbA1c, PTH, and 25(OH)D and urinary pentosidine levels between men and women are shown in Table\u00a0Pearson\u2019s correlation coefficients are shown in Table\u00a0The purpose of this study was to investigate the association between urinary pentosidine and serum 25(OH)D levels with muscle mass, muscle strength, and physical performance in community-dwelling adults. Serum 25(OH)D levels were significantly higher in men than in women; one of the reasons for this sex difference may be general inactivity and lower intake of vitamin D from daily food among Japanese elderly women compared with men .Urinary pentosidine levels are generally measured by HPLC, but this approach cannot be adapted to analyze many clinical samples, and it is also a time-consuming process. Furthermore, the detection of pentosidine using a reported ELISA kit and an HPLC system requires heat pretreatment, which generates artificial pentosidine, leading to overestimation. A novel pentosidine ELISA system that does not require sample pretreatment for analyzing urine samples has been developed . In thisWe found that urinary pentosidine levels were significantly higher in women than in men. There are few studies on the association between urinary pentosidine level and sex in community-dwelling adults; nevertheless, one of the reasons for this difference is that AGEs may be associated with low physical function in elderly women. We found that pentosidine was significantly associated with grip strength and gait speed in men and women. In recent years, the relationship between AGEs and sarcopenia has attracted much attention. However, there has been no report showing the relationship between urinary pentosidine level and specific parameters of physical function.Independent of age and eGFR , the uriOur study is limited by its relatively small sample size, cross-sectional study design, and recruitment of more women than men. The strength of this study is that it is the first to evaluate the association of urinary pentosidine levels with physical function simultaneously. In the future, it will be necessary to conduct longitudinal studies to confirm the findings of this study. These studies may eventually help elucidate the relationship between osteoporosis and sarcopenia.In this study, the urinary pentosidine level was significantly associated with grip strength and gait speed. 
We consider that urinary pentosidine level may serve as a biomarker affecting muscle and bone interactions in clinical practice."} +{"text": "Late-life depression is a major mental health problem and constitutes a heavy public health burden. Frailty, an aging-related syndrome, is reciprocally related to depressive symptoms. This study investigated the associations of physical frailty and oral frailty with depression in older adults. This large-scale cross-sectional study included 1100 community-dwelling older adults in Taiwan. The participants completed a dental examination and questionnaires answered during personal interviews. The 15-item Geriatric Depression Scale was used to assess depression, and information on physical conditions and oral conditions was collected. Multivariable logistical regression analysis was conducted to examine associations of interest. Significant factors associated with depression were pre-physical frailty (adjusted odds ratio (aOR) = 3.61), physical frailty (aOR = 53.74), sarcopenia (aOR = 4.25), insomnia (aOR = 2.56), pre-oral frailty (aOR = 2.56), oral frailty (aOR = 4.89), dysphagia (aOR = 2.85), and xerostomia (aOR = 1.10). Depression exerted a combined effect on physical frailty and oral frailty (aOR = 36.81). Physical frailty and oral frailty were significantly associated with late-life depression in community-dwelling older adults in a dose\u2013response manner. Developing physical and oral function interventions to prevent depression among older adults is essential. The aging population is growing worldwide. At present, 16% of Taiwanese are older than 65 years. Ten years from now, older adults are projected to constitute one-fourth of Taiwan\u2019s population . As of 2The late-life depression is part of frailty. Frailty is caused by life-course determinants and disease(s), accumulation of physical, psychological and/or social deficits in functioning, increasing the risk of negative health outcomes such as disabilities, admission to health care facilities, and death, is increasingly common in older adults . Jung etDepression has been found to be strongly associated with oral function ,12 and xStudies have demonstrated an association between frailty and depression ,20. AssoThis large-scale cross-sectional study, which was conducted from May 2018 to January 2019, included community-dwelling adults in Taiwan aged \u226565 years. Stratified cluster sampling was performed, with seniors\u2019 recreation centers selected randomly according to their location in urban, rural, and mountainous areas. Individuals were excluded if they had mental disorders or expressive language disorders, as indicated by the possession of integrated circuit card for severe illness; mild cognitive impairment or dementia, as determined using the Short Portable Mental Status Questionnaire (SPMSQ) ; and higData were collected using a structured questionnaire developed by Lu et al. . The queDental examinations were performed by seven dentists based on the World Health Organization criteria . RegardiThe GDS-15 is a self-reported measure of late-life depression in older adults. The 15 items were selected from the Long Form Geriatric Depression Scale because of their high correlation with depressive symptoms. Users respond in a \u201cYes/No\u201d format. We administered the Chinese version of the GDS-15 . Scores Physical conditions included insomnia, physical frailty, sarcopenia, and comorbidities. 
All variable data were collected during the study and are listed as follows.InsomniaInsomnia was measured using the Chinese version of the PSQI, a self-rated questionnaire that assesses sleep quality and sleep disturbances. The Chinese version has been validated and determined to have adequate reliability . The 19 Physical frailtyThe SOF criteria are regarded as being as effective as the frailty criteria for predicting adverse health outcomes but are easier to apply . The SOFSarcopeniaSARC-F has high specificity (94\u201399%) and is a suitable tool for community screening for sarcopenia . The fivComorbiditiesThis variable was identified by self-reports of more than two chronic diseases, such as hypertension, diabetes, heart disease, and chronic obstructive pulmonary disease.Oral condition was examined across six components, namely dysphagia, xerostomia, masticatory performance, the EI , oral diadochokinetic rate, and the Silness\u2013L\u00f6e Plaque Index . For each item, a score of 1 was defined as meeting the targeted measures as the presence of dysphagia, xerostomia, poor masticatory performance, EI categories B3\u2013C3, oral diadochokinetic rate (indicated by failure to pronounce the \u201cta\u201d monosyllable more than six times per second), and a Silness\u2013L\u00f6e Plaque Index score >0.95. Total scores of 0, 1\u20132, and \u22653 points corresponded to non-oral frailty, pre-oral frailty, and oral frailty, respectively. The six components were investigated as follows.DysphagiaDysphagia was assessed through rapid screening by using the Ohkuma questionnaire, which contains 15 items . ExampleXerostomiaA condensed version of the Xerostomia Inventory was administered to identify and classify mouth dryness . The folMasticatory performanceMasticatory performance was evaluated using color-changing chewing gum . This chewing gum contains xylitol, citric acid, and red, yellow, and blue dyes that change color when subjected to masticatory forces from chewing. The red dye is pH sensitive and changes color under neutral or alkaline conditions. Citric acid maintains the internal pH of the initially yellowish-green gum at a low level before chewing commences. As chewing progresses, the yellow and blue dyes seep into the saliva, and the release of citric acid causes the gum to turn red . ParticiOcclusal supportOcclusal support was evaluated using the EI , which iOral diadochokinesis rateOral diadochokinesis was assessed by examining articulatory oral motor skills at sites such as the lips, tip of the tongue, and the dorsum of the tongue . The parOral hygieneMeasurement of the state of oral hygiene by using the Silness\u2013L\u00f6e Plaque Index is based on the examination of both soft debris and mineralized deposits on the number 12, 16, 24, 32, and 44 teeth. Each of the four surfaces of the teeth is given a score from 0 to 3.Data were collected through face-to-face interviews, which were administered by well-trained interviewers in compliance with a standard protocol. The collection process comprised three steps lasting an hour total. First, a dentist performed the dental examination and recorded the dental status. Second, an interviewer administered the structured questionnaire. The entire interview process took approximately 30 to 45 min. 
Finally, the research personnel collected masticatory performance data, recorded the monosyllable pronunciation data, and physical function as complete five consecutive chair rises, walking 5 m, climbing a flight of 10 stairs, lifting and carrying 4.5 kg.2 test were conducted to assess the relationship between the factors and outcomes. Variables exhibiting statistically significant associations in univariate analysis were included in the multivariate analysis. The adjusted odds ratio (aOR) and 95% confidence interval (CI) obtained through the exponentiation of the corresponding regression coefficient were employed in evaluating the association between the study variables and depression status after the effects of potential confounders were adjusted for. Multivariable logistical regression models were used to evaluate the combined effects of physical frailty and oral frailty on depression status.The participants\u2019 depression status was categorized as late-life depression or non-late-life depression. The data are expressed as means and standard deviations or as frequencies and percentages, and the two-sample t test and \u03c7p < 0.001). Of the older adults aged \u226575 years, 62.3% had an occlusal condition in the B3\u2013C3 category, and the mean of the Silness\u2013L\u00f6e Plaque Index was 1.04. Significantly higher proportions of older adults aged \u226575 years failed to pronounce the \u201cpa\u201d \u201cta\u201d and \u201cka\u201d monosyllables six or more times per second compared with those in the \u226474 years .The characteristics of the participants, categorized according to age, are presented in p for trend <0.001).The results confirm the premise that physical frailty and oral frailty are associated with late-life depression. Notably, physical frailty, sarcopenia, insomnia, oral frailty, dysphagia, and xerostomia exerted significant combined effects in the participants with all these conditions.Older adults with physical frailty, sarcopenia, and insomnia were more likely to have late-life depression than were older adults without all these conditions. The rates of physical frailty and sarcopenia in the participants aged \u226575 years were three and nine times the corresponding rates in the participants aged \u226474 years. Sarcopenia and depressive symptoms are associated with frailty reportedPre-oral frailty and oral frailty were associated with late-life depression in a dose\u2013response manner. Oral frailty was twice as prevalent among the older adults aged 75 and older compared with among those aged 65 to 74 years, indicating that oral function declines with age. This was also found in previous research. The researchers found a turning point in oral health from the age of 75 . Eating We observed the participants with both dysphagia and xerostomia were more likely to have late-life depression. Polypharmacy is common among older adults. Medication-induced xerostomia was higher with polypharmacy, most notably in those taking antidepressants . In the Examination of combined effects revealed that the participants with both physical and oral conditions such as physical frailty and dysphagia, or physical frailty and xerostomia, are at increased risk of developing late-life depression. Azzolino et al. identifiThis study has some limitations. First, because the sample comprised community-dwelling older adults from southern Taiwan, the findings may not be generalizable to all older adults. 
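The scoring and modelling steps described above lend themselves to a compact illustration: the six oral components are summed into an oral-frailty category (0 = none, 1–2 = pre-oral frailty, ≥3 = oral frailty), and depression status is then modelled with multivariable logistic regression whose exponentiated coefficients give adjusted odds ratios with 95% CIs. The Python sketch below assumes a hypothetical analysis data frame with the listed binary indicators; it mirrors the described approach rather than reproducing the study's SPSS models.

```python
# Sketch: oral-frailty composite score and adjusted odds ratios for depression.
# The column names and the data frame itself are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

ORAL_ITEMS = ["dysphagia", "xerostomia", "poor_mastication",
              "low_occlusal_support", "slow_diadochokinesis", "poor_oral_hygiene"]

def oral_frailty_category(df: pd.DataFrame) -> pd.Series:
    """Sum six binary components; 0 -> none, 1-2 -> pre-oral frailty, >=3 -> oral frailty."""
    total = df[ORAL_ITEMS].sum(axis=1)
    return pd.cut(total, bins=[-1, 0, 2, 6],
                  labels=["none", "pre_oral_frailty", "oral_frailty"])

def adjusted_odds_ratios(df: pd.DataFrame, outcome: str, covariates: list) -> pd.DataFrame:
    """Fit a multivariable logistic model; exponentiate coefficients for aORs and 95% CIs."""
    X = sm.add_constant(df[covariates].astype(float))
    fit = sm.Logit(df[outcome].astype(float), X).fit(disp=False)
    ci = fit.conf_int()
    out = pd.DataFrame({"aOR": np.exp(fit.params),
                        "CI_low": np.exp(ci[0]),
                        "CI_high": np.exp(ci[1])})
    return out.drop(index="const")

# Hypothetical usage:
# df["oral_frailty_cat"] = oral_frailty_category(df)
# dummies = pd.get_dummies(df["oral_frailty_cat"], prefix="oral", drop_first=True)
# model_df = pd.concat([df, dummies], axis=1)
# print(adjusted_odds_ratios(model_df, "depression",
#                            ["age", "sex", "physical_frailty", "sarcopenia", "insomnia",
#                             "oral_pre_oral_frailty", "oral_oral_frailty"]))
```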
However, the study methodology can be extended to investigations of community-dwelling older adults\u2019 physical and oral function. Second, we did not evaluate lips and tongue muscle strength, which is related to swallowing motion. However, oral diadochokinetic can be applied to evaluate oral motor skills at sites such as the lips and the tip and dorsum of the tongue in community-dwelling older adults. Third, we used subjective Xerostomia Inventory to identify and classify mouth dryness rather than objective instruments to measure xerostomia. However, the scale is a valid measure for discriminative use in clinical and epidemiologic research . Fourth,Physical frailty and oral frailty were significantly associated with late-life depression in community-dwelling older adults in a dose-response manner. Physical frailty, sarcopenia, insomnia, oral frailty, dysphagia, and xerostomia exerted significant combined effects on late-life depression. The findings suggest that developing early intervention strategies is integral to prevent frailty among older adults, which can in turn reduce the likelihood of late-life depression onset among them."} +{"text": "The current research aims to aid policymakers and healthcare service providers in estimating expected long-term costs of medical treatment, particularly for chronic conditions characterized by disease transition. The study comprised two phases , in which we developed linear optimization-based mathematical frameworks to ascertain the expected long-term treatment cost per patient considering the integration of various related dimensions such as the progression of the medical condition, the accuracy of medical treatment, treatment decisions at respective severity levels of the medical condition, and randomized/deterministic policies. At the qualitative research stage, we conducted the data collection and validation of various cogent hypotheses acting as inputs to the prescriptive modeling stage. We relied on data collected from 115 different cardio-vascular clinicians to understand the nuances of disease transition and related medical dimensions. The framework developed was implemented in the context of a multi-specialty hospital chain headquartered in the capital city of a state in Eastern India, the results of which have led to some interesting insights. For instance, at the prescriptive modeling stage, though one of our contributions related to the development of a novel medical decision-making framework, we illustrated that the randomized versus deterministic policy seemed more cost-competitive. We also identified that the expected treatment cost was most sensitive to variations in steady-state probability at the \u201cmajor\u201d as opposed to the \u201csevere\u201d stage of a medical condition, even though the steady-state probability of the \u201csevere\u201d state was less than that of the \u201cmajor\u201d state. According to the world health organization (WHO), the work on healthcare costing and accompanying efficiencies explores questions around the usage of healthcare resources, particularly in the public health sector WHO, . StrategGlobal healthcare spending has been spiraling since the early 2000s, as per a report released by the WHO in 2019 , there are important accompanying downsides, both at the macro and micro levels. At the macro level, healthcare spending has been accompanied by a significant increase in catastrophic and out-of-pocket spending (OOPS) as a share of total health expenditure in the past decade . 
There would be a certain level of treatment decision in each of these states depending on the extent of medical, medicinal, and surgical interventions. In such transitions, the expected costs, therefore, likely depend on whether treatment decisions are deterministic or randomized in nature. A deterministic decision reflects when the healthcare service provider prescribes an appropriate level of treatment depending on past experiences in that a certain expected cost of medical treatment would result.On the other hand, a randomized policy pertains to an attempt by the medical service provider so that certain acceptable treatment decisions can be mapped to individual states of the medical condition to minimize the expected treatment costs. Further, long-term steady-state probabilities , would significantly impact the expected costs of treatment under both deterministic and randomized policies. Furthermore, data related to the accuracy of diagnosis can also impact the long-term steady-state probabilities and, therefore, treatment costs , we employed the theoretical construct of thematic analysis to identify and classify major cardiovascular condition states. The clinicians for this classification were 115 retired cardiovascular specialists and surgeons . Using the clinicians\u2019 expert inputs, we also obtained a sense of the broad frequencies of each of these states, transitions, and accuracy of diagnoses. We further conducted statistical validations to eliminate outliers to ensure that the data from the remaining clinicians were not statistically different. Following this, in the second stage of our research , using the linear programming (LP)-based exact-method technique, we modeled the frameworks of the minimization of expected medical costs of treatment under both deterministic and randomized policies. Using the data-related costs, transition probabilities, and so on, we illustrated the workings of the models developed. Figure\u00a0From Fig.\u00a0The remainder of the paper is organized as follows: Sect.\u00a0The two primary research streams around which our research revolves are a) the cost and economic modeling of healthcare using analytical and statistical methods; b) the cost-dominant medical decision-making in healthcare. We now present the extant research literature related to both streams.Using the Bayesian Markov-chain Monte Carlo simulation method, Cooper et al. devised In a study related to healthcare budgets and the decision rules of cost-effective healthcare providers, Baal et al. convergeMost current studies, including those by Lin et al. , Morid eAlthough de Gues et al. deployedPrior to developing the prescriptive framework based on linear optimization models, we collected real-life and empirically validated data on cardiovascular conditions from retired medical professionals (experts) who had experience diagnosing and treating such conditions. In particular, data on states of the said medical condition, transitions, and treatment accuracy were systematically collected during primary data collection. Further, we relied on robust statistical validation, thereby eliminating outliers and attempting to ensure that the data did not remain statistically dissimilar. All these nuances related to data and hypothesis formalized at the qualitative stage stage 1) were key to establishing and validating the prescriptive model developed in stage 2. 
Therefore, an important contribution associated with our study is that, instead of evolving a prescriptive model characterized only by theoretical underpinnings, our model is grounded in real-life data backed further by strong statistical validation. Table were keyMost current studies, including those by Cooper et al. , Lin et This section presents the research gaps addressed in the current study. Though studies such as Beaulieu and Bentahar , Lin et Practicality dictates that the most important determinants of research methodology are often formulated research questions over video calls (primarily due to the ongoing COVID-19 pandemic). The objective of such interviews as a primary method of data collection is anchored in the fact that such a method enables (1) insightful discussions with experienced medical professionals, thus obtaining richness in primary first-hand data; (2) the uncovering of newer knowledge through allowing clinicians to express their ideas freely; (3) the privacy of clinicians who are not willing to share personal experiences in front of peers; and (4) two-way communication between interviewer and interviewee. Following this, the use of thematic analysis helped identify, analyze, and report themes within the data. Further, thematic analysis has shown itself to be flexible and tangible, particularly in the context of qualitative data Are there any statistical differences in the transitions of different states of medical conditions?; and (c) How can we work with the means of different parameters that are statistically similar by filtering out the outliers?At the end of the information-gathering phase, it was broadly expected that the major states of the medical condition along with its accompanying transitions would be finalized. These thematic inputs then acted as an input to collect the pertinent objective data, followed by rigorous statistical testing and validation to ensure that elemental relationships were identified and verified. In particular, we aimed to substantiate a few important questions: (a) Once the rigorous statistical validation was conducted, we moved to the second stage of the work , wherein we conducted detailed modeling of deterministic and randomized policies, as detailed in Sect.\u00a0The indices, parameters, and decision variables of the model are presented in Table A(mt) requires the corresponding treatment. The treatment would be appropriate only if the corresponding treatment is T(mt). This means that for the specific state A(mt), only T(mt) would be appropriate, and no other treatment decisions such as T(m1), T(m2), \u2026 T(mt-1), \u2026, T(mt+1),\u2026. T(mT) would be acceptable. If A(mt) is mapped to T(mt), then the treatment decision would be termed \u2018true treatment.\u2019 Otherwise, the treatment would be termed \u2018false treatment.\u2019 Table A(mt) and T(mt), the treatment is true, and the corresponding probability of treatment corresponding to medical condition A(mt) is p{ A(mt), T(mt)}.Suppose a patient suffering from a specific medical condition, state A(m1) denotes a \u201cminor condition\u201d. A(m1) can remain at this state itself with a probability of p{A(m1)}. A(m1) can further deteriorate to lower states of the medical condition with individual probabilities. 
For instance, state A(m1) can deteriorate to a lower state A(m2) with a mean transition probability of Another dimension of medical treatment is related to the Markovian property in that the state of a specific medical condition can evolve to other severe states of the specific medical condition. For example, an individual with a specific \u201cminor\u201d cardiovascular condition can remain in this same condition with a probability of 1/2 or can deteriorate to a \u201cmajor\u201d and \u201csevere\u201d conditions over a period of time with probabilities of 3/8 or 1/8, respectively. This means that, on average, there is a 50% likelihood of the patient remaining steady at the same \u201cminor\u201d condition. The patient\u2019s \u201cminor\u201d condition would deteriorate to \u201cmajor\u201d and \u201csevere\u201d with a 37.5% and 12.5% likelihood, respectively. Referring to Table A(m1), A(m2), A(.), A(mt), A(mT), A(..), A(mT) are all possible exhaustive states of a medical condition such that A(m1)\u2009>\u2009\u2009>\u2009A(m2)\u2009>\u2009\u2009>\u2009A(.)\u2009>\u2009\u2009>\u2009A(mt)\u2009>\u2009\u2009>\u2009A(mT)\u2009>\u2009\u2009>\u2009A(..)\u2009>\u2009\u2009>\u2009A(mT) with\u2009>\u2009\u2009>\u2009depicting progressively degenerate states, then the sum of the corresponding mean transition probabilities corresponding to all the rows would be equal to 1. For instance, within the medical condition transition matrix If A(m2) cannot transition back to better prior states. Therefore, the values of such transition probabilities would be equal to zero.Further, a particular degenerate state, for instance, Similarly, for the remainder of the row, the following mathematical expressions can be used:Equations\u00a0\u20135) ensu ensu5) eA(mT), will not improve from any previous state and will remain in this state only with a probability of 1. Therefore, the value of p{A(mT)} would be equal to 1. The interplay of likelihoods of states of a medical condition and corresponding medical treatments and transition probabilities would have a combined effect in that the resulting probabilities would be indicative of uncertainties associated with a specific state of medical condition/corresponding treatment while at the same time taking into account deterioration to a degenerate state of a medical condition. This means that the effective transition probability would be a key input in subsequent modeling.Notably, the last state, When we consider the mean effective transition probability, we can ascertain the elements of the effective transition probability using the independent property of the two probability element sets depicted in Table A(m1)\u2014is expressed as x{A(m1)} and can be determined using the following mathematical expression:The effective transition probability can be represented using two types. The first is a transition within the same state , and the second is a transition from one state to another . The effective transition probability in the same state of a medical condition\u2014for instance, related to state A(m1) transitions to A(mT), is expressed as The effective transition probability when the state of a medical condition transitions to some other state, for instance, when state p would be replaced by x.Similar to the approach specified in Eq.\u00a0, the remA(mT) represents the absorbing state in that once the state of medical condition finally transitions to A(mT), it becomes an infeasible state and needs to be brought back to state A(m1). 
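The transition structure described above, in which each row is a probability distribution, no transitions lead back to less severe states, and the worst state is absorbing, can be encoded and checked directly. The sketch below uses the illustrative 1/2, 3/8, 1/8 figures quoted in the text for the "minor" row; the remaining rows are assumed values for demonstration only.

# Sketch: a progressive-deterioration transition matrix with the properties described above.
# The "minor" row uses the illustrative probabilities from the text (1/2, 3/8, 1/8);
# the remaining rows are assumed values for demonstration only.
import numpy as np

states = ["minor", "major", "severe"]
P = np.array([
    [0.500, 0.375, 0.125],   # minor: stays, or worsens to major/severe
    [0.000, 0.700, 0.300],   # major: cannot improve back to minor (assumed split)
    [0.000, 0.000, 1.000],   # severe: absorbing state, p{A(mT)} = 1
])

assert np.allclose(P.sum(axis=1), 1.0)             # every row is a probability distribution
assert np.allclose(P[np.tril_indices(3, -1)], 0.0)  # no transitions to better (earlier) states
assert P[-1, -1] == 1.0                             # worst state is absorbing
print(P)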
For instance, if the patient goes into a \u201csevere\u201d state in certain cardiovascular conditions, the patient might have to be administered a pacemaker. Therefore, the corresponding probability for A(m1) would assume a value of 1.Once the elements of the matrix If Equations\u00a0\u201313) ens ens13) eSince Equation\u00a0 ensures c{A(m1)}, c{A(m2)}, c{A(.)},..C{A(mt), C{A(..)}..C{A(mT) represent the average costs of treatment corresponding to states A(m1), A(m2), A(.),..A(mt), A(..), A(mT), respectively, then the deterministic long-term average cost of treatment per patient can be ascertained as per Eq.\u00a0(If per Eq.\u00a0:15\\documD(1), D(2), D(.), D(k), D(..), and D(K) are the various treatment decisions, then these treatment decisions could be mapped with individual states of the medical condition in binary terms, as shown in Table D \u03f5 . The rationale for this is that depending on the actual state of a medical condition, different treatment decisions can be taken depending on the patient. For instance, for a \u201cminor\u201d cardiovascular condition, a relatively healthy patient\u2019s treatment might be accompanied by non-surgical and lifestyle change-oriented approaches. On the other side, for a \u201cminor\u201d cardiovascular condition for a patient with comorbidities, a moderate surgical procedure with some medicinal interventions might be more appropriate. However, the flip side to a deterministic policy of a treatment decision is that the medical service provider would not have any leeway to consider different treatment decisions corresponding to different states of a medical condition in a commensurate manner. Therefore, a probability distribution should be employed to map a particular treatment decision with a certain medical condition state.If A(mT) and treatment decision D(k), let y be a steady-state unconditional probability, which can be interpreted as the following:A randomized policy matrix with the mapping of states of a medical condition and decision is shown in Table The abovementioned steady-state unconditional probability holds the form of a joint probability containing both the state of a medical condition and the medical decision corresponding to that state.y is closely related to D{k, mt) such that the following mathematical expression formulated in Eq.\u00a019) captuThere exist three sets of constraints for Equations\u00a0 and 21)21) ensur(b) From the results on steady-state probabilities, the following can be formulated:Equation\u00a0 ensures Hence, the long-term expected cost Equation\u00a0 expressey so as toHence, the linear programming model chooses Equation\u00a0 signifieThis is subject to the constraintEquation\u00a0 capturesEquation\u00a0 establisEquation\u00a0 ensures (MT\u2009+\u20092) functional constraints and K(MT\u2009+\u20091) decision variables. Because the above LP model can be solved using commercial solvers, once the The model represented by mathematical Eqs.\u00a0\u201329) rep rep29) rIn order to illustrate our methodology, we consider the data obtained from 115 clinicians (experts) who had been affiliated with a multi-specialty hospital chain in Eastern India as cardiovascular specialists and are now retired. These hospital chains are part of a large private multi-specialty hospital chain headquartered in the capital city of an Eastern Indian state. Being a reputable hospital in Eastern India, patients from both the native state and other states often flock to this hospital due to its affordable health care costs. 
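The randomized-policy formulation above is a linear program over the joint steady-state probabilities y(state, decision). A minimal sketch using scipy.optimize.linprog is given below; the per-decision transition matrices and costs are hypothetical placeholders rather than the hospital's data, and the "severe" row returns the patient to the "minor" state, as described above.

# Sketch: LP for the randomized policy, minimising the long-run expected cost per patient.
# Decision variables y[s, k]: joint steady-state probability of state s and decision k.
# Transition matrices P[k] and costs C are hypothetical placeholders.
import numpy as np
from scipy.optimize import linprog

S, K = 3, 2                                       # number of states and decisions (illustrative)
P = np.array([                                     # P[k][s, j]: next-state distribution under decision k
    [[0.60, 0.30, 0.10], [0.00, 0.70, 0.30], [1.00, 0.00, 0.00]],   # conservative treatment
    [[0.80, 0.15, 0.05], [0.00, 0.85, 0.15], [1.00, 0.00, 0.00]],   # aggressive treatment
])
C = np.array([[500., 1500.], [1500., 4000.], [9000., 13000.]])      # cost of (state, decision)

c = C.reshape(-1)                                  # objective: sum over s, k of C[s, k] * y[s, k]
A_eq, b_eq = [], []
A_eq.append(np.ones(S * K)); b_eq.append(1.0)      # probabilities sum to one
for j in range(S):                                 # steady-state balance for each state j
    row = np.zeros((S, K))
    row[j, :] += 1.0
    for s in range(S):
        for k in range(K):
            row[s, k] -= P[k][s, j]
    A_eq.append(row.reshape(-1)); b_eq.append(0.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None), method="highs")
y = res.x.reshape(S, K)
print("expected long-run cost:", round(res.fun, 2))
print("steady-state joint probabilities:\n", np.round(y, 4))
print("implied policy P(decision | state):\n", np.round(y / y.sum(axis=1, keepdims=True), 4))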
To preserve anonymity, we do not explicitly mention the names of the hospital chain and 115 clinicians. The pertinent data related to the study and experts\u2019 profiles are listed in Table A belief about certainty (or uncertainty) by an expert about some state of a medical condition is often grounded in data or experience gained during professional practice, or both opined that they had typically observed four major states of cardiovascular conditions, which often form the basis for the corresponding level of medical diagnosis and treatment. These levels also form the basis for corresponding administrative procedures, including those of the hospital and insurance companies (in cases wherein the patients were insured). Table Referring to Table Thereafter, the clinicians were asked to fill out a Google form-based survey created to capture the inputs related to cardiovascular conditions during the questionnaire phase. The survey essentially captured three broad dimensions. First, what were the typical frequencies of reported diagnosis of the four states of the medical condition? Second, what were the frequencies of the accuracy of medical treatment corresponding to each of the four states of the diagnosed condition? Third, what were and to what extent did the transitions from a particular state of medical condition impact itself and subsequent degenerative states? Table When we observed the reported frequencies of the four states of a diagnosed cardiovascular condition, we ascertained that out of 115, 13 clinicians reported that frequencies were outliers. In the case of the remaining 102 clinicians\u2019 data, there was a clear pattern in the reported frequencies. This pattern manifested clearly in that frequency of diagnosis of the \u201cminor\u201d state was greater than that of the \u201cmoderate\u201d state, which in turn was greater than that of the \u201cmajor\u201d state. Finally, the \u201csevere\u201d state had the lowest reported frequencies. Therefore, we only considered the inputs of 102 clinicians. In particular, one of the primary variables of interest was the mean probability of a diagnosed state (based on the mean frequency of 102 considered clinicians). Figure\u00a0p-value was greater than 0.05, indicating that the probabilities corresponding to each state of diagnosis were approximately normally distributed. Thereafter, we carried out an analysis of variance (ANOVA) one-factor test at p\u2009<\u20090.05 for the data from the 102 clinicians (leaving aside the outlier data from 13 clinicians), where the following hypothesis represented the null hypothesis:H(0): Reported frequencies of diagnosed states remain statistically the same for the twelve clinicians within a cutoff percentage of 2.However, before assuming that we could obtain and use these mean values for the subsequent modeling stage, it was essential to ascertain whether the mean frequency (probability) of a particular state of diagnosis was statistically the same across the 102 clinicians considered . Before performing this, we examined the normality of sample data corresponding to each state of diagnosis, and we tested for normality of the sample at a 95-percent confidence interval. We first laid out the data graphically and looked at the histogram and Q-Q plots, which indicated a normal distribution. Further, we also performed the Shapiro\u2013Wilk test for the reported probabilities. 
In this test, the p-value and t statistic, we failed to reject the null hypothesis of frequencies at a 95-percent confidence level.Based on the H(0): The reported frequencies of the diagnosed states are such that order remains: frequency of \u201cMinor\u201d state\u2009>\u2009frequency of \u201cModerate\u201d state\u2009>\u2009frequency of \u201cMajor\u201d state\u2009>\u2009frequency of \u201cSevere\u201d state for all 102 clinicians.In order to test the order of the mean probabilities of the reported states (such that the mean probability of diagnosis at a minor state was greater than that at a moderate state and so forth), the following null hypothesis was postulated.t-tests were carried out at a 95-percent confidence level . For the sake of brevity, we report two such comparisons, which are provided in Table In order to test the aforementioned hypothesis, a total of six pair-wise two-tailed p-value and t statistic, it was verified that the mean probability of diagnosis at the minor stage\u2009>\u2009the mean probability of diagnosis at the moderate state\u2009>\u2009the mean probability of diagnosis at the major state\u2009>\u2009the mean probability of diagnosis at the severe state. We also concluded that the frequencies (probabilities) of finding a particular state of the cardiovascular condition remained statistically the same for the 102 clinicians. Therefore, we used the mean medical condition probabilities p{A(mt), T(mt)}s and mean transition probabilities Based on the Notably, all statistical tests performed to validate the postulated hypotheses were parametric in nature. There were inherent assumptions about the population parameters from which the 102 relevant (and workable) samples were drawn. Since we were also interested in the important nuances related to the sample data (such as the order of reported frequencies at each state), we naturally preferred parametric tests over non-parametric tests, even though parametric tests often require normality tests and non-parametric tests do not necessarily require such tests on the sample.Table Referring to Table Table Using the values given in Table p (minor) was determined as follows (transition to own state):For instance, Another instance of transition to a different state\u2014that is, Along similar lines, the remaining elements of the matrix If Solving the set of equations given in Eq.\u00a0 on ILOG EDet(c) was determined to be $4859.84.The treatment costs corresponding to each of the four medical states were $570, $1590, $6500, and $13,500, respectively, as determined by the average costs of the previous three years. Considering these costs and the optimal steady-state probabilities, and using Eq.\u00a0, the clinicians revealed that decisions about a specific medical condition also have certain subjectivities and, therefore, there cannot always be clear one-to-one mapping. The experts reasoned that other dimensions related to patients, such as comorbidities , the extent of physical fitness, and a history of specific medical conditions, also play a major role in warranting one decision over another.ys, were determined. 
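For the deterministic policy, the long-run expected cost is the stationary distribution of the effective transition matrix weighted by the per-state treatment costs. The sketch below uses the reported costs ($570, $1590, $6500, and $13,500); the effective transition matrix is a hypothetical placeholder standing in for the fitted values in the paper's tables.

# Sketch: steady-state probabilities and the deterministic expected cost E_Det(c).
# Costs are the per-state figures quoted above; the effective transition matrix X
# is a hypothetical placeholder for the fitted values reported in the paper's tables.
import numpy as np

X = np.array([
    [0.55, 0.25, 0.15, 0.05],   # minor
    [0.00, 0.60, 0.30, 0.10],   # moderate
    [0.00, 0.00, 0.70, 0.30],   # major
    [1.00, 0.00, 0.00, 0.00],   # severe: returned to "minor" after intervention, as described above
])
costs = np.array([570.0, 1590.0, 6500.0, 13500.0])

# Solve pi = pi X with sum(pi) = 1: replace one balance equation with the normalisation condition.
A = np.vstack([(X.T - np.eye(4))[:-1], np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

print("steady-state probabilities:", np.round(pi, 4))
print("expected long-run cost per patient: $", round(float(pi @ costs), 2))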
The mapping of these variables with the state of medical conditions and corresponding decisions is illustrated in Table ERand(c) can be expressed as follows:Using the information provided in Table It is worth mentioning that, in the above equation, though decision D1 corresponding to the \u201cminor\u201d state does not have a clear treatment/medical cost, as per the description of this decision in Table Following this convention, as formalized by Eq.\u00a0, the folCorresponding to Eqs.\u00a0 and 29)29), therThe developed mathematical model belongs to linear programming and was solved using ILOG CPLEX Optimization Studio. The following results were obtained:Ds were determined:Using equation number 19, the following ERand(c) was determined to be $3031.42.Comparing the results from both the deterministic and randomized policy produced a couple of important points. First, as opposed to any lack of mapping of the pertinent decision policy for the various states of the medical condition, the randomized policy resulted in the steady-state probabilities and the mapping of the pertinent decision policy with respect to the various states of the medical condition. Second, the expected cost of treatment under the randomized policy was superior to that of the deterministic policy, indicating there might be a preference for the randomized medical treatment policy over a long horizon from an economic standpoint. Finally, the randomized policy was slightly more predisposed than the deterministic policy toward dealing with severe states of the medical condition, considering that the steady-state probability in the case of the randomized policy was higher than that of the deterministic policy.In this section, we thematically discuss the important findings and accompanying nuances related to both stage 1 and stage 2 of the study.The data reported by the remaining 102 clinicians were fairly homogenous in that the probability of the cardiovascular condition remaining in any particular state, as reported by 102 of the 115 clinicians, lay within the 2% range. An important reason for this relative homogeneity is that most of these clinicians belonged to the same facility of the hospital chain, while several others belonged to a different facility (in a different city) within the state. Post-study follow-up discussions with clinicians showed that the vast majority (more than 85%) of the patients belonged to the same geographic regions and were predominantly aged 55 or over. Further, a fairly low and uniform population sample size also supports the relative homogeneity of the probabilities of different states remaining in a narrow range.Further, referring to Fig.\u00a0Referring to Table A comparison of the randomized policy and deterministic policy of medical treatment revealed that the expected long-term cost of treatment in the case of the randomized policy was significantly lower than that of the deterministic policy. An important reason for this difference is that in the case of the randomized policy, the optimal steady-state unconditional probability is less skewed toward the \u201csevere\u201d state than a deterministic policy, which is more skewed toward the \u201csevere\u201d state. The cost of diagnosis, treatment, surgeries, and post-treatment in a \u201csevere\u201d state was found to be significantly higher in the case of a \u201csevere\u201d state as opposed to the other states. 
Because the treatment decision in the deterministic policy revolves around the healthcare service provider prescribing an appropriate level of treatment depending on the history of the case, a certain expected cost of the medical treatment would result. On the other hand, the randomized policy revolves around an attempt by the service provider in such a manner that certain acceptable treatment decisions can be mapped to individual states of a medical condition such that the expected treatment costs can be minimized. The adoption of randomized medical treatment policies can be of particular value to large developing countries such as India, which are often characterized by limits on governmental spending, inadequate healthcare infrastructure (relative to developed countries), and variable quality of healthcare is varied from 90 to 100% of its optimal value, the higher side of the expected cost would result when \u03b8(moderate) and \u03b8(major) remains constant, and \u03b8(severe) varies based on variation in \u03b8. When \u03b8(minor) is varied from 101 to 110% of its optimal value, a higher side of the expected cost will result when \u03b8(major) and \u03b8(severe) remain constant, and \u03c0(moderate) varies based on variation in \u03b8(minor). When \u03b8 (minor) is varied from 90 to 100% of its optimal value, the lower side of the expected cost will result when \u03b8(major) and \u03b8(severe) remain constant, and \u03b8(moderate) varies based on variation in \u03b8(minor) . When \u03b8(minor) is varied from 101 to 110% of its optimal value, the lower side of the expected cost will result when \u03b8(major) and \u03b8(major) remain constant, and \u03b8(severe) varies based on variation in \u03c0(minor). Table To understand the impact of the changes in the steady-state probabilities on the total expected treatment cost spread over time, we conducted a sensitivity analysis for the deterministic policy. In particular, we varied the steady-state probability corresponding to the states \u201cminor\u201d, \u201cmoderate\u201d, \u201cmajor\u201d, and \u201csevere\u201d one at a time within\u2009\u00b1\u200910%. Table \u03b8(minor) from 90 to 110% resulted in a decreasing trend in the expected cost of treatment. Referring to Fig.\u00a0\u03b8(severe) from 90 to 110% resulted in an increasing trend in the expected cost of treatment.Figure\u00a0It can also be observed that the expected cost of treatment was most sensitive to variations in the steady-state probability at the \u201cmajor\u201d stage of a medical condition as opposed to the \u201csevere\u201d stage of a medical condition, though the steady-state probability of the \u201csevere\u201d state was less than that of the \u201cmajor\u201d state.From a managerial implications perspective, our study can aid medical service providers such as private and public hospitals, practitioners, and surgeons in several important ways. First, the study enables such entities to approach the cost modeling of treatment costs in a structured and scientific manner while considering real-life data. The robust statistical validation performed at the qualitative stage (stage 1) ensures that samples taken are subject to reasonable testing and validation. Such an approach can be helpful to medical service providers in that large countries are often associated with challenges to data collection in a structured manner. It remains impossible in many cases to collect data at a larger sample level or even as part of the population level. 
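The one-at-a-time ±10% sensitivity analysis described above can be sketched as follows. How the remaining steady-state probabilities are re-allocated when one is perturbed is an assumption here (proportional re-normalisation); the paper specifies its own re-allocation rules, and the probabilities used below are placeholders.

# Sketch: one-at-a-time sensitivity of the expected cost to the steady-state probabilities.
# Proportional re-normalisation of the untouched states is an assumption; the paper
# describes its own re-allocation rules for each case. Probabilities are placeholders.
import numpy as np

pi = np.array([0.31, 0.20, 0.35, 0.14])            # placeholder steady-state probabilities (sum to 1)
costs = np.array([570.0, 1590.0, 6500.0, 13500.0])
states = ["minor", "moderate", "major", "severe"]

def expected_cost(p):
    return float(p @ costs)

for i, name in enumerate(states):
    for factor in (0.90, 1.10):                     # vary one probability by -10% / +10%
        p = pi.copy()
        p[i] = pi[i] * factor
        others = [j for j in range(4) if j != i]
        p[others] *= (1.0 - p[i]) / p[others].sum()  # keep the distribution summing to one
        print(f"{name:>8} x{factor:.2f}: expected cost = {expected_cost(p):8.2f}")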
Thus, our study shows a possible way in which data pertaining to disease prevalence at various stages of severity, accompanying transitions with accompanying frequencies, and true/false treatment can be collected and handled, leading to meaningful conclusions. Second, the analytical part of the study (stage 2) enables parametrization of specific medical decisions to be made under both the randomized and generalized policy. This parametrization can aid medical service providers in developing some guiding decision support systems such that, by including the contextual factors such as the medical history of a patient in line with treatment cost rationalization, optimal medical treatment decisions can be reached. In particular, such an approach can be extremely beneficial in those cases wherein the doctors\u2019/clinicians\u2019 discretion, depending on different patients, is lower.Our study augments the extant research in several important ways at a policy level. First, in the context of low- and middle-income countries (LMICs) and from a resource planning perspective, there has been increasing interest in understanding the cost of healthcare programs that can deliver the desired healthcare services to patients (Clarke-Deeler et al., In this two-stage research, we developed mathematical frameworks to determine the expected cost of treatment per patient in the long-term, considering the integration of various interrelated nuances such as the transition of the medical condition, accuracy of medical treatment, and medical decisions taken at various severity levels of the medical conditions. Further, LP-based and exact method-oriented modeling approaches were deployed to ascertain the steady-state probabilities corresponding to the respective severity levels of the medical condition under both the deterministic and randomized policy. However, before delving into the modeling aspect of the study, thereby taking into context the prescriptive setting and at the qualitative stage of the study, we also focused on the data collection and validation of various cogent hypotheses, thus providing input to the modeling stage of research. To this end and to ensure a strong empirical underpinning to the research, we relied on the data collected from 115 different cardiovascular medical professionals to understand the nuances related to disease transition, the accuracy of medical treatment, and treatment decisions about individual disease severity levels. In particular, we relied on semi-structured interviews and thematic analysis to understand the characteristics related to the empirical setting. Based on a few key hypotheses developed and their subsequent validation, we utilized the empirical data as an input to the modeling stage of the study. Thematically, there were four broad severity levels of the cardiovascular condition identified: \u201cminor,\u201d \u201cmoderate,\u201d \u201cmajor,\u201d and \u201csevere.\u201dAt both the qualitative and prescriptive modeling stage of the study, several interesting insights emerged based on the case example of a history of cardiovascular treatment at the service facilities of a well-known multi-specialty hospital chain in Eastern India. For instance, at the qualitative stage of the research, it was determined that treatment accuracy was better in more severe states of cardiovascular conditions and inferior in relatively less severe states of the condition. 
Counter-intuitively, it was also determined that the probabilistic value of the transition from \u201cmoderate\u201d to \u201csevere\u201d was higher compared with the transition from \u201cmajor\u201d to \u201csevere\u201d. At the prescriptive modeling stage, though one of our primary contributions relates to developing the novel mathematical framework, with subsequent optimization runs, we illustrated that the randomized policy seems to be cost-competitive compared with the deterministic policy. Further, using a sensitivity analysis, we showcased the impact of the varying steady-state probabilities of the respective states of a medical condition on the expected cost of treatment. Finally, there are several ways our study can be aligned with favorable policy implications, one of which is possibly considering a randomized treatment policy in LMIC countries to treat pervasive but less life-threatening conditions.Like any study, ours is also not devoid of limitations. First, our research only considered direct costs when modeling the expected cost of treatment. Other indirect costs, such as wages, administrative costs, and so on, were not considered. This implication is particularly important for countries like the US, wherein administrative costs typically constitute a significant proportion of overall healthcare expenditure. Second, in demonstrating our modeling framework, the sample data used was rather limited and primarily belonged to the same geographical setting and similar age group. Therefore, to further generalize the study's findings, it is imperative that the framework developed to be tested in a larger setting with a more heterogeneous population. Third, at the qualitative stage of our study, we relied on respondents\u2019 inputs to ascertain the transitions pertinent to the medical condition and the accuracy of the treatment. An implicit assumption here is that the data did not significantly suffer from the clinicians\u2019 biases in that the IDEA framework works effectively in such cases. Fourth, another future research direction would be to explore the application of more advanced supervised learning methods such as deep learning and structure analysis to improve the performance of cost prediction methods. Such forecasts can be conducted with respect to certain extant data over a sufficiently long period. Specifically, adding the features of medical treatment and benefiting from their predictive and explanatory power can be an important step in such approaches."} +{"text": "Escherichia coli transporter knockout strains exposed to sub-inhibitory concentrations of 18 diverse antimicrobials. We found numerous knockout strains that showed more resistant or sensitive phenotypes to specific antimicrobials, suggestive of transport pathways. We highlight several specific drug-transporter interactions that we identified and provide the full dataset, which will be a useful resource in further research on antimicrobial transport pathways. Overall, we determined that transporters are involved in modulating the efficacy of almost all the antimicrobial compounds tested and can, thus, play a major role in the development of antimicrobial resistance.Antibiotic resistance is a major global healthcare issue. Antibiotic compounds cross the bacterial cell membrane via membrane transporters, and a major mechanism of antibiotic resistance is through modification of the membrane transporters to increase the efflux or reduce the influx of antibiotics. 
Targeting these transporters is a potential avenue to combat antibiotic resistance. In this study, we used an automated screening pipeline to evaluate the growth of a library of 447 Antibiotic resistance is a major global healthcare burden, with over 1 million deaths attributable to antibiotic resistance in 2019, and with the World Health Organization listing antimicrobial resistance as one of the top 10 global public health threats facing humanity , albeit In order to exert their effects, most antibiotics must first cross the bacterial cell membrane. There is now strong evidence to suggest that in order to enter cells, nearly all compounds must pass through membrane transporters and that \u201cpassive bilayer diffusion is negligible\u201d ,7,8. TheWhile modelling suggests that mutations that cause antibiotic resistance will predominantly be in exporters rather than importers , there aEscherichia coli K-12 chromosome, an estimated 598 encode established or predicted membrane transporters [Of the ~4401 genes in the sporters . Despitesporters . We hyposporters . In thisE. coli transporters, we investigated the previously developed library of 447 E. coli transporter knockouts [In the present study, which forms part of an ongoing project to deorphanize all orphan nockouts ,20 for gWe selected a range of compounds that were available and readily soluble in our LB media. We endeavored to include compounds from several major antibiotic classes as well as compounds with activity that have previously not been tested extensively against Gram-negative bacteria (ornidazole and paraquat). Cefiderocol was very recently approved, so was included due to novelty. E. coli growth in a two-fold serial dilution of the relevant antibiotic in LB. We defined MIC values as those with OD levels at 48 h less than 10% of the antibiotic-free media condition. For screening of the transporter library, we selected concentrations that showed some inhibition of growth in the WT strain without causing full inhibition. Generally, this was a concentration half that of the MIC; however, when effects on growth could be clearly seen at concentrations significantly lower than that, then these concentrations were used. The 18 antimicrobial compounds included in this study, as well as the MIC and screening concentrations, are listed in The workflow for the screening of each compound initiall\u22121. The WT growth rate was slightly higher than the mean growth rate, possibly due to the burden of expression of the constitutively active kanamycin cassette present in all knockout strains [Generally, the growth of nearly all transporter knockout strains in LB without supplementation was similar. The empirical area under the curve (AUC) had a mean of 30.0 and an interquartile range (IQR) of 1.90, while the maximum growth rate had a mean of 1.19 and an IQR of 0.14 h strains ,22. HistCompared to growth in LB, we saw a much larger degree of variation in growth in the presence of the antimicrobial compounds, consistent with multiple transporter knockouts influencing the level of intracellular antibiotic accumulation. We observed that many transporter knockouts had effects on the sensitivity to the compounds tested. Due to the large volume of data generated, detailed descriptions of all relevant results are beyond the scope of this paper. As we intend to perform targeted follow-up validation experiments, given that this is an initial exploratory study, we are hesitant to define specific criteria for sensitivity or resistance. 
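The MIC rule described above, namely the lowest concentration in the two-fold dilution series whose 48 h OD falls below 10% of the antibiotic-free control, can be expressed as a short calculation. The concentrations and OD readings below are synthetic placeholders.

# Sketch: calling the MIC from a two-fold dilution series using the rule described above
# (lowest concentration whose 48 h OD is below 10% of the antibiotic-free control).
# The concentrations and OD values are synthetic placeholders.
import numpy as np

concs = np.array([0, 0.5, 1, 2, 4, 8, 16, 32])       # mg/L, two-fold series plus a drug-free control
od_48h = np.array([1.32, 1.30, 1.25, 1.10, 0.61, 0.08, 0.05, 0.04])

control = od_48h[concs == 0][0]
inhibited = (od_48h < 0.10 * control) & (concs > 0)
mic = concs[inhibited].min() if inhibited.any() else None
print(f"MIC = {mic} mg/L")                            # -> 8 mg/L for these placeholder data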
Indeed, one advantage of our study over previous high-throughput growth assays is the generation of full growth curves rather than the reduction in growth of a single metric, as we elaborate on in the discussion.Selected specific results are discussed below. Generally, when searching for novel results, we looked at strains ranked highest or lowest in normalized growth rate or AUC and then manually inspected the growth curves. We selected those highlighted in this paper based on interactions with y-genes or where we felt there were plausible and interesting mechanisms for discussion. macB) did not result in azithromycin sensitivity, and we also could not see evidence of increased sensitivity to azithromycin in a previously published dataset [Our study replicated some previously established drug\u2013transporter interactions. First, we observed that knockout of acrB, a promiscuous drug-efflux protein well-known to play a role in export of many xenobiotics , caused dataset .E. coli BW1556 of 800 mg/L is a widely used herbicide, which exerts toxic effects, after conversion to a superoxide radical, once inside the cell . We foun 32 mg/L A. We scrts \u2206cusA B and \u2206fipermease C. The mop values for the correlation and found a peak near zero . These are shown in sapB, a putrescine exporter knockout, have a high correlation with the knockout of orphan transporter knockout \u2206ydjE PCR to confirm the strain is correctly labelled, and (2) MIC determination to validate the observed result . Work toA limitation to the approach described in this study for transporter pathway identification is that growth is used as a proxy for transport with no direct measurement. While the simplest mechanism by which a transporter knockout changes growth in the presence of an antimicrobial is through the direct alteration of transport, there are other possible mechanisms. Gene knockouts are well-known to cause pleiotropic effects, with knockouts of a single gene often causing altered expression in many other genes that may affect transporter sensitivity . FurtherE. coli knockouts under different conditions (including numerous antibiotics) on solid media [There are studies that have investigated the growth of the full Keio collection of id media ,38,39. Wid media ,38, fittid media . There is a large degree of homology found between transporters of different pathogenic bacterial families. Given this, as well as the recent advances in prediction of protein structure from sequence ,42, we eAntibiotics were sourced from Sigma, apart from rifampicin, ceftriaxone and azithromycin, which were purchased from Tokyo Chemical Industries .Plates were prepared by first inoculating deep-well plates with 1 mL Lucia Broth (LB) from glycerol stocks of the transporter library. The inoculated LB was grown overnight under agitation at 37 \u00b0C. Following overnight growth, the cultures were mixed with 1 mL 50% glycerol. CR1496c polystyrene plates were prepared for growth assays, by dispensing a 3 \u00b5L droplet of the culture and glycerol mixture into the bottom of the well. These loaded plates were then stored at \u221220 \u00b0C for up to 3 weeks or \u221280 \u00b0C for longer storage. E. coli), with pictures taken every 20 min.The growth assays were initiated by adding 297 \u00b5L LB, containing the appropriate dose of antibiotics to the pre-inoculated plates, and sealing with CR1396b Sandwich covers . 
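The two growth summaries quoted above, the empirical area under the curve and the maximum growth rate, can be computed directly from an OD600 time series. The sketch below uses a synthetic logistic curve whose parameters are chosen so that the resulting metrics land near the reported means (AUC near 30, maximum rate near 1.19 per hour); the study itself used the Growthcurver R package, so this is only an illustration of the metrics, not the authors' pipeline.

# Sketch: empirical AUC and maximum specific growth rate from an OD600 time series.
# Synthetic logistic data; parameters chosen so the metrics fall near the reported means.
import numpy as np

t = np.linspace(0, 48, 145)                        # hours, one reading every 20 minutes
K, N0, r = 0.7, 0.01, 1.2                          # carrying capacity, inoculum, intrinsic rate
od = K / (1 + ((K - N0) / N0) * np.exp(-r * t))    # logistic growth curve

auc = np.trapz(od, t)                              # empirical area under the curve
mu = np.gradient(np.log(od), t)                    # specific growth rate d(ln OD)/dt
print(f"empirical AUC = {auc:.1f}, maximum growth rate = {mu.max():.2f} per hour")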
Growth was assessed using the Growth Profiler 960 , which uses camera-based measurements to estimate growth rates simultaneously in up to 10 96-well plates. As our library encompassed 5 plates, this allowed us to run the full library in duplicate. The Growth Profiler 960 was set to 37 \u00b0C with 225 rpm shaking (recommended settings for Inoculation and media loading were performed using an Opentrons OT2 robot fitted with a 20 \u00b5L multichannel pipette and a 300 \u00b5L multichannel pipette. Scripts used in operation can be found at github.com/ljm176/TransporterScreening (11 August 2022). Growth plates were sterilized between uses by washing and UV in accordance with the instructions of the manufacturer.G-values were obtained from the plate images using the manufacturer\u2019s software. G-values were converted to OD600 values using the formula:a = 0.0158 and b = 0.9854, which were found by measurement of a standard curve in accordance with the instructions of the manufacturer.With the predetermined values tN is total cell population, K is the carrying capacity, initial cell population is N0 and r is the intrinsic growth rate. For strains that showed growth in only a single replicate, the replicate without growth was filtered from analysis, as this was determined to be the result of missed inoculation during automated loading. Data analysis and generation of figures was performed in R (version 4.1.2). Growth rates were determined by using the R package Growthcurver . Growthcgithub.com/ljm176/TransporterScreening (11 August 2022). Figures were generated using ggplot2 for R. The R script used in data analysis and figure generation is available at"} +{"text": "This article presents a novel optimization algorithm for large array thinning. The algorithm is based on Discrete Particle Swarm Optimization (DPSO) integrated with some different search strategies. It utilizes a global learning strategy to improve the diversity of populations at the early stage of optimization. A dispersive solution set and the gravitational search algorithm are used during particle velocity updating. Then, a local search strategy is enabled in the later stage of optimization. The particle position is adaptively adjusted by the mutation probability, and its motion state is monitored by two observation parameters. The peak side-lobe level (PSLL) performance, effectiveness and robustness of the improved PSO algorithm are verified by several representative examples. Thinned arrays have been the focus of research in recent years for their lower cost, lower energy consumption and lighter weight compared with the conventional uniform arrays. The main purpose of array thinning is to obtain a lower peak side-lobe level (PSLL) on the condition that the antenna array satisfies gain demand. Planar array thinning can be achieved by adjusting the \u201cON\u201d or \u201cOFF\u201d states of each element in a uniform array.To suppress the PSLL, several optimization methods have been proposed. As suggested by Liu in , the thiBenefitting from excellent global search performance, some intelligent optimization algorithms, such as the real genetic algorithm (RGA) and asymFirst proposed by Eberhart and Kennedy in 1995 , particlIn general, in order to avoid the rapid loss of particle distribution in solution space, the existing PSO methods give a variety of evaluation strategies for the diversity of particle population distribution. 
However, these methods mainly focus on improving the algorithm efficiency and ignore the balance between global search and local search, resulting in the lack of ability to adjust the search focus dynamically in different search stages.In this paper, a new novel optimization algorithm for large array thinning is proposed. The innovative part of this algorithm is the combined usage of different particle learning strategies with discrete particle swarm optimization, which enhances the ability of global search and effectiveness of the algorithm.The rest of this paper is organized as follows: The planar array structure and optimization problem model are briefly outlined in d along M columns and N rows, as shown in Assume a large planar array with elements arranged in square grids with a spacing of A and B are set as:mna and mnb are the excitation of element and the \u201cON\u201d or \u201cOFF\u201d state of the element , respectively. So, mnb is 1 or 0. In \u03b8 and \u03c6 are the elevation and the azimuth angles in the spherical coordinate, respectively. u and v are direction cosines defined by u = sin \u03b8 cos \u03c6 and v = sin \u03b8 sin \u03c6. If is the desired main beam direction, the radiation beam pattern F can be expressed as:Matrices S denotes the angular region excluding the main beam. Considering the constraint of the array filling rate, the sum of all elements in matrix B should be a definite constant. If the aperture remains the same, the four corner elements of the planar array must be \u201cON\u201d. The model of optimization can be represented as:K denotes the number of elements that turned \u201cON\u201d.The fitness function considered in the present study is the PSLL of the radiation beam pattern, which is desired to be as low as possible. The PSLL of the planar array can be computed as:This section may be divided by subheadings. It should provide a concise and precise description of the experimental results, their interpretation, as well as the experimental conclusions that can be drawn.The fundamental PSO algorithm assumes a swarm of particles in the solution space, and the positions of these particles indicate possible solutions to the variables defined for a specific optimization problem. The particles move in directions based on update equations impacted by their own local best positions and the global best position of the entire swarm.D-dimensional solution space, where the position xi and velocity vi of ith particle can be expressed as:Assume a swarm composed of NP particles is uniformly dispersed in a i = 1, 2, \u2026, NP, j = 1, 2, \u2026, D. Obviously, NP represents the number of particles, and D represents the number of dimensions. c1 and c2 are the acceleration constants, and r1 and r2 are two random numbers within the range . ijv(t) and ijx(t) are the velocity and position along the thj dimension for the thi particle at tht iteration, respectively. ijpBest(t) is the best position along the thj dimension for the thi particle at tht iteration, also called \u201cpersonal best\u2019\u2019. 
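The beam pattern and PSLL fitness defined above can be evaluated numerically for a candidate ON/OFF matrix B. The sketch below assumes uniform element excitation, a broadside main beam, and an illustrative main-beam exclusion radius; none of these settings are taken from the paper.

# Sketch: beam pattern |F(u, v)| and PSLL for a thinned planar array with uniform excitation.
# Grid size, element spacing and the main-beam exclusion radius are illustrative choices.
import numpy as np

M = N = 20
d = 0.5                                            # element spacing in wavelengths
rng = np.random.default_rng(0)
B = (rng.random((M, N)) < 0.5).astype(float)       # ON/OFF matrix, filling rate near 50%
B[[0, 0, -1, -1], [0, -1, 0, -1]] = 1.0            # the four corner elements stay ON

u = np.linspace(-1.0, 1.0, 401)
v = np.linspace(-1.0, 1.0, 401)
Eu = np.exp(2j * np.pi * d * np.outer(np.arange(M), u))   # row steering terms exp(j*2*pi*d*m*u)
Ev = np.exp(2j * np.pi * d * np.outer(np.arange(N), v))   # column steering terms exp(j*2*pi*d*n*v)
F = np.abs(Eu.T @ B @ Ev)                          # |F(u, v)| with unit excitations

F_db = 20 * np.log10(F / F.max() + 1e-12)
U, V = np.meshgrid(u, v, indexing="ij")
visible = U**2 + V**2 <= 1.0                       # visible region of (u, v) space
main_beam = np.hypot(U, V) < 0.1                   # assumed exclusion zone around the broadside beam
print(f"PSLL = {F_db[visible & ~main_beam].max():.2f} dB")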
Finally, \u201cglobal best\u2019\u2019 igBest(t) is the best position found by the swarm along the thj dimension at tht iteration.The velocity update equation is given below:thj dimension for the thi particle is given bys(ijv) denotes a function that maps the particle velocity to the probability of the particle position, and r represents a random number within the range .The value of a position along each dimension for each particle is limited to \u201c0\u201d and \u201c1\u201d in discrete algorithms. The position update equation along the The improved PSO algorithm proposed in this paper consists of three main strategies, which are the global learning strategy based on niche technique, dispersed solution sets, the local mutation search strategy and the motion state monitoring strategy. The execution of the corresponding strategies is adaptively adjusted in different search stages of the optimization.It is necessary to maintain as high a swarm diversity as possible in the early stage of optimization to avoid premature convergence. So, two strategies are utilized in the early stage, which are the niche technique and the gravitational search algorithm.G is proposed in this paper, as shown in G consist of two kinds. One is the optimal solutions of all particles, the other is the eliminated optimal solutions of some particles with excellent fitness function values. Preserving the possibility of interaction between each particle and its neighbors, particles can also learn from PG directly. This structure will substitute for the role of the global best position gBest in (8).The niche technique proposed in can formG may be uniformly distributed in the whole solution space, particles under the guidance of elements in PG have a considerable number of potential motion directions, which improves the diversity of the population, and avoids premature convergence.Because the positions of elements in PipBest in (8) to enhance the correlation between particles.The gravitational search algorithm (GSA) mentioned in is a novtht iteration, the gravitational attraction of particle q on particle i in the thk dimension can be defined as:qM(t) represents the inertial mass of the particle applying force, iM(t) represents the inertial mass of the particle subjected to the force, iqR = ||xi(t) \u2212 xq(t)||, iqR denotes the Euclidean distance between particle i and particle q, and \u03b5 is a small constant to make sure the denominator is not zero. 
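The basic discrete PSO update just described, an inertia-weighted velocity update followed by the sigmoid mapping of velocity to the probability that a position bit equals 1, can be sketched as follows; the inertia weight, acceleration constants, and velocity bound are typical defaults rather than the paper's settings.

# Sketch: one iteration of the basic discrete (binary) PSO update described above.
# Inertia weight, acceleration constants and the velocity bound are typical defaults.
import numpy as np

rng = np.random.default_rng(0)
NP, D = 30, 400                                    # swarm size and dimensions (e.g. a 20 x 20 array)
w, c1, c2 = 0.7, 2.0, 2.0                          # inertia weight and acceleration constants

x = rng.integers(0, 2, (NP, D)).astype(float)      # binary positions
v = rng.uniform(-1, 1, (NP, D))                    # velocities
pbest = x.copy()                                   # personal best positions
gbest = x[0].copy()                                # global best position (placeholder)

r1, r2 = rng.random((NP, D)), rng.random((NP, D))
v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
v = np.clip(v, -4, 4)                              # common velocity bound for binary PSO

s = 1.0 / (1.0 + np.exp(-v))                       # map velocity to a probability
x = (rng.random((NP, D)) < s).astype(float)        # each bit is 1 with probability s(v), else 0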
G(t) is the gravitational constant whose value changes dynamically, which can be expressed as:\u03b1 represents the attenuation rate, T is the max iteration times, and G0 is the initial value.At the tht iteration, the resultant force thi particle in the thk dimension, so:qrand is a random number within the range .At the According to Newton\u2019s second law, the acceleration produced by this resultant force can be expressed as:i(t) represents the fitness function value of particle i at tht iteration, and gWorst(t) is the worst position found by the swarm at tht iteration.The inertial mass of each particle is calculated from its fitness function value and the updating equation of inertial mass is given below:d = 1, 2, \u2026, D, D represents the number of dimensions, C denotes a set of q dimensions randomly selected from all D dimensions, idpGood(t) is the best position of l candidate solution randomly selected from PG and neighborhood particles along the thd dimension for the thi particle at tht iteration, ida(t) is the acceleration along the thd dimension for the thi particle at tht iteration.Therefore, after applying the global learning strategy, the updating equation of particle velocity can be rewritten as:An algorithm should have strong local search ability to improve the convergence efficiency in the later stage of optimization, especially when it is applied to the optimal design of large-scale array. The flowchart of the local search strategy is shown in i can be regarded as moving from a certain position xi to another position \u2019xi in the solution space when xi along some dimensions are mutated. Therefore, we propose a local search method that the neighborhood of the optimal position of a particle i is searched by only changing partial elements of its position xi under the guidance of a Gaussian random variable. The position moving the equation along the thj dimension for the thi particle is given by:ijX represents a Gaussian random variable with a mean value of 0 and standard deviation of \u03c3, and ir is a random number within the range .It can be considered that a particle xi if it has a better fitness function value, and the standard deviation \u03c3 should be amplified to increase the moving distance of particles. Otherwise, keep xi unchanged and reduce \u03c3. The updating equation of standard deviation \u03c3 can be expressed as:k represents the number of variations, \u03c1 is the expansion parameter and \u03c1 > 1, \u03bc is the contraction parameter and 0 < \u03bc < 1. If the size of the standard deviation \u03c3 is less than a preset threshold e\u03c3, that means there is no better solution for particle i in the adjacent position. So, the update should be stopped.The mutated position In order to avoid some particles falling into local optimum prematurely and to improve the performance of the optimization algorithm, a condition monitoring strategy is proposed to adjust the position of the particle according to the moving state of the particle.i to trigger a mutation operation. The first one is that the optimal solution of particle i has not been updated for u iterations, which can be expressed as:i represents the number of iterations of the optimal solution that have not been updated. The second precondition is that the moving distance of the position xi is less than the set value \u03b5 for successive h iterations, which can be expressed as:i represents the number of iterations in which the moving distance of the position xi is less than the set value \u03b5. 
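The gravitational term that enters the rewritten velocity update can be sketched as below. G0, α, and the fitness values are placeholders, lower fitness is treated as better (PSLL minimisation), and the random weighting of pairwise forces follows the usual GSA convention; this is an illustration of the mechanism, not the paper's exact implementation.

# Sketch: the gravitational-search acceleration used in the rewritten velocity update.
# G0, alpha and the fitness values are placeholders; lower fitness is better here.
import numpy as np

rng = np.random.default_rng(0)
NP, D = 30, 400
x = rng.integers(0, 2, (NP, D)).astype(float)
fitness = rng.uniform(-20, -10, NP)                 # e.g. PSLL values in dB (lower is better)

def gsa_acceleration(x, fitness, t, T, G0=100.0, alpha=20.0, eps=1e-9):
    G = G0 * np.exp(-alpha * t / T)                 # time-decaying gravitational "constant"
    best, worst = fitness.min(), fitness.max()
    m = (fitness - worst) / (best - worst + eps)    # relative masses from fitness
    Mass = m / (m.sum() + eps)                      # normalised inertial masses
    acc = np.zeros_like(x)
    for i in range(len(x)):
        for q in range(len(x)):
            if q == i:
                continue
            R = np.linalg.norm(x[i] - x[q])         # Euclidean distance between particles
            force = G * Mass[q] * Mass[i] / (R + eps) * (x[q] - x[i])
            acc[i] += rng.random() * force / (Mass[i] + eps)   # randomly weighted pairwise forces
    return acc

a = gsa_acceleration(x, fitness, t=10, T=500)
print(a.shape)   # one acceleration vector per particle, added as an extra term in the velocity update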
The moving distance is defined as the number of differing elements across all dimensions before and after the particle position changes. It can be assumed that particle i has fallen into a local optimum when both preconditions are met; an individual variation probability \u03b3_a is then used to vary all dimensions of the position x_i. The equation of variation (21) is defined for d = 1, 2, \u2026, D, where D represents the number of dimensions and r_d is a random number within the range [0, 1].

The new Algorithm 1 exploits the hybrid search strategies (HSS) described above to improve the performance of a fundamental DPSO; therefore, it is called DPSO-HSS. The detailed steps of the improved algorithm are summarized as follows.

Algorithm 1: DPSO with hybrid search strategies
1. Generate the initial particle swarm satisfying the conditions. Initialize pBest_i, gWorst, and gBest. Initialize the observation parameters Tpb_i and Txb_i. Initialize the solution set P_G.
2. Calculate the fitness function value of each particle and update pBest_i, gWorst, gBest, and Tpb_i. Replenish the set P_G with the good solutions that have been eliminated.
3. Update the velocity and position of the particle according to (16), (9), and (10). Determine whether the number of iterations t is larger than t_L. If so, go to Step 4. Otherwise, go to Step 5.
4. Initiate the local search strategy.
5. Determine whether Tpb_i > u and Txb_i > h are both valid. If so, update the position of the particle according to (21). Otherwise, go to Step 6.
6. Constrain the particle position according to the constraint condition and update the parameter Txb_i.
7. Do boundary treatment for the particle velocity.
8. Output gBest. Determine whether the termination conditions are met. If so, end the optimization. Otherwise, set t = t + 1 and return to Step 2.

In this section, several examples are presented to compare the performance, effectiveness, and robustness of the DPSO-HSS algorithm and some contrast algorithms. The algorithms tested include the proposed DPSO-HSS algorithm, the RDPSO algorithm, and the NPSOWM algorithm. The five typical functions used to compare the performance of each algorithm are shown in the corresponding table. The three statistical characteristics used as evaluation criteria are: the mean value of multiple simulation results; the variance of multiple simulation results; and the minimum value of multiple simulations. The population size NP, the iteration times T, the dimension D, the learning factors c1 and c2, and the inertia weight w of the three algorithms are the same. The simulation results are shown in the corresponding table.

Among the five test functions, the proposed algorithm outperforms the other two algorithms in the mean and variance of Ackley, Rastrigin, and Sphere. In Rosenbrock\u2019s test, DPSO-HSS has a better variance. In Griewank\u2019s test, the results of DPSO-HSS and RDPSO are close to the same. Considering the test results of the above five functions comprehensively, the proposed DPSO-HSS algorithm performs well in both mean and variance compared with the RDPSO and NPSOWM algorithms. A low mean indicates the excellent global search ability and convergence effect of the algorithm, whereas a low variance indicates the stability of the algorithm results over multiple runs. The time complexity of each test function is O(D), where D is the number of dimensions.
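The stagnation test and the full-dimension mutation of Step 5 can be sketched as follows. The two preconditions (Tpb_i > u and Txb_i > h) and the per-dimension variation probability \u03b3_a follow the text; the threshold values, the counter reset, and the interpretation of the variation in (21) as redrawing a 0/1 value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def hamming_distance(x_old, x_new):
    """Moving distance: number of differing elements over all dimensions."""
    return int(np.sum(x_old != x_new))

def monitor_and_mutate(x, Tpb, Txb, u=10, h=5, gamma_a=0.3):
    """If pBest has stagnated for more than u iterations and the particle has barely
    moved for more than h iterations, mutate each dimension with probability gamma_a."""
    if Tpb > u and Txb > h:                       # both preconditions met
        r = rng.random(x.size)
        x = np.where(r < gamma_a, rng.integers(0, 2, x.size), x)
        Tpb, Txb = 0, 0                           # reset the observation counters (assumption)
    return x, Tpb, Txb

# toy usage
x = rng.integers(0, 2, 15)
x, Tpb, Txb = monitor_and_mutate(x, Tpb=12, Txb=6)
```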
By substituting this result into the optimization of the DPSO-HSS algorithm, the time complexity of DPSO-HSS can be calculated as a function of D, p, q, and t, where D represents the number of dimensions, p is the swarm population, q denotes the number of dimensions randomly selected from all D dimensions in (16), and t is the number of iterations. It can be seen that, compared with the general PSO algorithm, the improved algorithm has a higher time complexity.

Consider a planar array consisting of 20 \u00d7 20 elements with equal element spacing. The filling rate is 50%, so there are 200 elements in the \u201cON\u201d state. The population size NP = 100, the iteration times T = 500, and the dimensions D = 400 are the same for all algorithms used. The convergence curve of the PSLL using the four algorithms for array thinning is shown in the corresponding figure. The element distribution and beam pattern diagram of the array optimized by the DPSO-HSS algorithm are shown in the corresponding figures; the main beam points to the prescribed (u, v) direction with a normalized amplitude of 0 dB, and the PSLL is \u221218.32 dB. These results mean that the array thinning design is correct.

We can see the PSLL performance in detail in the u-cut of the beam pattern diagram of the thinned array optimized by the four kinds of algorithms. The main lobe width of the beam pattern of each algorithm is identical. The PSLL of the DPSO-HSS algorithm is \u221218.32 dB, whereas those of RDPSO, NPSOWM, and MOPSO-CO are \u221217.15 dB, \u221217.42 dB, and \u221217.66 dB, respectively. Compared with the other three algorithms, the PSLL of DPSO-HSS is decreased by 6.82%, 5.17%, and 3.74%, respectively. The v-cut of the radiation beam pattern diagram is shown in the corresponding figure. In the above example, we set the main beam to the normal direction; a cited reference addressed the scanned-beam case. For three different (\u03b8, \u03c6) scan directions, the PSLLs are \u221218.32 dB, \u221218.30 dB, and \u221218.31 dB, respectively. The difference is less than 0.1 dB, which is consistent with the conclusion in that reference.

The array thinning effect may be affected by both the array aperture and the array filling rate. The results of the PSLL under different filling rates show that the filling rate does have a certain influence on the optimization effect of the thinned array, and the optimization effect is reduced if the number of open elements is too large or too small. As shown in the corresponding results, the PSLL is improved when the filling rate is held constant at 60% and the aperture is changed from 5\u03bb to 10\u03bb, which indicates that the aperture also has a certain influence on the performance of array thinning.

A novel optimization algorithm for large array thinning based on DPSO with hybrid search strategies is proposed to improve the performance of large planar array thinning. The proposed algorithm, named DPSO-HSS, utilizes a global learning strategy to improve the diversity of the population at the early stage of optimization. A dispersed solution set and the gravitational search algorithm are used during the particle velocity update. Then, a local mutation strategy is employed in the later stage of optimization so that the local convergence is enhanced by continuous search around the best position of the particle. Several representative examples of large planar array thinning are provided to demonstrate the effectiveness and robustness of the DPSO-HSS algorithm. The brief comparison results are shown in the corresponding table."} +{"text": "We find a diminishing knowledge gap: people with low previous knowledge catch up on the better informed, but overall knowledge remained low and learning was limited.
This suggests a ceilingeffect: possibly journalistic media did not provide enough new information forthe well-informed. Closing knowledge gaps may also be explained by the mediasystem with public television and regional newspapers reaching broad segments ofthe population. Higher knowledge was predicted less by media use than byeducation, concern, and being male.A basic understanding of climate politics is necessary for citizens to assesstheir government\u2019s policies. Media use is supposed to enable learning, whilewidening knowledge gaps. We analyze whether such a gap opened up in times ofintense media coverage during the 2015 climate conference in Paris and explainlearning through hierarchical regression analyses, drawing on a 3-month panelsurvey ( Thus, the conditions were ideally suited for exploringlearning from media content: media attention was guaranteed and highly relevantpolitical decisions on climate politics were taken\u2014yet, we were intrigued to find onlymodest learning and a diminishing rather than widening knowledge gap.We apply this framework to the case of learning about To explain why some people learn more from media use than others, one of the mostcommon reference points is the knowledge gap hypothesis . In its sociodemographic factors, thatis, formal education. In many studies, formal education is used asa proxy for previous knowledge, since it is easy to ascertain in a survey. Indeed,many empirical studies have shown a relevant positive correlation between formaleducation and political and scientific knowledge levels . This event was chosen because the UNclimate conferences are main triggers of media coverage on climate change , andserWhile most previous studies focus on the US context, our survey was conducted inGermany. Here, climate change is seen as an important and barely controversial topic, and medThe panel survey took part 2\u2009weeks before, during, and 4\u2009weeks after the UNclimate conference 2015 (COP 21). For the following analyses, we only usedata from the first and the last wave (conducted in November 2015 andJanuary 2016) to avoid conclusions based on short-term changes during theconference period, since we are interested in learning as a long-termeffect.n\u2009=\u20091121 participants (wave 1:n\u2009=\u20092098).The external panel provider respondi (certified according to ISO 26362)recruited the respondents via an online access panel with 100,000respondents in Germany. First, participants from the panel were randomlyinvited to take part in the survey. Second, a quota regarding age andgender, federal state, and formal education was applied to the sample forthe first survey wave to represent the distribution of these variableswithin the German population aged 18\u201369. The final sample after the thirdwave comprised politics, butconcentrated on knowledge of causes, consequences, and individualcounter-measures of climate change cover the three maindimensions of climate politics as defined by 2 emissions ofstates). Apart from two items (concerning the Kyoto Protocol and emissionstrading), which were modified from a study on political knowledge and knowledge that is more closely related to the specific summit (e.g.asking for the key objective of COP 21). 
The seven items are multiple-choicequestions with four alternative answers plus the option to respond with\u201cdon\u2019t know.\u201d For the analyses presented in this article, correct answerswere coded as 1, while incorrect and \u201cdon\u2019t know\u201d answers were coded as 0.The learning effect is operationalized as the difference between the firstwave and the last wave of the survey, with knowledge in wave 3 representingthe dependent variable of the regression.The questions vary in their level of difficulty, as shown by a qualitativepre-test with graduate students. In addition, the questions were validatedby an independent expert from the Climate Service Center Germany of theHelmholtz-Zentrum Geesthacht. Our knowledge scale includes both basicbackground knowledge .Gender (H1) was captured as a dichotomous variable ; foreducation (H2), we used a 5-point Likert-type scale from 1 \u201cno graduation(yet)\u201d to 5 \u201cuniversity diploma\u201d .Topic-specific previous knowledge (H6) sums up the correct answers in thefirst wave of the survey .The motivation to process information about climate change was measured bytaking personal relevance of the topic (H3) as a proxy .The resM\u2009=\u20094.84,SD\u2009=\u20091.84), commercial television, national printnewspaper , weeklynewspaper or magazine , regional print newspaper, tabloid newspaper\u201cBILD-Zeitung\u201d , themost widely used online newspapers spiegel.de and bild.de , other online newspaper and interpersonaldiscussions .Media use (H7 and H9) was included in form of habitual media use, since weexpect an effect on knowledge from the media use aggregated over time ratherthan from single occasions. We measured the habitual use of media sourceswith a 7-point Likert-type scale ranging from 1 (\u201cnever\u201d) to 7 . For all types of media, examples were provided. The sourcesincluded were public television and monthlyincome (Mdn\u2009=\u20092000\u20132999\u20ac).The models used in the analysis included the control variables age. Comparing the group of peoplewith low previous knowledge (0\u20132 correct answers) to the group of people with higherthan average knowledge in T1 (3\u20137 correct answers), we found thatthe former had a significantly lower formal education and less income \u2014a typical set of factorsaccompanying knowledge gaps (t(1120)\u2009=\u20095.42,p\u2009<\u2009.001).We found a rather low initial level of public knowledge about climate politics\u2009=\u200910.28, p\u2009<\u2009.001), contrary tothe assumption of the knowledge gap hypothesis.In wave 1, the mean difference between both groups was 2.66 correct answers. In wave2, the difference was only 1.84 correct answers, since the \u201clow previous knowledge\u201dgroup had improved their score, while the \u201chigh knowledge\u201d group had given lesscorrect answers in the mean. To test whether this difference between the two groupswas significant, a new variable for the learning effect was calculated (a sum indexof the difference between the first and second waves per item), which indicateswhether a person performed better, equally well, or worse after the climate summitthan before. Subsequently, the learning effect was compared between the two groupsby an unpaired F\u2009=\u200947.348,p\u2009<\u2009.001. 
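As a sketch of how the learning-effect comparison described above can be computed, the snippet below codes correct answers as 1, builds a per-respondent difference index between waves, and compares the low- and high-prior-knowledge groups. The column names, the synthetic data, and the use of Welch's t-test are assumptions for illustration, not the authors' exact analysis code.

```python
import numpy as np
import pandas as pd
from scipy import stats

# toy data: seven knowledge items per wave, 1 = correct, 0 = incorrect or "don't know"
rng = np.random.default_rng(3)
n = 200
w1 = pd.DataFrame(rng.integers(0, 2, (n, 7)), columns=[f"k{i}_w1" for i in range(7)])
w3 = pd.DataFrame(rng.integers(0, 2, (n, 7)), columns=[f"k{i}_w3" for i in range(7)])
df = pd.concat([w1, w3], axis=1)

df["know_w1"] = w1.sum(axis=1)                  # prior knowledge (sum score, wave 1)
df["know_w3"] = w3.sum(axis=1)                  # knowledge after the summit (wave 3)
df["learning"] = df["know_w3"] - df["know_w1"]  # learning-effect index
df["low_prior"] = df["know_w1"] <= 2            # 0-2 vs. 3-7 correct answers in wave 1

low = df.loc[df["low_prior"], "learning"]
high = df.loc[~df["low_prior"], "learning"]
t, p = stats.ttest_ind(low, high, equal_var=False)  # unpaired comparison of the two groups
print(df.groupby("low_prior")[["know_w1", "know_w3", "learning"]].mean())
print(f"t = {t:.2f}, p = {p:.4f}")
```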
The R\u00b2 for the final model was .483(adjusted R\u00b2\u2009=\u2009.473), thus explaining roughly half of the variance,indicative for a high goodness-of-fit according to To explain these differences in learning, in the following, we present the results ofour hierarchical regression analysis. The final model fulfills all necessarypreconditions for a multiple regression and was able to statistically significantlypredict learning, We see that the sociodemographic variables gender and education have a significanteffect in all models. A gender difference is obvious (H1): Males have learnt moreabout climate politics by the end of the climate summit. There is also a positiveeffect of formal education (supporting H2), consistent with the knowledge gaphypothesis. However, the sociodemographic variables only explain less than 2% of thevariance.not support our sixth hypothesis that higher knowledge willlead to more learning, as we will argue below when discussing this result in lightof the prior result of a diminishing knowledge gap. The set of topic-specificindividual factors raises the explained variance of the model by 30%.Motivational and cognitive factors prove most important to explain higher knowledgelevels after the climate summit. A higher personal importance of climate changecorrelates with a higher knowledge on the topic, as expected by Hypothesis 3.However, climate change skepticism has no effect; thus, Hypothesis 4 is rejected.Higher information literacy concerning climate change coverage is also associatedwith higher knowledge levels (H5 is supported). Prior knowledge about climatepolitics positively correlates with the final knowledge level. Yet this doesIn the final regression model, we additionally included media use. Reading regionalprint newspapers and the online newspaper spiegel.de has a small positive effect onknowledge . While public television has no significant effect (H8 isrejected), commercial television even seems to interfere with learning. No othersources show a significant effect . All media effects are small and significant only on a low level\u2014all typesof media use combined only add less than 2% of explained variance to the model.To sum up the analysis and findings: We conducted a regression analysis on panelsurvey data to find out how public knowledge about climate politics evolves in timesof high media coverage, and which factors influence learning about climate politics.Based on the knowledge gap hypothesis and its advancements, we includedsociodemographic, cognitive, and motivational factors as well as a differentiatedmeasurement of media use in our analysis. Prior knowledge was the most importantfactor to predict knowledge after the COP 21. This is not surprising as the overalllearning effect was small and knowledge levels remained mostly stable. Learning ispositively influenced by higher education and gender. Higher motivation toprocess climate information and self-assessed information literacy generated amoderate positive effect. 
The use of regional newspapers and spiegel.de had a minorpositive effect, while private television had a minor negative effect.These findings advance our understanding of learning from media content in three waysby advancing and challenging some assumptions related to the knowledge gaphypothesis: We find that factors explaining learning and the evolution of knowledgegaps are far more dependent on the sociocultural context than prior discussionssuggest\u2014both the media system and issue-specific national opinion cultures play arole.widening knowledge gap as a media effect.Our first and most important finding challenges the assumption of a wideningknowledge gap during a period of intense media coverage. With regards to thedistribution of knowledge about climate policy among the German public, there is nowidening gap. Instead, the gap between those who are ignorant about a number ofsubstantial aspects of climate policy and those who are better informed shrinks.Media coverage may thus have served as a leveler of knowledge\u2014albeit on a fairly lowlevel of knowledge before and after the summit and fairly small overall learningeffects. Being one of few studies that have actually tested the evolution ofknowledge over time e.g. , and theOne explanation could be that knowledge gap research has (a) neglected the contextualfactor of the media system and (b) often not measured media effects in adifferentiated way, distinguishing different types of, for example, television oronline sources. In our data, the regional press and spiegel.de have fosteredknowledge gains, while private television correlates negatively. Both positivefactors reach very broad audiences in Germany, so that they were able to diminishthe knowledge gap. For example, in the United States, public television is veryweak, the regional press and elite outlets are less widely read today, and privatetelevision is very strong, leading to differenteffects on knowledge gaps. Thus, it is plausible that media coverage, representingthe total output of a media system, may have different effects in countries withdifferent media systems.Second, the knowledge gap hypothesis is to some degree not a media effect hypothesis,but rather describes an \u201ceducation gap\u201d effect, since education predicts learningabout climate politics far better than media use\u2014confirming the findings of mostother studies on the important influence of education.Third, we find that national issue cultures influence the power of certainexplanatory factors. Surprisingly, political orientation and climate change denialwere not relevant to explain learning about climate politics. This is interestingbecause in many studies analyzing knowledge about and attitudes toward climatechange, these variables are significant predictors e.g. . HoweverIn line with prior research e.g. , those wTable b in the supplemental material)\u2014of course, those persons witha \u201cperfect\u201d previous knowledge score were not able to learn in the course of ourstudy, but their share is extremely minor.Finally, it is important to discuss our descriptive finding of an overall smalllearning effect. Knowledge levels remained mostly stable, and well-informed peoplelearnt even less than people with low prior knowledge. To understand why people witha high previous knowledge on climate politics did not learn more, ceiling effectscome into play. 
First, a methodological ceiling effect comes to mind, but does notseem plausible: Only 0.4% of the participants (five persons out of 1121) were ableto answer all questions correctly in the first wave of the survey have a small positive effect. Prior knowledge positively predictsknowledge levels after the summit\u2014however, the difference in knowledge betweenwell-informed and less informed people diminishes in a media environment that alsoprovides some basic knowledge to the less educated publics.This shows that potential knowledge gap effects can be overshadowed by a media systemleading to leveling of knowledge and ceiling effects due to limited provision ofin-depth information in the most commonly used media outlets. However, thisassumption of a media ceiling effect needs to be confirmed with a content analysis.A combination of a panel survey with a content analysis would be especiallyinsightful.We measured factual knowledge with close-ended multiple-choice items. Within a panelsurvey like ours, other types of knowledge, such as structural knowledge, aredifficult to include, but would be relevant to analyze learning in itsentirety\u2014other studies could go deeper in this regard. Furthermore, our items are anormative selection of facts we deem relevant to understand climate politics. Yet,we do not claim to have covered all facets of climate politics. Furthermore, thereis no objective standard as to which aspects are most important to know for informedcitizens (see, e.g. Although we used a panel survey design, the timeframe for learning was rather short,covering only 3\u2009months. Thus, future studies could look into learning effects over amuch longer period of time. We still expect media to contribute a relevant effect tolearning in the long term, in a dynamic process combined with other factors, andmaybe more on issues not covered by our items. The effect of media use seems to beweak in the case of the relatively short-term event-specific learning. Qualitativestudies analyzing a concrete learning process could also contribute valuableinsights into how individuals make sense of the information they receive in theirmedia repertoires and thus acquire new knowledge.Our results show that the learning process from media use cannot be explained bydisregarding the information environment provided by media systems, issue-specificpolitical discourse cultures and differences between types of media. Going beyondthe limits of this study, by combining long-term panel studies with qualitativeanalyses, is worth further research.Click here for additional data file.Supplemental material, sj-docx-1-pus-10.1177_09636625211068635 for Learning aboutclimate politics during COP 21: Explaining a diminishing knowledge gap by FenjaDe Silva-Schmidt, Michael Br\u00fcggemann, Imke Hoppe and Dorothee Arlt in PublicUnderstanding of Science"} +{"text": "CENP-SX is a histone-fold complex that is involved in chromosome segregation and DNA repair. Biochemical and crystallization analysis suggested that multiple molecules of CENP-SX may be involved in DNA binding. P21 and C2, where the volume of the P21 asymmetric unit is twice as large as that of the C2 asymmetric unit. Analysis of the self-rotation function revealed the presence of twofold and fourfold symmetry in both crystals. This suggests that there may be multiple molecules of CENP-SX and DNA within the asymmetric unit with respective symmetry. 
Structure determination of the present crystals should reveal details of the DNA-binding properties of CENP-SX.The CENP-SX (MHF) complex is a conserved histone-fold protein complex that is involved in chromosome segregation and DNA repair. It can bind to DNA on its own as well as in complex with other proteins such as CENP-TW and FANCM to recognize specific substrates. CENP-SX binds nonspecifically to dsDNA, similar to other histone-fold proteins. Several low-resolution structures of CENP-SX in complex with DNA are known, but a high-resolution structure is still lacking. The DNA-binding properties of CENP-SX and FANCM\u2013CENP-SX complexes with various lengths of dsDNA were compared and the band-shift patterns and migration positions were found to differ. To confirm the DNA-binding properties in detail, CENP-SX\u2013DNA and FANCM\u2013CENP-SX\u2013DNA complexes were crystallized. Analysis of the crystals revealed that they all contained the CENP-SX\u2013DNA complex, irrespective of the complex that was used in crystallization. Detailed diffraction data analyses revealed that there were two types of crystal with different space groups, Eukaryotes, in particular, undergo mitotic and meiotic cell cycles to proliferate and produce the next generation. Chromosome segregation and DNA repair play pivotal roles in these processes. The CENP-S (MHF1)\u2013CENP-X (MHF2) (CENP-SX) complex is a conserved histone-fold complex that participates in these processes was incubated with dsDNA of various lengths (1.25\u2005\u00b5M) at 42\u00b0C for 60\u2005min in binding buffer . The mixtures were analyzed by 10\u201320% gradient native PAGE (Wako) and stained with ethidium bromide.EMSA was performed as described previously using 31\u2005bp DNA and Natrix 2 condition No. 29 using 19\u201349\u2005bp DNA. For diffraction analysis, 1,4-dioxane was replaced by 30% MPD and the crystals were cryoprotected using 30% ethylene glycol.To form a protein\u2013DNA complex, a mixture of protein and DNA was incubated at 20\u00b0C for 60\u2005min. Initial crystallization screenings for FANCM\u2013CENP-SX\u2013dsDNA were performed using Natrix and Natrix 2 (Hampton Research) by the sitting-drop vapor-diffusion technique in a 96-well format crystallization plate. The final volume of the drop was 0.2\u2005\u00b5l, with 0.1\u2005\u00b5l of the reservoir solution and the protein\u2013DNA complex, and the plate was incubated at a constant temperature of 20\u00b0C. Initial crystals were obtained in two conditions: Natrix condition No. 12 .CENP-SX\u2013dsDNA crystallization was performed using 29\u201331\u2005bp DNA. To improve the crystallization, the mixing ratio of protein and DNA, the DNA length and the overhang structures were varied. The optimized crystal condition was 20\u2005mConditions for the production of CENP-SX\u2013dsDNA crystals with improved diffraction quality are summarized in Table\u00a022.3.HKL-2000 package (HKL Research) or XDS synchrotron facility (KEK) and were processed with the 3.et al., 2012et al., 2014et al., 2014et al., 2012Chicken and human CENP-SX bind a single dsDNA at regular intervals, whereas human FANCM\u2013CENP-SX prefers branched molecules (Nishino a). Analysis by SDS\u2013PAGE revealed that the crystals contained CENP-S and CENP-X, whereas FANCM was absent . FANCM was present as a film-like structure in the air\u2013liquid interface of the crystal droplet. 
This situation is similar to a previous report where FANCM was observed to detach from CENP-SX in the presence of organic solvent and oxidative conditions . Irrespective of the length of DNA used, FANCM\u2013CENP-SX\u2013DNA crystals appeared in the presence of 30% 1,4-dioxane. The shapes of the crystals differed according to the length of the DNA Fig. 2. Rectangnt Fig. 3b. FANCMP21, with unit-cell parameters a = 101, b = 84, c = 112\u2005\u00c5, \u03b1 = 90, \u03b2 = 105, \u03b3 = 90\u00b0. The other crystal belonged to space group C2, with unit-cell parameters a = 128, b = 81, c = 100\u2005\u00c5, \u03b1 = 90, \u03b2 = 124, \u03b3 = 90\u00b0 (Table 3C2 and P21 crystals contain \u223c80\u2005000 and \u223c160\u2005000\u2005Da, respectively, with a calculated Matthews coefficient of 2.7\u2005\u00c53\u2005Da\u22121 and a solvent content of 60%. These results suggest that multiple CENP-SX heterodimers and DNA are present in the asymmetric unit. The situation resembles previous low-resolution CENP-SX\u2013DNA complex crystal structures, in which several different crystals were formed and multiple molecules were present in the asymmetric units.The initial crystals diffracted to \u223c7\u2005\u00c5 resolution with high mosaicity. Optimization of the DNA and cryoprotectant improved the resolution Fig. 4. Data an\u00b0 Table 3. The volP21 and C2 crystals, twofold peaks in the ac plane and fourfold peaks in the 90\u00b0 plane (Fig.\u00a05et al., 2014To analyze the relationship between the multiple molecules of CENP-SX and DNA within the crystal, the self-rotation function was calculated. In both the ne Fig.\u00a05 were obs"} +{"text": "This paper addresses whether supervisory responsibility is a challenging job demand in the Job Demands-Resources (JD-R) model in different cultural contexts. We investigate how job satisfaction responds to a supervisory role with job control and selected cultural dimensions using a cross-cultural dataset of 14 countries with more than 43,000 adults using ordered logit regression models. We find that a supervisory role enhances job satisfaction and appears to be a challenging job demand. However, no studied cultural dimension, masculinity, power distance, individualism, or uncertainty avoidance, increases job satisfaction derived from this kind of responsibility. Our study indicates that there might be stereotypical assumptions about cultural dimensions concerning the job satisfaction of supervisors. There is a consensus that employee wellbeing necessitates sufficient resources that are required to perform one's job. This can be witnessed in the increasing popularity of the Job Demands-Resources (JD-R) model , covering 22 countries. The PIAAC, a survey produced by the Organization for Economic Co-operation and Development (OECD), provides comprehensive information on adults' work-life and demographic characteristics. Even though the acknowledgment of potential cultural effects on JD-R predictions has increased , reported persistently higher job satisfaction in the former group contribute to <10 percent of the observations, these categories are merged into one.In the PIAAC questionnaire, the following question was asked from respondents: \u201cDo you manage or supervise other employees? By managing or supervising other employees, we mean that a person is in some way responsible for how other employees do their work\u201d. 
Supervisory responsibility is a dummy variable and takes a value of 1 if the respondent is supervising at least one subordinate and 0 otherwise.The variable reflecting how much control an employee has over their own work and the working environment is calculated as the mean of four items. The example questions are \u201cTo what extent can you choose or change the sequence of your tasks?\u201d and \u201cTo what extent can you choose or change your working hours?\u201d The scale ranged from 1 to 5 (to a very high extent). Cronbach's alpha for job control was 0.816.Scores for MAS, PD, IND, and UA were obtained from a publicly available database using European Social Survey and European Values Survey indicators, calculated by Kaasa et al. 3 for 14 age, tenure, health conditions, number of children, wage, and hours worked, are vital factors that directly influence the wellbeing of a person. Another important characteristic is gender; while job demands and resources are positively related in male respondents, it tends to be negative in females . Employee-specific characteristics, particularly n et al. reveals This study employs the ordered logit estimation model with robust standard errors to examine if supervisory responsibility is associated with job satisfaction and the role of moderators in this link. The choice of model is justified because the job satisfaction variable is an ordered choice variable. Alternative models, such as the logit regression for binary variables and OLS regression for continuous variables, are not suitable here. By using stepwise regression modeling, the first stage of analysis estimates the direct effect of the supervisory role on job satisfaction. The second stage of investigation looks at the challenging demand hypothesis by adding job control to the regression. The third stage will predict how country-level cultural dimensions and their interactions with individual-level variables are associated with job satisfaction. Furthermore, all cultural dimensions are standardized by using the sample mean and standard deviation (the process of subtracting the mean and dividing it by the standard deviation), and their z-scores are used in regressions. In models without cultural dimensions, country fixed effects are included. Further, an empirical analysis was performed using STATA 15.1 software.b = 0.17, p < 0.01; see Model 1). Model 2 tests if supervisory responsibility together with job control enhances job satisfaction. We find that the supervisory role coupled with job control is indeed positive and significant . Thus, H1a and H1b are confirmed, and we can conclude that the supervisory role can be deemed a challenging job demand and a positive contributor to job satisfaction by itself.First, we investigated if the supervisory role contributes to job satisfaction while holding the control variables fixed see , Model 1b = 0.005), but insignificant.Before moving on to cross-level interactions, we note that, in line with previous research, cultural dimensions themselves determine the level of job satisfaction. People are more satisfied with their jobs when located in higher IND countries and less satisfied with higher PD, MAS, and UA countries. In Models 3\u20136, we present the analysis of the moderating role of cultural dimensions. In H2, we suggested that job satisfaction decreases with supervisory responsibility in MAS cultures. Model 3 interaction term is positive . 
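A compact sketch of the estimation approach described above (ordered logit on job satisfaction with z-scored cultural dimensions and a supervisor \u00d7 culture interaction) is shown below using statsmodels' OrderedModel. The variable names and synthetic data are placeholders, and the sketch omits the robust standard errors, country fixed effects, and full control set of the original analysis.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(4)
n = 1000
df = pd.DataFrame({
    "job_sat": rng.integers(1, 6, n),          # ordered job-satisfaction score
    "supervisor": rng.integers(0, 2, n),       # 1 = supervises at least one subordinate
    "job_control": rng.normal(3, 1, n),
    "pd_score": rng.normal(50, 10, n),         # country-level power distance
    "age": rng.integers(18, 65, n),
    "female": rng.integers(0, 2, n),
})

# standardize the cultural dimension and build the cross-level interaction term
df["pd_z"] = (df["pd_score"] - df["pd_score"].mean()) / df["pd_score"].std()
df["sup_x_pd"] = df["supervisor"] * df["pd_z"]

exog = df[["supervisor", "job_control", "pd_z", "sup_x_pd", "age", "female"]]
res = OrderedModel(df["job_sat"], exog, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
```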
Most surprisingly, the interaction coefficient for IND is negative and significant , which is opposite to H4. UA is insignificant, and hence H5 is not confirmed. Therefore, our results indicate that none of the cultural dimensions favor job satisfaction from supervisory responsibility.In H3, we proposed that the supervisory role has a stronger positive effect on job satisfaction in high PD cultures. Model 4 shows that the interaction effect is positive but insignificant , see b = 0.09, p < 0.01). The moderating role of cultural dimensions changes on some occasions: for MAS, the interaction coefficient becomes negative but is still insignificant. For PD, the coefficient is negative and significant . IND remains negative but has become insignificant, similarly to UA. Hence, none of our hypotheses would hold under the described variable specification.Although empirical studies show that job satisfaction is more affected by the managerial yes/no status rather than the number of subordinates is associated with job satisfaction, even though, similarly to earlier findings (Bless and Granato, We expected that a more powerful role should be appreciated in high PD countries, but our results refute this intuition. Bless and Granato offer anMoreover, there might be fewer job resources available for managers in high PD cultures. For example, feedback provided to managers in high PD cultures is almost absent, and the manager's subjective feedback to employees is feared (Hwang and Francesco, We expected that security is also more important to job satisfaction in MAS cultures (Hauff et al., We expected that an IND cultural context would facilitate a supervisor's job satisfaction because challenging and interesting work is valued positively in IND culture (Hauff et al., Finally, conforming to the general notion, higher UA does not enhance job satisfaction from the supervisory role, but no significant negative effect was found either. UA can be considered a litmus test for potential stress factors (Naseer et al., Therefore, an implication of our study is that the design of corporate incentive strategies may use stereotypical assumptions about cultural dimensions that should be revised. Assigning a supervisory role in high PD and IND environments maybe even detrimental to the person's job satisfaction in the long term. Based on the general notion that being a manager is reputable in high PD cultures, it does not, however, follow that highly respected managers are more satisfied as well. Similarly, high IND promotes employee job satisfaction in general but does not mean that supervisory responsibility is more satisfying in this environment.Regarding this study's limitations, PIAAC comprises self-reported data and standard caveats applied in this respect, including accuracy and social desirability bias. Due to cross-sectional nature of the data there are potential endogeneity issues between variables. We thus cannot claim for certain that the supervisory role increases job satisfaction, it may well be that more satisfied employees accept supervisory roles.Job satisfaction was a single-item measure in the questionnaire, and its reliability may be criticized. However, this approach in JD-R research had been adopted by Demerouti et al. , FarndalCultural dimensions were assumed on a national level, though the dimensions may vary to a large extent within one country (Kaasa et al., PIAAC covers only OECD countries, therefore, we must be careful in making global generalizations. 
OECD countries are aging societies with labor shortage in many industries, making employers more willing to design jobs that ensures employees' job satisfaction.Finally, our data is from 2011 to 2012, and drastic changes in the work environment have occurred since. Changes like digitalization and the gig economy have profoundly affected managerial work. Future studies should explore the second cycle of PIAAC data anticipated to be available by 2024.https://www.oecd.org/skills/piaac/ and https://lepo.it.da.ut.ee/~akaasa/culturaldistances/datasources.html.Publicly available datasets were analyzed in this study. This data can be found here: All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.This work was supported by the Estonian Research Council grants PRG1513 and PRG791. Also, the support by the European Union Horizon 2020 research and innovation program grant agreement no. 822781 GROWINPRO is acknowledged.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "Over the past decades, a growing body of evidence has demonstrated the impact of the psychosocial work environment on workers\u2019 health, safety and wellbeing. These factors may also affect employees \u2019job satisfaction.To explore psychosocial determinants of job satisfaction among workers in a Tunisian electricity and gas company.A cross-sectional survey was conducted among male workers in a Tunisian electricity and gas company. The Copenhagen Psychosocial Questionnaire (COPSOQ), the Job Content Questionnaire and the general health questionnaire (GHQ12) were used to assess psychosocial risk factors at work. A principal component analysis (PCA) was used to assess correlations between instruments \u2019scores. Multiple linear regression analysis was applied to explore the specific factors associated with job satisfaction. Data were analysed using R software.A total of 83 workers participated in the survey (the age range: 21-60 years). Job satisfaction score varied from 0 to 100% with a mean of 73.09 %. In the PCA, job satisfaction had a positive correlation with high social support and a negative one with work-family conflicts, a high psychological demand, stress, burnout and quantitative demands. In multivariate analysis, factors negatively associated with job satisfaction were: age, stress and low social support. In contrast, seniority was positively associated with job satisfaction.Job satisfaction is deeply influenced by the psychosocial work environment. Therefore, it is necessary to provide supervision, communication, and social support for these workers to increase or maintain a high level of job satisfaction.No significant relationships."} +{"text": "ADAMTS10 have long been known to cause autosomal recessive Weill-Marchesani Syndrome which is characterized by short stature and ocular abnormalities, more recent work has shown that certain mutations in ADAMTS10 cause glaucoma in dogs. 
In humans, glaucoma is the leading cause of irreversible vision loss that affects tens of millions of people world-wide. Vision loss in glaucoma is a result of neurodegeneration of retinal ganglion cells that form the inner-most layer of the retina and whose axons form the optic nerve which relays visual information to the brain. ADAMTS10 contributes to the formation of microfibrils which sequester latent transforming growth factor \u03b2 (TGF\u03b2). Among its many biological functions, TGF\u03b2 promotes the development of retinal ganglion cells and is also known to play other roles in glaucoma pathogenesis. The aim of this study was to test the hypothesis that ADAMTS10 plays a role in retinal ganglion cell development through regulation of TGF\u03b2 signaling. To this end, Adamts10 expression was targeted for reduction in zebrafish embryos carrying either a fluorescent reporter that labels retinal ganglion cells, or a fluorescent reporter of pSmad3-mediated TGF\u03b2 family signaling. Loss of adamts10 function in zebrafish embryos reduced retinal ganglion cell reporter fluorescence and prevented formation of an ordered retinal ganglion cell layer. Targeting adamts10 expression also drastically reduced constitutive TGF\u03b2 signaling in the eye. Direct inhibition of the TGF\u03b2 receptor reduced retinal ganglion cell reporter fluorescence similar to the effect of targeting adamts10 expression. These findings unveil a previously unknown role for Adamts10 in retinal ganglion cell development and suggest that the developmental role of Adamts10 is mediated by active TGF\u03b2 family signaling. In addition, our results show for the first time that Adamts10 is necessary for pSmad3-mediated constitutive TGF\u03b2 family signaling.Although mutations in ADAMTS10 as disease-causative for a colony of Beagle dogs with autosomal recessive inheritance of glaucoma , a rare connective tissue disorder characterized by short stature and ocular abnormalities including glaucoma (FBN1) , suggesta (FBN1) . FBN1 asa (FBN1) . Microfia (FBN1) . Mutatioa (FBN1) . ADAMTS1a (FBN1) . Recent activity . Mutatioactivity . Therefoactivity .adamts10 mRNA has been shown in the developing zebrafish embryo (ADAMTS10 is abundantly expressed in the developing mouse eye and up-regulation of h embryo . In the Danio rerio) were maintained in aquatic housing units on a monitored recirculating system at 26\u00b0C with standard light/dark (14/10\u00a0h) cycle. Experiments were performed using embryos from outcrosses of wildtype AB strain to a transgenic reporter line that expresses GFP under control of a pou4f1 (brn3a) enhancer element, Tg (pou4f1-hsp70l:GFP)rw0110bTg formerly Tg (brn3a-hsp70l:GFP)rw0110bTg provided by Dr. Takanori Hamaoka and Dr. Hitoshi Okamoto of the Riken Brain Institute through the European Zebrafish Resource Center (Tg(12xSBE:EGFP)ia16Tg, generated by Dr. Francesco Argenton, Universita di Padova, obtained through the European Zebrafish Resource Center following the manufacturer\u2019s protocol. The adamts10 probe set was checked to avoid cross detection of the adamts6 which has high homology with adamts10. A probe set for DapB, a bacterial gene, was used as negative control. Briefly, embryo sections were post-fixed in 4% PFA/PBS for 15\u00a0min at 4\u00b0C, dehydrated through gradient ethanol then treated with hydrogen peroxide for 10\u00a0min at room temperature. Target retrieval was conducted by submerging slides in retrieval solution for 5\u00a0min at 100\u00b0C. 
Following 30\u00a0min protease plus treatment at 40\u00b0C in a hybridization oven , samples were hybridized with probes, subjected to signal amplification steps and then reacted with Fast Red. Bright field images were captured using a Nikon Eclipse microscope equipped with a \u00d740 objective.Zebrafish embryos or adult eyes were embedded in 20% sucrose/OCT and 7\u00a0\u03bcm-thick cryosections were made as described below for immunohistochemistry. RNA echnique using a Injections of 1\u20132\u00a0nl were made into the yoke adjacent to the embryo at the one-cell stage at approximately 12:00 p.m. To visualize injections, 0.025% phenol red was included in the injection solution. After injection, embryos were incubated at 29\u00b0C in egg water . At approximately 10:00 a.m. on post-injection days 1, 2 or 3 , embryos were euthanized in 300\u00a0mg/L tricaine methanesulfonate, then fixed in 4% PFA/PBS. For fluorescence microscopy, 0.2\u00a0mM phenylthiourea (PTU) was added post-injection to prevent pigmentation.adamts10 mRNA (referred to from here on as adamts10 MO) was injected at 4\u20135\u00a0ng/embryo. Human ADAMTS10 mRNA was transcribed from expression plasmids described previously (in vitro T7 ultra mMessage mMachine\u2122 kit with poly-A tail addition (Thermo Fisher Scientific). For MO rescue experiments, embryos were co-injected with 4\u00a0ng adamts10 MO with 400\u00a0ng of either normal or G661R mutated human ADAMTS10 mRNA.MO obtained from Gene Tools , were reconstituted to 1\u00a0mM with RNase/DNase-free water and stored at room temperature as suggested by the manufacturer. Translation-blocking MO (5\u2032-CCA\u200bAAC\u200bTCC\u200bTCC\u200bACA\u200bCCG\u200bTTT\u200bCCA\u200bT-3\u2032) targeting the translation initiation complex of eviously using thTg(12xSBE:EGFP) embryos within 8\u00a0h after treatment (Tg (pou4f1-hsp70l:GFP) or Tg(12xSBE:EGFP) embryos. This treatment protocol did not induce gross disruption of eye morphology, though treated embryos were slightly smaller than untreated , a concentration previously shown to drastically reduce reporter fluorescence of reatment , or vehintreated . At 48\u00a0hGFP-positive embryos were dechorionated, fixed in 4% PFA/PBS for 2\u00a0h at room temperature or at 4\u00b0C overnight, then stored in PBS at 4\u00b0C. For imaging, embryos were placed ventral side up in 3% (w/v) methyl cellulose and fluorescence and brightfield images taken with a \u00d74 objective with a Nikon AZ100M fluorescence microscope.Embryos were embedded in 20% sucrose in optimal cutting temperature compound (OCT) as described by For immunostaining, cryosections from the center of the eye were rehydrated with PBS before addition of blocking buffer for 1\u00a0h. Blocking buffer was removed and primary antibody solution in incubation buffer was added. The antibodies used were 1:1,000 rabbit anti-GFP (Torrey Pines Biolabs) and 1:25 mouse anti-Isl1 . After overnight incubation at 4\u00b0C in a humidified chamber, sections were washed 3 times in PBS for 10\u00a0min and incubated in 1:1,000 Alexa Fluor-donkey-anti-mouse-546 and 1:1,000 Alexa Fluor-donkey-anti-rabbit-488 (Invitrogen) in secondary incubation buffer for 1\u00a0h. Sections were then washed 3 times with PBS and mounted with Prolong Gold mounting fluid with DAPI (Invitrogen).http://rsb.info.nih.gov/ij/plugins/surface-plot-3d.html). Statistical analyses were performed in Graphpad Prism. 
Statistical tests and sample sizes are indicated in the figure legends.Integrated fluorescence density of fluorescent embryo images, defined as the product of the area of the region of interest (ROI) and the mean gray value, was measured by drawing an ROI around the retina using the ImageJ polygon tool. 3D surface plots were generated from single-channel GFP images using the NIH ImageJ interactive 3D surface plot plug-in (adamts10 mRNA in the developing zebrafish embryos and in the adult zebrafish eye has been shown in a previous RT-PCR study (in situ hybridization (Advanced Cell Diagnostics), we found adamts10 mRNA expression in the eye of zebrafish embryos at 24, 48, and 72\u00a0hpf line of zebrafish, which expresses GFP driven by a pou4f1 enhancer element is specifically expressed in \u223c80% of developing and differentiated RGCs . We used element to test rescence . In uninrescence . As showrescence . Howeverc nerves . At 48\u00a0hadamts10, rescue experiments were performed in which Tg(pou4f1-hsp70l:GFP) embryos were co-injected with adamts10 MO and human ADAMTS10 mRNA (which is not targeted by the zebrafish adamts10 MO). GFP fluorescence was compared at 48\u00a0hpf to embryos that were uninjected or injected with adamts10 MO only. In uninjected embryos, the retina displayed strong fluorescence with staining of the optic nerve consistent with labeling of RGCs , with a more uniform distribution of fluorescence intensity along the retina in integrated fluorescence density embryos using an antibody against GFP and an antibody against Isl1, which is expressed in postmitotic RGCs and is required for RGC development line of zebrafish which expresses GFP in response to activation of a pSmad3 binding element, thereby acting as a reporter of active TGF\u03b2 superfamily signaling (Tg(12xSBE:EGFP) embryos (adamts10 MO resulted in a 66.6% reduction in fluorescence as compared to uninjected (p < 0.0001). These results indicate that targeting Adamts10 expression drastically reduces constitutive pSmad3-mediated TGF\u03b2 superfamily signaling in the retina.ADAMTS10 plays a role in microfibril structure and function . Since mignaling . In unin embryos , indicatadamts10 MO suggested that Adamts10 may exert its effect on RGC development through suppression of TGF\u03b2 signaling. To test this hypothesis, embryos were treated with an inhibitor of the TGF\u03b2 receptor, SB431542, at 11\u00a0hpf, a time at which the optic primordium has just formed (Tg(12xSBE:EGFP) line treated with SB431542 showed nearly complete reduction of pSmad3-driven GFP fluorescence, verifying effective inhibition of TGF\u03b2 signaling, while vehicle control (DMSO) had no effect (Tg(pou4f1-hsp70l:GFP) embryos were treated with SB431542. Treatment with SB431542, but not vehicle control, strongly reduced pou4f1 enhancer-driven GFP expression which is expressed early in post-mitotic RGC precursors and plays an important role in their development (Tg(pou4f1-hsp70l:GFP) line of zebrafish that express GFP driven by a pou4f1 enhancer element embryos using antibodies against GFP and Isl1 which is expressed in post-mitotic RGCs and plays key roles in RGC development line of zebrafish that expresses GFP in response to activation of a pSmad3 binding element. We found that uninjected Tg(12xSBE:EGFP) embryos expressed abundant GFP throughout the retina at 48\u00a0hpf signaling in mouse embryo fibroblasts (ADAMTS10 deficiency could vary between tissues, developmental stages and species investigated. 
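The integrated-density readout described in the methods (the product of ROI area and mean gray value, which equals the sum of pixel intensities inside the ROI) can be reproduced outside ImageJ with a few lines of array code. The image and rectangular mask below are synthetic placeholders standing in for a GFP channel and a polygon ROI around the retina.

```python
import numpy as np

rng = np.random.default_rng(5)
image = rng.random((512, 512))                  # single-channel GFP image (placeholder)
roi_mask = np.zeros_like(image, dtype=bool)     # ROI around the retina (placeholder)
roi_mask[150:350, 200:400] = True

area = int(roi_mask.sum())                      # ROI area in pixels
mean_gray = float(image[roi_mask].mean())       # mean gray value inside the ROI
integrated_density = area * mean_gray           # equals image[roi_mask].sum()
print(f"area={area}, mean={mean_gray:.3f}, IntDen={integrated_density:.1f}")
```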
Another signaling role for ADAMTS10 described by Cain et al. that we did not investigate is its role in formation of focal adhesions and epithelial cell-cell junctions (A role for ADAMTS10 in TGF\u03b2 family signaling contrasts with the results of Mularczyk et al., who found that a premature termination mutation of roblasts . Althougunctions .adamts10 MO (via a pSmad3-mediated TGF\u03b2 family pathway.TGF\u03b2 is involved in neuronal development, including in the retina, in programmed cell death and in axon specification . This lemts10 MO . Howevermts10 MO . Althougmts10 MO . Our resadamts10 expression. This indicates that the primary defect of RGC development in adamts10 MO treated embryos is likely a failure to migrate to the appropriate position resulting in a retinal lamination defect.Isl1 and Pou4f1 are expressed in early post-mitotic RGCs and are known to play important roles in their differentiation . ExpressIn summary, we have discovered a previously unknown role for Adamts10 in RGC development, possibly contributing to their apical to basal translocation and formation of an ordered ganglion cell layer. Our results suggest that the developmental role of Adamts10 is mediated by active TGF\u03b2 family signaling. In addition, our results show for the first time that Adamts10 is necessary for pSmad3-mediated constitutive TGF\u03b2 family signaling in the developing retina."} +{"text": "The endogenous hypercortisolism that characterizes Cushing\u2019s syndrome (CS) is associated with a state of hypercoagulability that significantly increases the risk of thromboembolic disease, especially, venous events. Despite this certainty, there is no consensus on the best thromboprophylaxis strategy (TPS) for these patients. Our aim was to summarize the published data about different thromboprophylaxis strategies, and to review available clinical tools assisting thromboprophylaxis decision making.Narrative review of thromboprophylaxis strategies in patients with Cushing\u2019s syndrome. A search was carried out on PubMed, Scopus and EBSCO until November 14th, 2022, and articles were selected based on their relevance and excluded in case of redundant content.Literature is scarce regarding thromboprophylaxis strategies to be adopted in the context of endogenous hypercortisolism, most often being a case-by-case decision according to the centre expertise. Only three retrospective studies, with a small number of patients enrolled, evaluated the use of hypocoagulation for the thromboprophylaxis of patients with CS in the post-operative period of transsphenoidal surgery and/or adrenalectomy, but all of them with favourable results. The use of low molecular weight heparin is the most frequent option as TPS in CS context. There are numerous venous thromboembolism risk assessment scores validated for different medical purposes, but just one specifically developed for CS, that must be validated to ensure solid recommendations in this context. The use of preoperative medical therapy is not routinely recommended to decrease the risk of postoperative venous thromboembolic events. The peak of venous thromboembolic events occurs in the first three months post-surgery.The need to hypocoagulate CS patients, mainly in the post-operative period of a transsphenoidal surgery or an adrenalectomy, is undoubtable, especially in patients with an elevated risk of venous thromboembolic events, but the precise duration and the hypocoagulation regimen to institute is yet to be determined with prospective studies. 
Cushing\u2019s syndrome (CS) is associated with an eighteen-fold higher risk of venous thromboembolic events (VTE) compared with the general population . Indeed,This hypercoagulable state is consequence of both quantitative and qualitative alterations in the haemostatic system induced by the cortisol excess.In one hand, by the increase in plasma clotting factors - especially factor VIII and Von Willebrand factor (VWF), but also factors IX, X and XI -, the decrease in plasma tissue factor pathway inhibitor (TFPI) and the impairment of fibrinolysis \u2013 by the upregulation of the synthesis of plasminogen activator inhibitor type I (PAI-1). On another hand, the overexpression of abnormally high molecular weight VWF multimers capable of inducing spontaneous platelet aggregation contribute to a higher risk of thrombotic events.Coagulation profiles in patients with CS are heterogeneously affected. The hemostatic abnormalities most consistently reported are shortening of activated partial thromboplastin time (aPTT) and increased thrombin generation \u201322. Intep\u2009=\u20090.007) OR \u201cCushing disease\u201d [MeSH Terms] OR \u201chypercortisolism\u201d [MeSH Term] AND \u201ccoagulation\u201d [MeSH Term] OR \u201chypercoagulability\u201d [MeSH Terms]. Secondly, with the intention to find articles that could bring us some light about different thromboprophylaxis strategies implemented in these patients, and also their benefits and risks, with carried out the research with the terms \u201cCushing syndrome\u201d [MeSH Terms] OR \u201cCushing disease\u201d [MeSH Terms] OR \u201chypercortisolism\u201d [MeSH Term] AND \u201canticoagulant agents\u201d [MeSH Terms].Only three retrospective studies Table\u00a0, with a p\u2009<\u20090.001), without any significant haemorrhagic event in the group of patients that received hypocoagulation strategy.The first one was done by Boscaro M et al. in 2002 . They cop\u2009=\u20090.081). No bleeding complications were observed during the follow-up in either group.The second one was conducted in 2015 by Barbot M et al. . LM. LM44]. insights . So, basinsights .It is also important to mention that this pharmacologic thromboprophylaxis should be used in addition to the mechanical thromboprophylaxis, e.g., intermittent pneumatic compression stockings or elastic stockings, to augment the efficacy of the thromboprophylaxis strategy.With respect to the use of PMT to reduce the risk of post-operative VTE, given its unclear benefit and potential adverse effects, it is not routinely recommended. Nevertheless, it should be offered on an individual basis, especially if surgery is delayed or if hypercortisolism is markedly severe .In conclusion, thromboprophylaxis strategy is not about a single, simple decision to initiate or not hypocoagulation drugs. It is all about the continuous balancing of risks and benefits, which must always be individualized, in the pre and postoperative period, for each patient who presents with CS. Nevertheless, based on the literature reviewed here, we can state a general recommendation about the adequate strategy of thromboprophylaxis to implement in patients with CS Fig.\u00a0."} +{"text": "This review systematically evaluated radiomics analysis procedures for characterizing salivary gland tumors (SGTs) on magnetic resonance imaging (MRI). Radiomics analysis showed potential for characterizing SGTs on MRI, but its clinical application is limited due to complex procedures and a lack of standardized methods. 
This review summarized radiomics analysis procedures, focusing on reported methodologies and performances, and proposed potential standards for the procedures for radiomics analysis, which may benefit further developments of radiomics analysis in characterizing SGTs on MRI.Radiomics analysis can potentially characterize salivary gland tumors (SGTs) on magnetic resonance imaging (MRI). The procedures for radiomics analysis were various, and no consistent performances were reported. This review evaluated the methodologies and performances of studies using radiomics analysis to characterize SGTs on MRI. We systematically reviewed studies published until July 2023, which employed radiomics analysis to characterize SGTs on MRI. In total, 14 of 98 studies were eligible. Each study examined 23\u2013334 benign and 8\u201356 malignant SGTs. Least absolute shrinkage and selection operator (LASSO) was the most common feature selection method (in eight studies). Eleven studies confirmed the stability of selected features using cross-validation or bootstrap. Nine classifiers were used to build models that achieved area under the curves (AUCs) of 0.74 to 1.00 for characterizing benign and malignant SGTs and 0.80 to 0.96 for characterizing pleomorphic adenomas and Warthin\u2019s tumors. Performances were validated using cross-validation, internal, and external datasets in four, six, and two studies, respectively. No single feature consistently appeared in the final models across the studies. No standardized procedure was used for radiomics analysis in characterizing SGTs on MRIs, and various models were proposed. The need for a standard procedure for radiomics analysis is emphasized. Salivary gland tumors (SGTs) constitute approximately 2\u20136.5% of all head and neck tumors, with about 80% originating from the parotid gland ,2,3. TheMagnetic resonance imaging (MRI) is widely employed in mapping the SGTs for treatment plans because it provides detailed information on soft tissue. MRI has also demonstrated comparable efficacy to FNAC in characterizing SGTs ,10,11. MThis systematic review aims to evaluate studies that have assessed the performance of radiomics analysis in characterizing benign and malignant SGTs or PA and WT on MRI. The primary focus is analyzing the methodologies and performances reported in each eligible study to provide a comprehensive summary of the approaches employed during the radiomics analysis procedure. The aim is to contribute to the standardization of a radiomics analysis procedure for characterizing SGTs on MRI, which may facilitate its translation into clinical practice.The analysis and inclusion criteria methods were pre-defined and documented for this systematic review. The review protocol was registered at PROSPERO International Prospective Register of Systematic Reviews (ID: CRD 42023446728). This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) 2020 statement [A systematic literature search was conducted through PubMed, Embase, Web of Science, Scopus, and Cochrane Library. The search encompassed studies published from the inception of the electronic databases up to 20 July 2023. The search terms used were \u201c AND radiomics AND (MRI OR magnetic resonance imaging)\u201d. These terms were chosen to ensure the inclusion of all salivary gland tumors and to summarize the methodologies and diagnostic performance of radiomics analysis in characterizing SGTs on MRI. The search results were stored in an Excel spreadsheet . 
Duplicate titles were screened and removed. As the data for this review were obtained solely from previously published studies, Institutional Review Board approval or written patient consent was not required and, therefore, waived.Original articles published in English.Participants: studies involving patients with SGTs who underwent pre-treatment head and neck MRI scans, including at least T1-weighted (T1W), T2-weighted (T2W), contrast-enhanced T1W (CE-T1W), CE-T2W, or diffusion-weighted imaging (DWI).Comparison: studies reporting the performance of radiomics analysis in characterizing SGTs on MRI.Outcomes: the primary outcome was the performance of radiomics analysis in characterizing benign and malignant SGTs on MRI; the second outcome was the performance of radiomics analysis in characterizing PA and WT on MRI.The inclusion criteria were as follows:Articles in the form of reviews, guidelines, conference proceedings, or case reports/series.Studies that did not report the area under the curve (AUC) of the radiomics models in characterizing SGTs.Studies with patient populations overlapped with previous studies conducted in the same investigated institution for assessing the same outcomes. The exclusion criteria were based on the publication time, with later studies being excluded.The exclusion criteria were as follows:Two observers (QYHA and KFH) independently screened the records based on the title and abstract, and full-text evaluation was performed for selected records. Any disagreements were resolved through discussion or consultation with a third observer (TYS).Study characteristics: first author, journal name, year of publication, city, patient recruitment period, and study design (prospective or retrospective).Patient characteristics: number of patients in the training, testing, and external datasets, methods for diagnosis of the nature of salivary gland tumors.MRI characteristics: MRI sequences used for analysis.Radiomics analysis procedure: segmentation method, number of features extracted, feature categories, methods for feature selection, number and categories of the selected features, names of the selected features, classifiers used for model build-up, and final model.Outcomes: model performance in training, testing, and external datasets.One observer (KM) extracted the following data from the included studies:A second observer (QYHA) verified the received data in an Excel spreadsheet.All of the eligible studies underwent assessments of study quality by one observer (KM) using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool and The Radiomics Quality Score (RQS). The QUADAS-2 evaluates four domains: patient selection, index test, reference standard, and flow and timing . Each doThe inter-observer agreements for article selection based on titles and abstracts were calculated using Cohen\u2019s kappa coefficients.Inter-observer agreement for the extracted features was assessed in 7/14 studies (50%), with all studies using an intra-class correlation coefficient (ICC) threshold of >0.75 to indicate high repeatability. The most common method for feature selection was the least absolute shrinkage and selection operator (LASSO), used in 8/14 studies (57.14%), followed by analysis of variance (ANOVA) in 3/14 studies (21.43%). Cross-validation or bootstrap techniques were applied in 11/14 studies (78.57%) to enhance the stability of the selected features. 
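The two steps most consistently reported across the eligible studies are a repeatability screen on the extracted features and LASSO-based shrinkage of the remaining panel. The sketch below illustrates that workflow with synthetic placeholder data; it is not any study's actual pipeline, and the simple correlation screen stands in for the ICC > 0.75 rule used in the reviewed papers.

```python
# Repeatability screen + LASSO feature selection (illustrative sketch only).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 120))                       # 60 tumours x 120 radiomics features
X_repeat = X + rng.normal(scale=0.1, size=X.shape)   # features from a repeated segmentation
y = rng.integers(0, 2, size=60)                      # 0 = benign, 1 = malignant (placeholder)

# Step 1: keep only features that are stable between the two segmentations
# (a crude stand-in for the ICC > 0.75 threshold reported in the studies).
corr = np.array([np.corrcoef(X[:, j], X_repeat[:, j])[0, 1] for j in range(X.shape[1])])
X_stable = X[:, corr > 0.75]

# Step 2: LASSO-style selection - L1-penalised logistic regression keeps only
# features with non-zero coefficients; C controls the strength of the shrinkage.
X_scaled = StandardScaler().fit_transform(X_stable)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
selector = SelectFromModel(lasso).fit(X_scaled, y)
print("features surviving LASSO:", selector.get_support().sum())
```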
Logistic regression was the most commonly employed classifier for building the radiomics model, used in 7/14 studies (50%), followed by support vector machine (SVM) in 6/14 studies (42.86%).Nine studies reported the final selected features for characterizing benign and malignant SGTs. The number of features selected for building the final model ranged from 3 to 17, a total of 61 features . Four studies reported the final selected features for characterizing PA and WT. The number of features selected for building the final model ranged from 4 to 13, a total of 36 features . No features included in the final models were found to be present in more than two studies.Thirteen studies reported final radiomics models . The perFor characterizing benign and malignant tumors, the AUCs ranged from 0.74 to 1, of which studies that used only T1W or T2W images achieved AUCs ranging from 0.74 to 0.85, and those that used only DWI images achieved AUCs ranging from 0.76 to 0.89. The highest AUC was achieved using features selected from multi-parametric MRI validated on a cross-validation dataset, while the lowest AUC was obtained using T2WI.Three studies compared the performance of radiomics models built using different classifiers. One study reported that radiomics models constructed with support vector machine (SVM) (AUC 0.893) and logistic regression (LR) (AUC 0.886) outperformed those built with k-nearest neighbors (KNN) (AUC 0.796) . AnotherTwo studies suggested that models incorporating clinical and radiomics features exhibited superior performance compared to models built solely with clinical or radiomics features ,20.Three studies compared the performances of radiomics models constructed using different MRI sequences. It was observed that models utilizing multiple MRI sequences for feature extraction outperformed those utilizing a single sequence ,21,23. AOne study demonstrated that a radiomics model constructed using a single feature category outperformed a model utilizing all feature categories . AnotherFor characterizing PA and WT, the AUCs ranged from 0.80 to 0.96, of which studies that used only T1W or T2W images achieved AUCs of 0.80\u20130.96, and those that used only DWI images achieved an AUC of 0.93. The highest and lowest AUCs were achieved using features selected from T2W images.Similar to the BT vs. MT studies, three studies indicated that models incorporating clinical and radiomics features outperformed those constructed using only clinical or radiomics features ,22,25. OThe initial flow of QUADAS-2 assessments is presented in The median score of the RQS was 12.5 out of 36, with a range of 5 to 16 . A totalIn this systematic review, we evaluated the procedures for radiomics analysis in characterizing SGTs on MRI based on 14 eligible studies. Ten studies focused on characterizing benign and malignant tumors, reporting AUC values ranging from 0.74 to 1. Seven studies differentiated PA and WT, with AUCs ranging from 0.80 to 0.96. Despite the promising accuracy of radiomics, with most studies achieving AUCs above 0.80, these studies employed various methods for radiomics analysis, and none of the resulting models have been further validated or widely implemented in clinical practice. To note, no features included in the final models were found to be present in more than two studies. 
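Since logistic regression and SVM were the classifiers used most often, a simple way to compare them on a fixed feature panel is cross-validated AUC. The sketch below shows that comparison on synthetic placeholder data; it does not reproduce any of the reviewed models.

```python
# Comparing LR and SVM by cross-validated AUC (illustrative sketch only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 10))      # 80 tumours, 10 selected radiomics features
y = rng.integers(0, 2, size=80)    # 0 = benign, 1 = malignant (placeholder labels)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
models = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC()),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.2f} (+/- {auc.std():.2f})")
```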
This systematic review provides valuable insights into MRI sequence selection, image preprocessing, feature extraction, feature selection, and model development, which can help standardize the procedures for radiomics analysis in characterizing SGTs on MRI.The MRI protocol for SGTs typically includes multiple MRI sequences, such as T1WI, T2WI, and CE images. Functional MRI techniques, such as dynamic contrast-enhanced (DCE) MRI and DWI, have also demonstrated potential for characterizing SGTs and are increasingly included in the MRI protocol for SGTs . In radiImage preprocessing and feature extraction play crucial roles in radiomics analysis, but their implementation and reporting were inconsistent across the studies reviewed. Only five studies reported on image preprocessing, while the details of this step were unclear in the remaining studies. Image preprocessing is essential for standardizing images obtained from different institutions with varying protocols. This step is particularly critical but challenging to implement for MRI as MRIs are constructed by weighting the signals corresponding to the magnetic properties of the tissues being imaged. Several common methods have been proposed to address different protocol parameters and improve the quality and consistency of MRI images, including the \u201c\u03bc \u00b1 3\u03c3\u201d method, N4ITK bias field correction, resampling, and Z-score normalization ,37,38,39The selection of ROI is an important consideration and depends heavily on the heterogeneity of the tumor as depicted on imaging. Since SGTs can exhibit high heterogeneity, it is suggested to use an ROI covering the whole tumor. Interestingly, one study showed that radiomics models built using features from manual segmentation outperformed those using automatic segmentation for characterizing benign and malignant SGTs on MRI . This maOn the other hand, in diseases with low incidences, such as SGTs, the number of features extracted for analysis should be carefully considered. The reviewed studies had total patient numbers ranging from 31 to 334, and the total number of extracted features ranged from 29 to 3396. Most studies extracted over 400 features. These increased the risk of overfitting the model. One study notably demonstrated that the radiomics model built up by the selection from over 1000 radiomics features did not outperform that from 91 features . FurtherThe review showed the LASSO method as the most commonly used feature selection method in the analyzed studies, with 8/14 studies employing it. LASSO is a regularization-based method that can effectively remove irrelevant or redundant features by shrinking their coefficients to zero . This heLR and SVM were the most commonly used classifiers for building radiomics models to characterize SGTs on MRI. However, the best-performing classifier varied among the studies. In three studies that compared multiple classifiers, SVM and LR outperformed KNN in one study, XGBoost and SVM performed better than DT in another study, and SVM and LDA performed similarly in the third study ,18,21. 
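Of the preprocessing methods named above, the intensity-standardisation steps are straightforward to express directly. The sketch below shows the "mu +/- 3 sigma" clipping followed by Z-score normalisation on a placeholder image array; N4 bias-field correction and voxel resampling would be done beforehand with a dedicated imaging library and are not shown here.

```python
# Intensity clipping and Z-score normalisation for an MRI volume (sketch).
import numpy as np

def normalise_mri(image: np.ndarray) -> np.ndarray:
    """Clip intensities to mu +/- 3*sigma, then Z-score normalise."""
    mu, sigma = image.mean(), image.std()
    clipped = np.clip(image, mu - 3 * sigma, mu + 3 * sigma)
    return (clipped - clipped.mean()) / clipped.std()

# placeholder volume standing in for a real T1W/T2W acquisition
volume = np.random.default_rng(2).normal(loc=300.0, scale=80.0, size=(32, 64, 64))
print(normalise_mri(volume).std())  # ~1.0 after normalisation
```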
MFocus on non-contrast-enhanced MRIs.Implement image preprocessing.Limit the number of features extracted and consider the feature categories.Evaluate inter-observer agreement for the extracted features and select those with high repeatability for further analysis.Use multiple feature selection methods.Ensure feature stability by different approaches.Build models using different approaches and identify the best model.Validate models using at least cross-validation dataset with/without internal or external datasets.Report the final models for future validations.Open datasets for future validations.Based on the findings of this review, the following suggestions can be made to facilitate radiomics analysis and the development of radiomics models in characterizing SGTs on MRI:Implementing these recommendations will enhance the standardization, reproducibility, and generalizability of radiomics analysis and radiomics models in characterizing SGTs on MRI, ultimately facilitating their broader use in clinical practice.This review has limitations that may result in inherent heterogeneity and publication bias. Firstly, all studies included in this systematic review were retrospective studies. Due to the retrospective nature, the concerns regarding the risk of bias in the included studies could not be avoided. However, the results from QUADAS-2 showed that most of the studies had a low risk of bias and high applicability. Secondly, a meta-analysis that evaluated the performance of radiomics models in characterizing SGTs on MRI was inappropriate due to the significant heterogeneity of the radiomics models among the eligible studies. Thirdly, according to the registered study protocol, eligible studies that were published after July were not included in the analysis. Furthermore, the added value of radiomics analysis to clinical practice remains underreported, and no studies have analyzed cost-effectiveness. Therefore, it is recommended that further studies be conducted to evaluate radiomics analysis using standardized procedures for characterizing SGTs on MRIs.Previous studies have demonstrated the potential of radiomics analysis in characterizing SGTs on MRI. However, the lack of standardized procedures for implementing radiomics analysis across these studies may have led to the limited generalizability of the final radiomics models, thereby restricting their application in clinical practice. To develop radiomics models that can be widely utilized for characterizing SGTs on MRI in the future, it is crucial to establish a consensus on the procedures for conducting radiomics analysis."} +{"text": "Yersinia pestis, is a zoonotic disease that can reemerge and cause outbreaks following decades of latency in natural plague foci. However, the genetic diversity and spread pattern of Y. pestis during these epidemic-silent cycles remain unclear. In this study, we analyze 356 Y. pestis genomes isolated between 1952 and 2016 in the Yunnan Rattus tanezumi plague focus, China, covering two epidemic-silent cycles. Through high-resolution genomic epidemiological analysis, we find that 96% of Y. pestis genomes belong to phylogroup 1.ORI2 and are subdivided into two sister clades (Sublineage1 and Sublineage2) characterized by different temporal-spatial distributions and genetic diversity. Most of the Sublineage1 strains are isolated from the first epidemic-silent cycle, while Sublineage2 strains are predominantly from the second cycle and revealing a west to east spread. 
The two sister clades evolved in parallel from a common ancestor and independently lead to two separate epidemics, confirming that the pathogen responsible for the second epidemic following the silent interval is not a descendant of the causative strain of the first epidemic. Our results provide a mechanism for defining epidemic-silent cycles in natural plague foci, which is valuable in the prevention and control of future plague outbreaks.Plague, caused by Yersinia pestis isolates spanning more than 60 years and covering two epidemics and \"silent\u201d cycles in-between reveal the phylogeographic and evolutionary background of these plague cycles.Genome sequencing of Chinese Yersinia pestis1. Three historic plague pandemics have caused over 160 million deaths worldwide2. The most recent of these pandemics, which began at the end of the 19th century in Hong Kong, China, and then spread to Africa, America, Oceania, and other parts of the world via maritime trade, lasted until the mid-20th century5. At present, plague remains a threat in many parts of the world and has been categorized by the World Health Organization since 2000 as a reemerging disease8.Plague is a deadly infectious disease caused by the gram-negative bacterium Y. pestis can shape the natural plague foci under suitable ecological conditions, sylvatic plagues in these foci exhibit recurrent cycles of epidemic and silent periods with intervals ranging from several years to decades, rather than a single continuous epidemic10. During the epidemic period, animal plague usually occurs prior to human plague, whereas in the silent period, human plague disappears and animal plague fades out with occasional rodents detected carrying the F1 antibody or antigen around the corresponding foci12. However, there is a lack of knowledge regarding the genomic diversity of Y. pestis during these epidemic-silent intervals. It is unknown whether the reactivation of plague foci is caused by invasion of a new Y. pestis strain from other populations or the awakening of offspring from the latent strain.Although Rattus tanezumi plague focus is one of the 12 major natural plague foci in China13. Active animal plague surveillance in this plague focus has been conducted since 195114, and two epidemic periods and two silent periods have been documented since 195016. The first plague epidemic period (named Epidemic1) was observed from 1950 to 1956, during which a plague outbreak occurred in 12 counties/cities in western Yunnan, involving 2950 human cases and 633 deaths16. Since 1957, plague has been well controlled, with few Y. pestis strains, bacteriophages, and positive sera of animals observed within the investigation area17. However, after a silence of 25 years, which we named the Silent1 period (1957\u20131981), another epidemic wave (Epidemic2) emerged in animals and subsequently spread to humans, lasting until 2007. Since 2008, cases of plague have faded again and the natural plague focus has entered a new silent period (Silent2), which is ongoing to present18.The Yunnan Y. pestis isolates in Yunnan provide us with an opportunity to investigate the dynamics of the pathogen\u2019s genomic diversity during these epidemic-silent cycles. Here, we analyzed the whole genomes of 356 Y. pestis strains isolated between 1952 and 2016 in the Yunnan R. tanezumi plague focus and combined genomic analysis with epidemiological data to infer genomic diversity, spread, and transfer patterns of Y. 
pestis across epidemic-silent intervals.Continuous surveillance data of more than 60 years and density sampling of 16.In this study, 356 strains were isolated from 11 prefectures or cities in Yunnan Province, China, and Myanmar, near the China-Myanmar border, between 1952 and 2016, spanning two epidemic-silent cycles Fig.\u00a0. Among tY. pestis in Yunnan during the two epidemic-silent cycles, we reconstructed the phylogeny of 356 isolates from the Yunnan R. tanezumi plague. Ninety-six percent of the strains (343 strains) were assigned to phylogroup 1.ORI2, which is associated with the third historic pandemic was higher than those in Sublineage2 Fig.\u00a0. TherefoY. pestis during the two epidemics in Yunnan, we integrated the information of phylogenetic relationships, sampling dates, and geographic distribution of the 343 Y. pestis strains. Phylogeographic analysis revealed that strains of Cycle1, mostly corresponding to Sublineage1, were distributed in western Yunnan, including Dehong Dai-Jingpo Autonomous Prefecture (DH), the Dali Bai Autonomous Prefecture (DL), and Baoshan City (B) Figs.\u00a0. Most stSublineage2, including >99.29% of Cycle2 strains, can be further subdivided into four sub-clades (1.ORI2.2.3.1\u20131.ORI2.2.3.4) Figs.\u00a0. Strains20, isolated in 1962 (Saigon-Nhatraung-62-3), 1967 , 1986 , and 1988 (P-14709) were grouped together and formed a distinct clade holds that an ancestral-descendant connection exists between the two cycles. Y. pestis is preserved in individual hosts and vectors, or in the environment, such as soil, during the silent period, and its offspring can cause outbreaks again under suitable ecological conditions25. The second possibility (Hypothesis2) is that the introduction of new Y. pestis strains from other populations lead to the resurgence of plague in a specific focus26. This can occur through various scenarios, such as the rise of a distinct local population, spillover from an adjacent plague focus, or the introduction of a strain from a distant plague focus.Epidemiological evidence indicates that plague in a natural focus is characterized by periodic alternations between epidemics and silence. Normally, plague outbreaks cause a population decline of local susceptible hosts so the population size falls\u00a0below the threshold to maintain the epidemic, which then leads to the silence phase of plague in the focus. Following years or decades of recovery, the number of R. tanezumi plague provides an ideal sample set to test the above assumptions. The surveillance here indicated that between 1952 and 2016, it experienced two epidemic periods and two silent periods and Cycle2 (including Epidemic2 and Silent2) could be subdivided into two sister clades: Sublineage1 (including 98.33% of Cycle1 strains) and Sublineage2 (including 99.29% of Cycle2 strains), respectively DH strains collected in Epidemic1, no obvious pattern could be summarized for spread during this period.We also inferred the spread pattern and possible origin of 27. Epidemiological data also indicated that DH was the area where plague first emerged and last disappeared during Epidemic2, and Y. pestis strains could be isolated in this region throughout the epidemic. Given that DH is a prefecture situated on the western edge of Yunnan Province\u00a0and bordering northern Myanmar were attributed to two independent migration events from the Central Highlands occurring in the early 1980s, a decade before the first human cases were reported. They propose that Y. 
pestis might survive in local wildlife before transmission to humans. This could also be the case for the Yunnan plagues, as human plague appeared four years later than the epizootic plague in Epidemic216.Epidemic-silent cycles of plague have been reported in other countries, with the plague in Mahajanga, a northwestern coastal city in Madagascar, being well-studiedY. pestis strains in Southeast Asia, it is difficult to investigate the origin of Yunnan R. tanezumi plague epidemics. Second, although no significant genomic variations in the strains responsible for Cycle1 and Cycle2 were identified, a bacterial pathogenicity experiment is still needed to determine the difference between Y. pestis isolated from these two periods to further explore the mechanism of epidemic cycles. Finally, ecological and environmental dynamics in the niche, such as climate change, have been proven to be related to plague outbreaks32. A comprehensive analysis of eco-evolutionary dynamics should be conducted in the future to clarify the causes and mechanisms, leading to the resurgence of the plague epidemic.Our study has several limitations. First, a sampling bias should be noted. The severity of plague was comparable between DL and DH during the Epidemic1 period. However, because of staffing\u00a0shortages and material and financial resources in the 1950s, Epidemic1 strains were rarely collected in DH. Thus, there is insufficient genomic evidence to infer the origin and spread pattern of Epidemic1. Together with insufficient sampling of Y. pestis strains sampled over 60 years from the Yunnan R. tanezumi plague focus, which were involved in two epidemic-silent cycles. We found that the plague in Yunnan Province originated from DH or its adjacent countries and spread from west to east. Importantly, we found that the two plague epidemics in Yunnan were caused by two sister clades that evolved in parallel from a common ancestor and showed different genetic diversity. Our results provide robust genomic evidence that the second epidemic was led by a common ancestor, rather than a descendant of Epidemic1. A similar pattern might be the cause of epidemic-silent cycles in other natural plague foci globally, which needs further verification using a suitable dataset in the future.In this study, we analyzed the genetic diversity and spread pattern of Y. pestis genomes were used in this study. Among them, 470 global isolates were downloaded from The National Center for Biotechnology Information GenBank on October 19, 2020 kit . Whole-genome sequencing was performed on the Illumina X-Ten platform with a 150-bp paired-end sequencing library. An average of 500\u2009Mb of clean data were generated for each strain. The reads were then assembled using SPAdesY. pestis CO92 (1.ORI1 phylogroup strain) (accession no. NC_003143.1) using MUMmer (v3.0)34 to generate base alignments and identify SNPs. Sequencing reads were mapped to the reference to evaluate SNP accuracy for each strain using BWA (v0.7)36 and GATK (v3.8)38. Only high-quality SNPs, with base quality >20 , supported by >10 reads, and present in the core genome shared by at least 95% of the total strains, were retained for further analysis. SNPs located in repetitive regions were removed from the SNP dataset.All assemblies were compared to the reference genome, Y. 
pestis strains, which were concatenated to construct a maximum likelihood tree using IQ-TREE (v1.6)39 under the Generalized Time Reversible (GTR) model with 100 bootstrap replicates to assess tree topology support40. Phylogenic analysis showed that 96.35% (343/356) of the Yunnan Y. pestis strains belonged to phylogroup 1.ORI2.A total of 3851 SNPs were identified in the 826 Y. pestis strains, with CO92 as the outgroup.To obtain a reliable topology with a high resolution of 1.ORI2 strains, we recalled the SNPs for the 343 Yunnan strains within this clade based on the same pipeline. Finally, 263 high-quality and non-homoplastic SNPs were identified and imported into Applied Maths Bionumerics 6.6 software to build a minimum spanning (MS) tree with hypothetical intermediate nodes for all 343 41 was employed to estimate the temporal signal before utilizing the concatenated sequences of SNPs, sampling locations, and dates of strains for discrete phylogeographic analysis in the BEAST software (v1.10)40 to speculate the spread routes of Y. pestis in Yunnan. We selected the GTR+\u03b3 substitution model, uncorrelated relaxed clock for substitution rate, constant population size, and Bayesian skyline for tree priors to perform Bayesian analysis. For each analysis, three independent Markov Chain Monte Carlo chains of 107 cycles for sub-clade 1.ORI2.2.3.3, and 106 for sub-clade 1.ORI2.2.3.4 were carried out, respectively, and then combined using LogCombiner (v2.6). Tracer (v1.7)42 was used to access the convergence and ensure that the effective sample sizes of all relevant parameters were above 20043. TreeAnnotator (v1.10) was used to generate a maximum clade credibility (MCC) tree, with the first 10% of the states excluded as burn-in. The MCC tree was visualized and modified using FigTree (v1.4) (http://tree.bio.ed.ac.uk/software/figtree/).TempEst (v1.5)Further information on research design is available in the\u00a0Supplementary InformationDescription of Additional Supplementary FilesSupplementary Data 1Supplementary Data 2Supplementary Data 3Reporting Summary"} +{"text": "Moringa stenopetala plant material is not studied yet. Thus, parts of the plant has been studied as bio adsorbents for removing toxic manganese ion from aqueous solutions in batch adsorption model. The maximum percent removal of manganese ion obtained from laboratory synthetic wastewater at equilibrium are 96.05\u00a0%, 98.90\u00a0% and 97.93\u00a0% by M. stenopetala plant leaf, bark and seed, respectively. However, the use of M. stenopetala plant leaf procedures an intensive color with unpleasant odor, which is inauspicious. Therefore, M. stenopental plant leaf was no longer examined for isotherm and kinetics studies. The fitness of adsorption data were confirmed based on the value of correlation coefficient (R2). Thus, adsorption by bark best fits of Temkin model with R2 value of 0.9707, while adsorption by seed follows the Langmuir model with R2 value of 0.9733. Adsorption kinetics result indicates that pseudo second-order model well fitted with R2 value of 0.9912 and 0.9947 for bark and seed adsorbents, respectively. Additionally, the applicability of laboratory-developed method was also evaluated on a multicomponent real sample taken from KK textile industry from Addis Abeba, Ethiopia. After characterization, the percentage removal of manganese ion were 79.53\u00a0% and 88.93\u00a0% for bark and seed, respectively. 
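The SNP-filtering thresholds described in the methods above (site quality > 20, more than 10 supporting reads per strain, presence in at least 95% of strains, exclusion of repetitive regions) can be summarised as a simple per-site predicate. The sketch below is not the authors' pipeline; the repeat intervals and the example numbers are hypothetical stand-ins for real inputs.

```python
# Per-site SNP filter matching the thresholds described above (sketch only).
REPEAT_REGIONS = [(10_000, 12_500), (1_200_000, 1_210_000)]  # hypothetical intervals

def in_repeat(pos: int) -> bool:
    """True if the position falls inside a masked repetitive region."""
    return any(start <= pos <= end for start, end in REPEAT_REGIONS)

def keep_site(qual: float, depths: list, n_strains: int, pos: int) -> bool:
    """Apply the quality, depth, core-genome and repeat-region criteria."""
    called = sum(1 for d in depths if d > 10)          # strains with >10 supporting reads
    return (
        qual > 20
        and called / n_strains >= 0.95                 # core-genome (95%) criterion
        and not in_repeat(pos)
    )

# illustrative call for one candidate SNP across 356 strains
print(keep_site(qual=38.0, depths=[25] * 350 + [4] * 6, n_strains=356, pos=54_321))
```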
This achievement is promising and in a good agreement with the results of single component laboratory synthetic wastes.Removal of heavy metal ions from industrial effluents using environmental friendly bioadsorbents is currently promising approach. However, removal of manganese metal ion via Industrial waste dispose into enviromental segments is a significant issue that affects the biota in the receiving environment. Heavy metal concentrations in industrial waste effluents from the metal-finishing and metallurgical sectors are frequently high and pose major environmental pollution issues. One of the biggest issues is heavy metal contamination into waterbody and soil. In addition, toxic metal compounds came into contact with earth's surface can also leach into underground water after rain and snowfall. Consequently, several harmful metals may be present in the earth's water as well ,2. The bM. stenopetala plant parts including bark and seed were reported that contains different functinal groups/phytochemicals like OH-, amines, carbonyls, sulphates, arromatics, and the like which is responsible for adsorpative removal of metal ions from industrial influents [Heavy metals and harmful compounds can be removed from industrial effluents using a variety of techniques, including chemical precipitation, ion exchange, coagulation/flocculation, reverse osmosis, and others . The majnfluents .M.stenopetala seed and bark as adsorbent material for manganese ion adsorption removal in batch adsorption model.Nowadays, there is a wide expansions of industries established for the manufacture of processed goods along with excessive wastes throughout Ethiopia. However industrial wastes are not successfully treated before disposed into the nearby environmental segments. The economy of the country is not strong enough to use more advanced waste treatment technology. As a result, there is an urgent need to develop low-cost, effective and biofriendly adsorbents that effectively remove wastes. In line with this, there are reports that natural adsorbents constitute an excellent alternative for chemical remediation of heavy metals from industrial effluents . Kebede M. stenopentala plant materials from wastewater. The results made by this study confirms excellent, enviromental friendly and inexpensive adsorbents for removal of manganese metal ion from industrial effluents.The main objective of this study was to investigate manganese ion removal efficiency of 2M.stenopetala plant samples were collected from koso share village, near Arbaminch in Southern Ethiopia, which is 500\u00a0km from Addis Ababa. M.stenopetala tree leaf, bark and seed were washed by distilled water to remove any contaminates. Then, seed was dried for 24\u00a0h at 110\u00a0\u00b0C and bark was dried at 105\u00a0\u00b0C in an oven overnight [, [vernight . The drinight [, .2.14.H2O in double distilled water [Analytical-grade reagents and deionized water were used in this study. 1000\u00a0mg/L of a synthetic stock solution of manganese was prepared using MnSOed water . The wor2.2m.stenopetala plant parts characterization taken from Arbaminch, Ethiopia confirmed the availablity of phytochemicals with a wide ranges of organic functional groups as reported Kebede and his coworkers in 2018 [The in 2018 ,11. Havi2.3The batch experiment was performed by adding desired amount of metal solution in 50\u00a0mL volumetric flask at desired adsorbent dose, pH, agitation speed and temperature. 
The solution was transforrmed in to 250\u00a0mL conical flask and shaken by mechanical shaker at desired rpm for definite periods. Adsorbent dose, metal ion concentration, contact time, agitation speed, pH, and temperature were optimized by continuous variation approach (studying one parameter keeping the others constant) . The dif2.4Wastewater samples were collected in triplicates from local industries found in Addis Abeba, KK textile industry. Three waste water samples were colleted from the discharge point of the effluent using preclear and acidified plastic bottles with a 30\u00a0min interval and it was mixed together. The sample was then, kept in ice bag and transported to laboratory for analysis.2.5and coworkers (2020) [i and Ce being the initial and metal concentration in mg/L at equilibrium, respectively.The percent of metal removed, R, was calculated according to Mahmoud, A. E. s (2020) given asqe (mg/g), was calculated using equation in 2021 [For isotherm and kinetic study the data were taken at the equilibrium conditions using the equations dislayed under in 2021 ,20.2qe= are bark(98.90\u00a0%)\u00a0>\u00a0seed (97.93\u00a0%)\u00a0>\u00a0leaf (96.05\u00a0%).From the batch adsorption study: result indicated in 52 are regarded as a measure of the goodness-of-fit of experimental data. Also values of RL found to be between 0 and 1 indicating that both adsorbents are favorable for adsorption of Manganese ion from aqueous solution. Additionally, the adsorption intensity (n) is greater than unity implies that the forces within the surface layer are attractive confirming manganese ion is favorably adsorbed by M.stenopetala plant (bark and seed) [To find the most appropriate model, the data were fitted to different isotherm models and computed using equations presnted in nd seed) ,33. PartR2 value of the isotherms showed both isotherms fit well. In comparison with each other, the Langmuir model was able to adequately describe the adsorption of Manganese ion by seed with R2 value of 0.9733. While, Temkin model successfully describes the adsorption of Manganese ion by bark with R2 value of 0.9707, which reflects the occupation of the more energetic adsorption sites at first. It is highly probable that their adsorption sites are energetically non-equivalent. Similar results were also reported in previuos studies [M.stenopetala. This is obviously may be because of the surface morphology and functional groups available on each part of M.stenopetala plant.Where, 1/n), n\u00a0=\u00a0Freundlich Adsorption tendency,B = Related to the occupied surface of Temkin modelAs shown in studies . The ads6To ascertain the adsorption kinetics of heavy metal ions, the kinetics parameters for the adsorption process were evaluated for various contact times by measuring the metal ion % adsorption. Following that, the data were regressed by different kinetics models, such as the Elovich kinetics equation, the Intraparticle diffusion kinetics equation, pseudo-first order kinetics equation and a pseudo-second order kinetics equation as presented in 2 value indicated in M. stenopetala bark and seed is regarded as pseudo-second order kinetics. Similar results were also found in Refs. [M.stenopetala.The value of correlation coefficient, in Refs. ,31. 
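The quantities used throughout this section, percent removal R and equilibrium uptake qe, follow the standard batch-adsorption formulas R = (Ci - Ce)/Ci x 100 and qe = (Ci - Ce) x V / m. The sketch below simply implements those two expressions; the numbers in the example call are illustrative, not measured values from the study.

```python
# Standard batch-adsorption bookkeeping: percent removal and uptake (sketch).
def percent_removal(c_initial: float, c_equilibrium: float) -> float:
    """R (%) from initial and equilibrium metal concentrations (mg/L)."""
    return (c_initial - c_equilibrium) / c_initial * 100.0

def uptake_qe(c_initial: float, c_equilibrium: float, volume_l: float, mass_g: float) -> float:
    """qe (mg of metal per g of adsorbent) for a batch run."""
    return (c_initial - c_equilibrium) * volume_l / mass_g

# illustrative run: 50 mL of 10 mg/L Mn(II) solution with 0.5 g of adsorbent
print(percent_removal(10.0, 0.2))        # ~98 % removal
print(uptake_qe(10.0, 0.2, 0.05, 0.5))   # ~0.98 mg/g
```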
The 6.1The adsorbent materials have various types of phytochemicals with multifunctional groups as well as surface morphologies, according to previous study reports on the adsorption phenomena ,11,13,206.2M.stenopetala tree (bark and seed) as an adsorbent material was assessed by its application in treatment of industrial waste water sample. Textile industry waste water samples containing the selected metal ions were collected from local industries situated in Addis Abeba, KK textile industry. The physicochemical characteristics of industrial wastewater that could affetc adsorption process was studied as shown in The utilization of the 6.3M.stenopeta bark and seed as adsorbent materials for the removal of manganese ions from industrial effluents were evaluated as shown in Application of M. stenopetal plant bark remove about 79.53\u00a0% of manganese ion from real industrial wastewater taken from KK textile industry, in Addis Abeba, Ethiopia. In the other hand the seed adsorption is 88.93\u00a0% of manganese from same real wastewater taken from KK textile industry. According to the obtained results on adsorption removal of mananese ion from KK textile industry, the seed is more efficient in adsorption of manganese ion than the bark (seed- 88.93\u00a0%\u00a0>\u00a0bark \u221279.53\u00a0%). It shows a slight decrease of adsorption efficiency comparing to single manganese ion containing synthetic wastewater due to computation of other ions from real wastewater to occupy adsorption sites. This result indicates that M.stenopetala plant parts are a suitable cheap and effective adsorbent for the removal of manganse metal ions from industrial effluents.From this result, it is possible to suggest adsorbent effectiveness of manganese metal ion holding the optimum conditions obtained using laboratory prepared synthetic wastewater. M. stenopetala bark and seed were compared with literature values shown in M. stenopetala bark and seed adsorbents for remediation of environment from manganese pollution. The antagonistic effect of heavy metal removal under ideal circumstances was confirmed by the results in Adsorption efficiency of 7M.stenopetal plant bark and seed adsorbent materials in lab scale. The maximum adsorption efficiency of bark and seed using single component synthesic wastewater are 98.90\u00a0% and 97.93\u00a0% while in multicomponent real wastewater are 79.53\u00a0% and 88.93 respectively. Based on the obtained results, the researcher concluded that the bark and seed of M.stenopetala plant parts are effective adsorbents for remoal of manganese ion from industrial enffluents. This study found that adsorption parameters such as pH, contact time, adsorbent dosage, temperature, and stirring speed affect Manganese ion adsorption. The equilibrium data were applied to the Langmuir, Temkin and Freundlich isotherm models and pseudo first order, pseudo second order kinetics, Elovich and Intraparticle diffusion kinetics model. It was found that the adsorption of manganese by bark well fitted to the Temkin isotherm model while Langmuir isotherm successfully describes the adsorption of Manganese ion by seed very well. Among the kinetics models, the pseudo second order kinetic model best fits for the kinetics study that describes the chemosorption mechanisms of adsorption. The developed method is promising in treatment of industrial wastes. In conclusion, this study confirms a potential of environmental friendly M. stenopetala plant bark and seed as an adsorbent in removing manganese ion from wastewater. 
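For the pseudo-second-order model reported as the best kinetic fit, parameters are usually recovered from the linearised form t/qt = 1/(k2*qe^2) + t/qe by least squares. The sketch below shows that fit on synthetic time/uptake pairs; the data points are placeholders, not the study's measurements.

```python
# Linearised pseudo-second-order kinetic fit (illustrative sketch only).
import numpy as np

t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)     # contact time, min
qt = np.array([0.42, 0.61, 0.78, 0.89, 0.93, 0.96, 0.97])   # uptake, mg/g (synthetic)

slope, intercept = np.polyfit(t, t / qt, deg=1)  # fit t/qt = intercept + slope*t
qe_fit = 1.0 / slope                             # equilibrium uptake from the slope
k2_fit = slope**2 / intercept                    # rate constant, g/(mg*min)

predicted = t / (intercept + slope * t)
ss_res = np.sum((qt - predicted) ** 2)
ss_tot = np.sum((qt - qt.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"qe = {qe_fit:.2f} mg/g, k2 = {k2_fit:.3f} g/(mg*min), R^2 = {r_squared:.4f}")
```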
The results obtained are therefore very encouraging for developing an industrial application of the bio-adsorption technique. This study evaluated the removal of manganese ions from synthetic wastewater using low-cost and effective M. stenopetala bark and seed adsorbents.

No funding was received for conducting this study.

The data used to support the findings of this study are included in this report.

Ashenafi Zeleke Melaku: Writing – review & editing, Writing – original draft, Validation, Project administration, Methodology, Investigation, Funding acquisition, Formal analysis, Data curation, Conceptualization.

There is no conflict of interest in this study." \ No newline at end of file