Growth factor receptor A growth factor receptor is a receptor that binds to a growth factor. Growth factor receptors are the first stop in the cell where the signaling cascades for cell differentiation and proliferation begin. Growth factors, the ligands that bind these receptors, initiate the signal that tells the cell to grow and/or divide. These receptors may signal through the JAK/STAT, MAP kinase, and PI3 kinase pathways. The majority of growth factor receptors are receptor tyrosine kinases (RTKs). Three receptor types dominate research: the epidermal growth factor receptor, the neurotrophin receptor, and the insulin receptor. All growth factor receptors are membrane-bound and composed of 3 general protein domains: extracellular, transmembrane, and cytoplasmic. The extracellular domain is where a ligand binds, usually with very high specificity. In RTKs, the binding of a ligand to the extracellular ligand-binding site leads to the autophosphorylation of tyrosine residues in the intracellular domain. These phosphorylations allow other intracellular proteins to bind via their phosphotyrosine-binding domains, which results in a series of physiological responses within the cell. Current research focuses on growth factor receptors as targets for cancer treatment; epidermal growth factor receptors in particular are heavily involved in oncogene activity. | Biology | https://en.wikipedia.org/wiki?curid=11832350 | Growth factor receptor | 193,963 |
Growth factor receptor Once a growth factor binds to its receptor, a signal transduction pathway is activated within the cell to regulate cellular function. In cancerous cells, however, the pathway may fail to turn on, or fail to turn off. Furthermore, in certain cancers, receptors (such as RTKs) are often observed to be overexpressed, which corresponds to the uncontrolled proliferation and differentiation of cells. For this same reason, receptor tyrosine kinases are often a target for cancer therapy. | Biology | https://en.wikipedia.org/wiki?curid=11832350 | Growth factor receptor | 193,964 |
Latent TGF-beta binding protein The latent TGF-beta binding proteins (LTBPs) are a family of secreted, multidomain carrier proteins originally identified by their association with the latent form of transforming growth factor beta. They interact with a variety of extracellular matrix proteins and may play a role in the regulation of TGF-beta bioavailability. | Biology | https://en.wikipedia.org/wiki?curid=11834300 | Latent TGF-beta binding protein | 193,965 |
Polypyrimidine tract-binding protein Polypyrimidine tract-binding protein, also known as PTB or hnRNP I, is an RNA-binding protein. PTB functions mainly as a splicing regulator, although it is also involved in alternative 3' end processing, mRNA stability and RNA localization. | Biology | https://en.wikipedia.org/wiki?curid=11835159 | Polypyrimidine tract-binding protein | 193,966 |
Nuclear cap-binding protein complex The nuclear cap-binding protein complex is an RNA-binding protein complex that binds to the 5' cap of pre-mRNA. The cap and the nuclear cap-binding protein complex have many functions in mRNA biogenesis, including splicing, 3'-end formation (by stabilizing the interaction of the 3'-end processing machinery), nuclear export, and protection of transcripts from nuclease degradation. When RNA is exported to the cytoplasm, the nuclear cap-binding protein complex is replaced by the cytoplasmic cap-binding complex. The nuclear cap-binding complex is a functional heterodimer, composed of Cbc1/Cbc2 in yeast and CBC20/CBC80 in multicellular eukaryotes. In the human complex, the large subunit, CBC80, consists of 757 amino acid residues; its secondary structure is approximately sixty percent helical, with about one percent beta sheet. The small subunit, CBC20, has 98 amino acid residues; its secondary structure is approximately twenty percent helical, with about twenty-four percent beta sheet. The human nuclear cap-binding protein complex plays an important role in the maturation of pre-mRNA and of uracil-rich small nuclear RNA. | Biology | https://en.wikipedia.org/wiki?curid=11835253 | Nuclear cap-binding protein complex | 193,967 |
Léo-Pariseau Prize The Léo-Pariseau Prize is a Québécois prize which is awarded annually to a distinguished individual working in the field of biological or health sciences. The prize is awarded by the Association francophone pour le savoir (Acfas), and is named after Léo Pariseau, the first president of Acfas. The award was inaugurated in 1944 and was the first Acfas prize (there are now approximately ten annual prizes in different disciplines). Source: Acfas – Prix de la Recherche Scientifique de l'Acfas – Prix Léo-Pariseau | Biology | https://en.wikipedia.org/wiki?curid=11851243 | Léo-Pariseau Prize | 193,968 |
Halbert L. Dunn Award The Halbert L. Dunn Award is the most prestigious award presented by the National Association for Public Health Statistics and Information Systems (NAPHSIS). The award has been presented since 1981, providing national recognition of outstanding and lasting contributions to the field of vital and health statistics at the national, state, or local level. The award was established in honor of the late Halbert L. Dunn, M.D., Director of the National Office of Vital Statistics from 1936 to 1960. Dr. Dunn was highly instrumental in encouraging the states to establish state vital statistics associations and played a major role in developing NAPHSIS. The award is presented at the Hal Dunn Awards Luncheon during the association's annual meeting. The winners of the award have been: Source: NAPHSIS | Biology | https://en.wikipedia.org/wiki?curid=11859948 | Halbert L. Dunn Award | 193,969 |
History of molecular evolution The history of molecular evolution starts in the early 20th century with "comparative biochemistry", but the field of molecular evolution came into its own in the 1960s and 1970s, following the rise of molecular biology. The advent of protein sequencing allowed molecular biologists to create phylogenies based on sequence comparison, and to use the differences between homologous sequences as a molecular clock to estimate the time since the last common ancestor. In the late 1960s, the neutral theory of molecular evolution provided a theoretical basis for the molecular clock, though both the clock and the neutral theory were controversial, since most evolutionary biologists held strongly to panselectionism, with natural selection as the only important cause of evolutionary change. After the 1970s, nucleic acid sequencing allowed molecular evolution to reach beyond proteins to highly conserved ribosomal RNA sequences, the foundation of a reconceptualization of the early history of life. Before the rise of molecular biology in the 1950s and 1960s, a small number of biologists had explored the possibilities of using biochemical differences between species to study evolution. Alfred Sturtevant predicted the existence of chromosomal inversions in 1921 and, with Dobzhansky, constructed one of the first molecular phylogenies, of 17 "Drosophila pseudoobscura" strains, from the accumulation of chromosomal inversions observed in the hybridization of polytene chromosomes | Biology | https://en.wikipedia.org/wiki?curid=11863361 | History of molecular evolution | 193,970 |
History of molecular evolution Ernest Baldwin worked extensively on comparative biochemistry beginning in the 1930s, and Marcel Florkin pioneered techniques for constructing phylogenies based on molecular and biochemical characters in the 1940s. However, it was not until the 1950s that biologists developed techniques for producing biochemical data for the quantitative study of molecular evolution. The first molecular systematics research was based on immunological assays and protein "fingerprinting" methods. Alan Boyden—building on immunological methods of George Nuttall—developed new techniques beginning in 1954, and in the early 1960s Curtis Williams and Morris Goodman used immunological comparisons to study primate phylogeny. Others, such as Linus Pauling and his students, applied newly developed combinations of electrophoresis and paper chromatography to proteins subject to partial digestion by digestive enzymes to create unique two-dimensional patterns, allowing fine-grained comparisons of homologous proteins. Beginning in the 1950s, a few naturalists also experimented with molecular approaches—notably Ernst Mayr and Charles Sibley. While Mayr quickly soured on paper chromatography, Sibley successfully applied electrophoresis to egg-white proteins to sort out problems in bird taxonomy, and soon supplemented that with DNA hybridization techniques—the beginning of a long career built on molecular systematics | Biology | https://en.wikipedia.org/wiki?curid=11863361 | History of molecular evolution | 193,971 |
History of molecular evolution While such early biochemical techniques found grudging acceptance in the biology community, for the most part they did not impact the main theoretical problems of evolution and population genetics. This would change as molecular biology shed more light on the physical and chemical nature of genes. At the time that molecular biology was coming into its own in the 1950s, there was a long-running debate—the classical/balance controversy—over the causes of heterosis, the increase in fitness observed when inbred lines are outcrossed. In 1950, James F. Crow offered two different explanations (later dubbed the "classical" and "balance" positions) based on the paradox first articulated by J. B. S. Haldane in 1937: the effect of deleterious mutations on the average fitness of a population depends only on the rate of mutations (not the degree of harm caused by each mutation) because more-harmful mutations are eliminated more quickly by natural selection, while less-harmful mutations remain in the population longer. H. J. Muller dubbed this "genetic load". Muller, motivated by his concern about the effects of radiation on human populations, argued that heterosis is primarily the result of deleterious homozygous recessive alleles, the effects of which are masked when separate lines are crossed—this was the "dominance hypothesis", part of what Dobzhansky labeled the "classical position" | Biology | https://en.wikipedia.org/wiki?curid=11863361 | History of molecular evolution | 193,972 |
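Haldane's point, that the load from deleterious mutations depends on the mutation rate rather than on how harmful each mutation is, can be checked with a small calculation. This sketch uses the textbook mutation-selection-balance approximation for a deleterious allele expressed in heterozygotes; the parameter values are invented for illustration.

```python
# Mutation-selection balance: at equilibrium the deleterious allele
# frequency is roughly u/s (mutation rate u, selection coefficient s),
# so the genetic load, s * q, is roughly u, whatever the value of s.

def equilibrium_load(u, s):
    """Approximate load at mutation-selection balance (dominant allele)."""
    q = u / s        # equilibrium allele frequency
    return s * q     # reduction in mean fitness: always about u

for s in (0.001, 0.01, 0.1, 0.5):
    print(f"s = {s:<6}  load = {equilibrium_load(u=1e-5, s=s):.2e}")
```

However severe the selection coefficient is made, the printed load stays at about 1e-05, which is Haldane's result: harsher mutations are purged faster, so their per-generation cost to mean fitness is the same.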
History of molecular evolution Thus, ionizing radiation and the resulting mutations produce considerable genetic load even if death or disease does not occur in the exposed generation, and in the absence of mutation natural selection will gradually increase the level of homozygosity. Bruce Wallace, working with J. C. King, used the "overdominance hypothesis" to develop the "balance position", which left a larger place for overdominance (where the heterozygous state of a gene is more fit than the homozygous states). In that case, heterosis is simply the result of the increased expression of heterozygote advantage. If overdominant loci are common, then a high level of heterozygosity would result from natural selection, and mutation-inducing radiation may in fact facilitate an increase in fitness due to overdominance. (This was also the view of Dobzhansky.) Debate continued through the 1950s, gradually becoming a central focus of population genetics. A 1958 study of "Drosophila" by Wallace suggested that radiation-induced mutations "increased" the viability of previously homozygous flies, providing evidence for heterozygote advantage and the balance position; Wallace estimated that 50% of loci in natural "Drosophila" populations were heterozygous. Motoo Kimura's subsequent mathematical analyses reinforced what Crow had suggested in 1950: that even if overdominant loci are rare, they could be responsible for a disproportionate amount of genetic variability. Accordingly, Kimura and his mentor Crow came down on the side of the classical position | Biology | https://en.wikipedia.org/wiki?curid=11863361 | History of molecular evolution | 193,973 |
History of molecular evolution Further collaboration between Crow and Kimura led to the infinite alleles model, which could be used to calculate the number of different alleles expected in a population, based on population size, mutation rate, and whether the mutant alleles were neutral, overdominant, or deleterious. Thus, the infinite alleles model offered a potential way to decide between the classical and balance positions, if accurate values for the level of heterozygosity could be found. By the mid-1960s, the techniques of biochemistry and molecular biology—in particular protein electrophoresis—provided a way to measure the level of heterozygosity in natural populations: a possible means to resolve the classical/balance controversy. In 1963, Jack L. Hubby published an electrophoresis study of protein variation in "Drosophila"; soon after, Hubby began collaborating with Richard Lewontin to apply Hubby's method to the classical/balance controversy by measuring the proportion of heterozygous loci in natural populations. Their two landmark papers, published in 1966, established a significant level of heterozygosity for "Drosophila" (12%, on average). However, these findings proved difficult to interpret. Most population geneticists (including Hubby and Lewontin) rejected the possibility of widespread neutral mutations; explanations that did not involve selection were anathema to mainstream evolutionary biology | Biology | https://en.wikipedia.org/wiki?curid=11863361 | History of molecular evolution | 193,974 |
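The quantity Hubby and Lewontin estimated, the average heterozygosity across loci, can be sketched as follows. The allele frequencies below are made up for illustration; they are not the "Drosophila" data.

```python
# Expected heterozygosity at a locus: H = 1 - sum_i(p_i^2), where the
# p_i are allele frequencies. Averaging over surveyed loci gives the
# population-level figure that electrophoretic studies estimated.

def heterozygosity(allele_freqs):
    return 1.0 - sum(p * p for p in allele_freqs)

loci = [
    [1.0],            # monomorphic locus: H = 0
    [0.9, 0.1],       # mildly polymorphic
    [0.5, 0.5],       # maximally polymorphic for two alleles: H = 0.5
    [0.7, 0.2, 0.1],  # three alleles
]
H_bar = sum(heterozygosity(f) for f in loci) / len(loci)
print(f"mean heterozygosity = {H_bar:.3f}")
```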
History of molecular evolution Hubby and Lewontin also ruled out heterozygote advantage as the main cause because of the segregation load it would entail, though critics argued that the findings actually fit well with the overdominance hypothesis. While evolutionary biologists were tentatively branching out into molecular biology, molecular biologists were rapidly turning their attention toward evolution. After developing the fundamentals of protein sequencing with insulin between 1951 and 1955, Frederick Sanger and his colleagues had published a limited interspecies comparison of the insulin sequence in 1956. Francis Crick, Charles Sibley and others recognized the potential for using biological sequences to construct phylogenies, though few such sequences were yet available. By the early 1960s, techniques for protein sequencing had advanced to the point that direct comparison of homologous amino acid sequences was feasible. In 1961, Emanuel Margoliash and his collaborators completed the sequence for horse cytochrome c (a longer and more widely distributed protein than insulin), followed in short order by a number of other species. In 1962, Linus Pauling and Emile Zuckerkandl proposed using the number of differences between homologous protein sequences to estimate the time since divergence, an idea Zuckerkandl had conceived around 1960 or 1961 | Biology | https://en.wikipedia.org/wiki?curid=11863361 | History of molecular evolution | 193,975 |
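Pauling and Zuckerkandl's proposal reduces to a two-step calculation: count the differences between homologous sequences, then divide by a calibrated per-site rate. Both the sequences and the rate below are invented for illustration, not drawn from any real protein.

```python
# Molecular-clock estimate from a pair of aligned homologous sequences.

def p_distance(seq_a, seq_b):
    """Fraction of aligned sites at which the two sequences differ."""
    assert len(seq_a) == len(seq_b)
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    return diffs / len(seq_a)

seq1 = "MVLSPADKTNVKAAWGKVGA"   # hypothetical 20-residue fragments
seq2 = "MVLSGEDKSNIKAAWGKIGG"
d = p_distance(seq1, seq2)       # 6 differences out of 20 sites
rate = 1e-9   # substitutions per site per year (hypothetical calibration)
# Divide by 2 because changes accumulate independently on both lineages.
t = d / (2 * rate)
print(f"p-distance {d:.2f} -> ~{t / 1e6:.0f} million years since divergence")
```

Real analyses correct the raw proportion for multiple substitutions at the same site, which this sketch omits.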
History of molecular evolution This began with Pauling's long-time research focus, hemoglobin, which was being sequenced by Walter Schroeder; the sequences not only supported the accepted vertebrate phylogeny, but also the hypothesis (first proposed in 1957) that the different globin chains within a single organism could also be traced to a common ancestral protein. Between 1962 and 1965, Pauling and Zuckerkandl refined and elaborated this idea, which they dubbed the molecular clock, and Emil L. Smith and Emanuel Margoliash expanded the analysis to cytochrome c. Early molecular clock calculations agreed fairly well with established divergence times based on paleontological evidence. However, the essential idea of the molecular clock—that individual proteins evolve at a regular rate independent of a species' morphological evolution—was extremely provocative (as Pauling and Zuckerkandl intended it to be). From the early 1960s, molecular biology was increasingly seen as a threat to the traditional core of evolutionary biology. Established evolutionary biologists—particularly Ernst Mayr, Theodosius Dobzhansky and G. G. Simpson, three of the founders of the modern evolutionary synthesis of the 1930s and 1940s—were extremely skeptical of molecular approaches, especially when it came to the connection (or lack thereof) to natural selection. Molecular evolution in general—and the molecular clock in particular—offered little basis for exploring evolutionary causation | Biology | https://en.wikipedia.org/wiki?curid=11863361 | History of molecular evolution | 193,976 |
History of molecular evolution According to the molecular clock hypothesis, proteins evolved essentially independently of the environmentally determined forces of selection; this was sharply at odds with the panselectionism prevalent at the time. Moreover, Pauling, Zuckerkandl, and other molecular biologists were increasingly bold in asserting the significance of "informational macromolecules" (DNA, RNA and proteins) for "all" biological processes, including evolution. The struggle between evolutionary biologists and molecular biologists—with each group holding up their discipline as the center of biology as a whole—was later dubbed the "molecular wars" by Edward O. Wilson, who experienced firsthand the domination of his biology department by young molecular biologists in the late 1950s and the 1960s. In 1961, Mayr began arguing for a clear distinction between "functional biology" (which considered proximate causes and asked "how" questions) and "evolutionary biology" (which considered ultimate causes and asked "why" questions). He argued that both disciplines and individual scientists could be classified on either the "functional" or "evolutionary" side, and that the two approaches to biology were complementary. Mayr, Dobzhansky, Simpson and others used this distinction to argue for the continued relevance of organismal biology, which was rapidly losing ground to molecular biology and related disciplines in the competition for funding and university support | Biology | https://en.wikipedia.org/wiki?curid=11863361 | History of molecular evolution | 193,977 |
History of molecular evolution It was in that context that Dobzhansky first published his famous statement, "nothing in biology makes sense except in the light of evolution", in a 1964 paper affirming the importance of organismal biology in the face of the molecular threat; Dobzhansky characterized the molecular disciplines as "Cartesian" (reductionist) and organismal disciplines as "Darwinian". Mayr and Simpson attended many of the early conferences where molecular evolution was discussed, critiquing what they saw as the overly simplistic approaches of the molecular clock. The molecular clock, based on uniform rates of genetic change driven by random mutations and drift, seemed incompatible with the varying rates of evolution and environmentally-driven adaptive processes (such as adaptive radiation) that were among the key developments of the evolutionary synthesis. At the 1962 Wenner-Gren conference, the 1964 Colloquium on the Evolution of Blood Proteins in Bruges, Belgium, and the 1964 Conference on Evolving Genes and Proteins at Rutgers University, they engaged directly with the molecular biologists and biochemists, hoping to maintain the central place of Darwinian explanations in evolution as its study spread to new fields. Though not directly related to molecular evolution, the mid-1960s also saw the rise of the gene-centered view of evolution, spurred by George C. Williams's "Adaptation and Natural Selection" (1966) | Biology | https://en.wikipedia.org/wiki?curid=11863361 | History of molecular evolution | 193,978 |
History of molecular evolution Debate over units of selection, particularly the controversy over group selection, led to increased focus on individual genes (rather than whole organisms or populations) as the theoretical basis for evolution. However, the increased focus on genes did not mean a focus on molecular evolution; in fact, the adaptationism promoted by Williams and other evolutionary theorists further marginalized the apparently non-adaptive changes studied by molecular evolutionists. The intellectual threat of molecular evolution became more explicit in 1968, when Motoo Kimura introduced the neutral theory of molecular evolution. Based on the available molecular clock studies (of hemoglobin from a wide variety of mammals, cytochrome c from mammals and birds, and triosephosphate dehydrogenase from rabbits and cows), Kimura (assisted by Tomoko Ohta) calculated an average rate of DNA substitution of one base pair change per 300 base pairs (encoding 100 amino acids) per 28 million years. For mammal genomes, this indicated a substitution rate of one every 1.8 years, which would produce an unsustainably high substitution load unless the preponderance of substitutions was selectively neutral. Kimura argued that neutral mutations occur very frequently, a conclusion compatible with the results of the electrophoretic studies of protein heterozygosity. Kimura also applied his earlier mathematical work on genetic drift to explain how neutral mutations could come to fixation, even in the absence of natural selection; he soon convinced James F | Biology | https://en.wikipedia.org/wiki?curid=11863361 | History of molecular evolution | 193,979 |
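The step from Kimura's per-site rate to a genome-wide substitution interval is simple arithmetic, sketched below. The genome size is an assumption of this sketch; with a round 3 billion base pairs it gives roughly one substitution every 2.8 years, the same order of magnitude as the 1.8-year figure quoted above, which rested on different genome-size assumptions.

```python
# Back-of-the-envelope version of Kimura's genome-wide calculation.
rate_per_site = 1 / (300 * 28e6)   # substitutions per bp per year
genome_bp = 3e9                    # assumed mammalian haploid genome size
subs_per_year = rate_per_site * genome_bp
years_per_sub = 1 / subs_per_year
print(f"about one substitution every {years_per_sub:.1f} years genome-wide")
```

A substitution fixing every couple of years across the whole genome is what made a purely selective account of all substitutions look untenable to Kimura.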
History of molecular evolution Crow of the potential power of neutral alleles and genetic drift as well. Kimura's theory—described only briefly in a letter to "Nature"—was followed shortly after with a more substantial analysis by Jack L. King and Thomas H. Jukes—who titled their first paper on the subject "non-Darwinian evolution". Though King and Jukes produced much lower estimates of substitution rates and the resulting genetic load in the case of non-neutral changes, they agreed that neutral mutations driven by genetic drift were both real and significant. The fairly constant rates of evolution observed for individual proteins were not easily explained without invoking neutral substitutions (though G. G. Simpson and Emil Smith had tried). Jukes and King also found a strong correlation between the frequency of amino acids and the number of different codons encoding each amino acid. This pointed to substitutions in protein sequences as being largely the product of random genetic drift. King and Jukes' paper, especially with the provocative title, was seen as a direct challenge to mainstream neo-Darwinism, and it brought molecular evolution and the neutral theory to the center of evolutionary biology. It provided a mechanism for the molecular clock and a theoretical basis for exploring deeper issues of molecular evolution, such as the relationship between rate of evolution and functional importance. The rise of the neutral theory marked a synthesis of evolutionary biology and molecular biology—though an incomplete one | Biology | https://en.wikipedia.org/wiki?curid=11863361 | History of molecular evolution | 193,980 |
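The correlation King and Jukes examined follows from the degeneracy of the standard genetic code: under random, neutral substitution, an amino acid's frequency in proteins should track how many codons encode it. The degeneracies below are from the standard code; the comparison to observed frequencies is left out, since those would have to come from real survey data.

```python
# Number of codons per amino acid in the standard genetic code, and the
# amino acid frequencies expected if substitutions were random.
CODON_COUNTS = {
    "Ala": 4, "Arg": 6, "Asn": 2, "Asp": 2, "Cys": 2, "Gln": 2,
    "Glu": 2, "Gly": 4, "His": 2, "Ile": 3, "Leu": 6, "Lys": 2,
    "Met": 1, "Phe": 2, "Pro": 4, "Ser": 6, "Thr": 4, "Trp": 1,
    "Tyr": 2, "Val": 4,
}
total = sum(CODON_COUNTS.values())        # 61 sense codons
expected = {aa: n / total for aa, n in CODON_COUNTS.items()}
# e.g. leucine (6 codons) is predicted six times as common as tryptophan (1)
print(f"{total} sense codons; expected Leu fraction {expected['Leu']:.3f}")
```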
History of molecular evolution With their work on firmer theoretical footing, in 1971 Emile Zuckerkandl and other molecular evolutionists founded the "Journal of Molecular Evolution". The critical responses to the neutral theory that soon appeared marked the beginning of the "neutralist-selectionist debate". In short, selectionists viewed natural selection as the primary or only cause of evolution, even at the molecular level, while neutralists held that neutral mutations were widespread and that genetic drift was a crucial factor in the evolution of proteins. Kimura became the most prominent defender of the neutral theory—which would be his main focus for the rest of his career. With Ohta, he refocused his arguments on the rate at which drift could fix new mutations in finite populations, the significance of constant protein evolution rates, and the functional constraints on protein evolution that biochemists and molecular biologists had described. Though Kimura had initially developed the neutral theory partly as an outgrowth of the "classical position" within the classical/balance controversy (predicting high genetic load as a consequence of non-neutral mutations), he gradually deemphasized his original argument that segregational load would be impossibly high without neutral mutations (which many selectionists, and even fellow neutralists King and Jukes, rejected) | Biology | https://en.wikipedia.org/wiki?curid=11863361 | History of molecular evolution | 193,981 |
History of molecular evolution From the 1970s through the early 1980s, both selectionists and neutralists could explain the observed high levels of heterozygosity in natural populations, by assuming different values for unknown parameters. Early in the debate, Kimura's student Tomoko Ohta focused on the interaction between natural selection and genetic drift, which was significant for mutations that were not strictly neutral, but nearly so. In such cases, selection would compete with drift: most slightly deleterious mutations would be eliminated by natural selection or chance; some would move to fixation through drift. The behavior of this type of mutation, described by an equation that combined the mathematics of the neutral theory with classical models, became the basis of Ohta's nearly neutral theory of molecular evolution. In 1973, Ohta published a short letter in "Nature" suggesting that a wide variety of molecular evidence supported the theory that most mutation events at the molecular level are slightly deleterious rather than strictly neutral. Molecular evolutionists were finding that while rates of protein evolution (consistent with the molecular clock) were fairly independent of generation time, rates of noncoding DNA divergence were inversely proportional to generation time. Noting that population size is generally inversely proportional to generation time, Tomoko Ohta proposed that most amino acid substitutions are slightly deleterious while noncoding DNA substitutions are more neutral | Biology | https://en.wikipedia.org/wiki?curid=11863361 | History of molecular evolution | 193,982 |
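The interaction between selection and drift that Ohta studied can be made concrete with Kimura's diffusion approximation for the fixation probability of a new mutation. This is one common parameterization (conventions for the selection coefficient vary); the population size and selection coefficients below are illustrative.

```python
import math

# Fixation probability of a new mutation with selection coefficient s in
# a diploid population of effective size N, starting at frequency
# p = 1/(2N). When |4Ns| << 1 the result approaches the neutral 1/(2N):
# the "nearly neutral" regime where drift competes with selection.

def fixation_probability(N, s):
    if s == 0:
        return 1.0 / (2 * N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 1000
for s in (0.0, -1e-5, -1e-4, -1e-3):
    print(f"s={s:+.0e}  P_fix={fixation_probability(N, s):.2e}  "
          f"neutral={1 / (2 * N):.2e}")
```

For s = -1e-5 the fixation probability is barely below the neutral value, while for s = -1e-3 it is an order of magnitude smaller: slightly deleterious mutations behave almost neutrally in small populations but are efficiently removed in large ones.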
History of molecular evolution In this case, the faster rate of neutral evolution in proteins expected in small populations (due to genetic drift) is offset by longer generation times (and vice versa), but in large populations with short generation times, noncoding DNA evolves faster while protein evolution is retarded by selection (which is more significant than drift for large populations). Between then and the early 1990s, many studies of molecular evolution used a "shift model" in which the negative effect on the fitness of a population due to deleterious mutations shifts back to an original value when a mutation reaches fixation. In the early 1990s, Ohta developed a "fixed model" that included both beneficial and deleterious mutations, so that no artificial "shift" of overall population fitness was necessary. According to Ohta, however, the nearly neutral theory largely fell out of favor in the late 1980s, because the mathematically simpler neutral theory was adopted for the widespread molecular systematics research that flourished after the advent of rapid DNA sequencing. As more detailed systematics studies started to compare the evolution of genome regions subject to strong selection versus weaker selection in the 1990s, the nearly neutral theory and the interaction between selection and drift have once again become an important focus of research | Biology | https://en.wikipedia.org/wiki?curid=11863361 | History of molecular evolution | 193,983 |
History of molecular evolution While early work in molecular evolution focused on readily sequenced proteins and relatively recent evolutionary history, by the late 1960s some molecular biologists were pushing further toward the base of the tree of life by studying highly conserved nucleic acid sequences. Carl Woese, a molecular biologist whose earlier work was on the genetic code and its origin, began using small subunit ribosomal RNA to reclassify bacteria by genetic (rather than morphological) similarity. Work proceeded slowly at first, but accelerated as new sequencing methods were developed in the 1970s and 1980s. By 1977, Woese and George Fox announced that some bacteria, such as methanogens, lacked the rRNA units that Woese's phylogenetic studies were based on; they argued that these organisms were actually distinct enough from conventional bacteria and the so-called higher organisms to form their own kingdom, which they called archaebacteria. Though controversial at first (and challenged again in the late 1990s), Woese's work became the basis of the modern three-domain system of Archaea, Bacteria, and Eukarya (replacing the five-kingdom system that had emerged in the 1960s). Work on microbial phylogeny also brought molecular evolution closer to cell biology and origin of life research. The differences between archaea pointed to the importance of RNA in the early history of life | Biology | https://en.wikipedia.org/wiki?curid=11863361 | History of molecular evolution | 193,984 |
History of molecular evolution In his work with the genetic code, Woese had suggested RNA-based life had preceded the current forms of DNA-based life, as had several others before him—an idea that Walter Gilbert would later call the "RNA world". In many cases, genomics research in the 1990s produced phylogenies contradicting the rRNA-based results, leading to the recognition of widespread lateral gene transfer across distinct taxa. Combined with the probable endosymbiotic origin of organelle-filled eukarya, this pointed to a far more complex picture of the origin and early history of life, one which might not be describable in the traditional terms of common ancestry. | Biology | https://en.wikipedia.org/wiki?curid=11863361 | History of molecular evolution | 193,985 |
Netherlands Entomological Society The Netherlands Entomological Society (abbreviated NEV, after its Dutch name) was founded in 1845 for the purpose of improving and promoting entomology in the Netherlands. The society has more than 600 members. | Biology | https://en.wikipedia.org/wiki?curid=11863568 | Netherlands Entomological Society | 193,986 |
Haar-like feature Haar-like features are digital image features used in object recognition. They owe their name to their intuitive similarity with Haar wavelets and were used in the first real-time face detector. Historically, working with only image intensities (i.e., the RGB pixel values at each and every pixel of the image) made the task of feature calculation computationally expensive. A publication by Papageorgiou et al. discussed working with an alternate feature set based on Haar wavelets instead of the usual image intensities. Paul Viola and Michael Jones adapted the idea of using Haar wavelets and developed the so-called Haar-like features. A Haar-like feature considers adjacent rectangular regions at a specific location in a detection window, sums up the pixel intensities in each region and calculates the difference between these sums. This difference is then used to categorize subsections of an image. For example, with a human face, it is a common observation that among all faces the region of the eyes is darker than the region of the cheeks. Therefore, a common Haar feature for face detection is a set of two adjacent rectangles that lie above the eye and the cheek region. The position of these rectangles is defined relative to a detection window that acts like a bounding box to the target object (the face in this case). In the detection phase of the Viola–Jones object detection framework, a window of the target size is moved over the input image, and for each subsection of the image the Haar-like feature is calculated | Biology | https://en.wikipedia.org/wiki?curid=11864935 | Haar-like feature | 193,987 |
Haar-like feature This difference is then compared to a learned threshold that separates non-objects from objects. Because such a Haar-like feature is only a weak learner or classifier (its detection quality is slightly better than random guessing), a large number of Haar-like features are necessary to describe an object with sufficient accuracy. In the Viola–Jones object detection framework, the Haar-like features are therefore organized in something called a "classifier cascade" to form a strong learner or classifier. The key advantage of a Haar-like feature over most other features is its calculation speed. Due to the use of "integral images", a Haar-like feature of any size can be calculated in constant time (approximately 60 microprocessor instructions for a 2-rectangle feature). A simple rectangular Haar-like feature can be defined as the difference of the sum of pixels of areas inside the rectangle, which can be at any position and scale within the original image. This modified feature set is called a "2-rectangle feature". Viola and Jones also defined 3-rectangle features and 4-rectangle features. The values indicate certain characteristics of a particular area of the image. Each feature type can indicate the existence (or absence) of certain characteristics in the image, such as edges or changes in texture. For example, a 2-rectangle feature can indicate where the border lies between a dark region and a light region. One of the contributions of Viola and Jones was to use summed-area tables, which they called "integral images" | Biology | https://en.wikipedia.org/wiki?curid=11864935 | Haar-like feature | 193,988
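The 2-rectangle feature and its thresholded weak classifier described above can be sketched as follows. This is a minimal illustration, not the actual Viola–Jones code: the rectangle layout, example values and threshold are invented, and naive pixel sums are used for clarity (the real framework computes them in constant time with integral images).

```python
# Sketch of a Haar-like 2-rectangle feature and a thresholded weak
# classifier. Pixel sums are computed naively here for readability.

def rect_sum(image, x, y, w, h):
    """Sum of pixel intensities in the rectangle at (x, y) of size w x h."""
    return sum(image[r][c] for r in range(y, y + h) for c in range(x, x + w))

def two_rect_feature(image, x, y, w, h):
    """2-rectangle feature: pixel sum of the upper half minus the lower half."""
    upper = rect_sum(image, x, y, w, h // 2)
    lower = rect_sum(image, x, y + h // 2, w, h // 2)
    return upper - lower

def weak_classify(value, threshold, polarity=1):
    """Weak learner: predicts 1 ('object') when polarity*value < polarity*threshold."""
    return 1 if polarity * value < polarity * threshold else 0
```

For a face-like window in which the eye region (upper half) is darker than the cheek region (lower half), the feature value is negative, so a threshold near zero separates such windows from uniform ones.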
Haar-like feature Integral images can be defined as two-dimensional lookup tables in the form of a matrix of the same size as the original image. Each element of the integral image contains the sum of all pixels located in the up-left region of the original image (in relation to the element's position). This allows the sum of any rectangular area in the image, at any position or scale, to be computed using only four lookups: for a rectangle with corners A (top-left), B (top-right), C (bottom-left) and D (bottom-right), the sum is I(D) + I(A) − I(B) − I(C), where I is the integral image. Each Haar-like feature may need more than four lookups, depending on how it was defined. Viola and Jones's 2-rectangle features need six lookups, 3-rectangle features need eight lookups, and 4-rectangle features need nine lookups. Lienhart and Maydt introduced the concept of a tilted (45°) Haar-like feature. This was used to increase the dimensionality of the set of features in an attempt to improve the detection of objects in images. This was successful, as some of these features are able to describe the object in a better way. For example, a 2-rectangle tilted Haar-like feature can indicate the existence of an edge at 45°. Messom and Barczak extended the idea to a generic rotated Haar-like feature. Although the idea is sound mathematically, practical problems prevent the use of Haar-like features at any angle. In order to be fast, detection algorithms use low-resolution images, introducing rounding errors. For this reason rotated Haar-like features are not commonly used. | Biology | https://en.wikipedia.org/wiki?curid=11864935 | Haar-like feature | 193,989
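A minimal sketch of the integral image and the four-lookup rectangle sum described above, in pure Python with an inclusive-corner convention (function names are illustrative):

```python
# Build a summed-area table ("integral image") and query rectangle sums
# with at most four lookups. ii[y][x] holds the sum of all pixels at or
# above and to the left of (x, y) in the original image, inclusive.

def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top..bottom][left..right] using at most four lookups."""
    total = ii[bottom][right]              # whole region up to bottom-right
    if top > 0:
        total -= ii[top - 1][right]        # subtract strip above the rectangle
    if left > 0:
        total -= ii[bottom][left - 1]      # subtract strip to the left
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]     # re-add the doubly subtracted corner
    return total
```

Once `ii` is built in a single pass over the image, `rect_sum` runs in constant time regardless of rectangle size, which is what makes evaluating thousands of Haar-like features per detection window affordable.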
Azoxystrobin is the ISO common name for an organic compound that is used as a pesticide. It is a broad-spectrum systemic fungicide widely used in agriculture to protect crops from fungal diseases. It was first marketed in 1996 using the brand name Amistar and by 1999 it had been registered in 48 countries on more than 50 crops. In the year 2000 it was announced that it had been granted UK Millennium product status. In 1977, academic research groups in Germany published details of two new antifungal antibiotics they had isolated from the basidiomycete fungus "Strobilurus tenacellus". They named these strobilurin A and B but did not provide detailed structures, only data based on their high-resolution mass spectra, which showed that the simpler of the two had molecular formula CHO. In the following year, further details including structures were published, and a related fungicide, oudemansin from the fungus "Oudemansiella mucida", whose identity had been determined by X-ray crystallography, was disclosed. When the fungicidal effects were shown to stem from what was then a novel mode of action, chemists at the Imperial Chemical Industries (ICI) research site at Jealott's Hill became interested in using them as leads to develop new fungicides suitable for use in agriculture. The first task was to synthesize a sample of strobilurin A for testing | Biology | https://en.wikipedia.org/wiki?curid=11880615 | Azoxystrobin | 193,990
Azoxystrobin In doing so, it was discovered that the structure that had been published was incorrect in the stereochemistry of one of the double bonds: the strobilurins, in fact, have the E,Z,E not E,E,E configuration. Once this was realised and the correct material was made and tested it was shown, as expected, to be active "in vitro" but insufficiently stable to light to be active in the glasshouse. A large programme of chemistry to make analogues was begun when it was discovered that a new stilbene structure containing the β-methoxyacrylate portion (shown in blue and believed to be the toxophore) had good activity in glasshouse tests but still lacked sufficient photostability. After more than 1400 analogues had been made and tested the team chose azoxystrobin for commercialisation and it was developed under the code number ICIA5504. First sales were in 1996 using the brand name Amistar: it then gained fast-track registration in the United States, where it was marketed in 1997 as Heritage. By 1999 it had been registered in 48 countries on more than 50 crops. In the year 2000 it was announced that it had been granted Millennium product status by the UK Prime Minister, Tony Blair, as within three years it had become the world’s best-selling fungicide. Meanwhile, BASF scientists who were collaborating with the German academic groups that had discovered strobilurin A had independently invented kresoxim-methyl, which was also launched in 1996. The first synthesis of azoxystrobin was disclosed in patents filed by the ICI group | Biology | https://en.wikipedia.org/wiki?curid=11880615 | Azoxystrobin | 193,991
Azoxystrobin The sequence of the two substitution reactions allows the first intermediate to be used to create a diverse range of analogues in which the part remote from the toxophore can be varied. The final choice of 2-cyanophenol in the second step of the synthesis was made after many other alternatives had been tested for their fungicidal properties. The crystal structure was published in 2008. Azoxystrobin and other strobilurins inhibit mitochondrial respiration by blocking electron transport. They bind at the quinol outer binding site of the cytochrome b-c1 complex, where ubiquinone (coenzyme Q10) would normally bind when carrying electrons to that protein. Thus production of ATP is prevented. The generic name for this mode of action is "Quinone Outside Inhibitors" (QoI). Azoxystrobin is made available to end-users only in formulated products. Since the active ingredient has moderate solubility in water, formulations aid its use in water-based sprays by creating an emulsion when diluted. Modern products use non-powdery formulations with reduced or no use of hazardous solvents, for example suspension concentrates. The fungicide is compatible with many other pesticides and adjuvants when mixed by the farmer for spraying. Azoxystrobin is a xylem-mobile systemic fungicide with translaminar, protectant and curative properties. In cereal crops, its main outlet, the length of disease control is generally about four to six weeks during the period of active stem elongation | Biology | https://en.wikipedia.org/wiki?curid=11880615 | Azoxystrobin | 193,992
Azoxystrobin All pesticides are required to seek registration from appropriate authorities in the country in which they will be used. In the United States, the Environmental Protection Agency (EPA) is responsible for regulating pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Food Quality Protection Act (FQPA). A pesticide can only be used legally according to the directions on the label that is included at the time of the sale of the pesticide. The purpose of the label is "to provide clear directions for effective product performance while minimizing risks to human health and the environment". A label is a legally binding document that mandates how the pesticide can and must be used, and failure to follow the label as written when using the pesticide is a federal offence. Within the European Union, a 2-tiered approach is used for the approval and authorisation of pesticides. Firstly, before a formulated product can be developed for market, the active substance must be approved for the European Union. After this has been achieved, authorisation for the specific product must be sought from every Member State that the applicant wants to sell it in. Afterwards, there is a monitoring programme to make sure the pesticide residues in food are below the limits set by the European Food Safety Authority. Azoxystrobin possesses a broad spectrum of activity, in common with other QoI inhibitors. Examples of the fungal groups on which it is effective are Ascomycota, Deuteromycota, Basidiomycota and Oomycota | Biology | https://en.wikipedia.org/wiki?curid=11880615 | Azoxystrobin | 193,993
Azoxystrobin In addition, its properties mean that it can move systemically through plant tissue to protect parts of the crop that were not in contact with the spray. This combination of properties has meant that it achieved widespread use very quickly and has reached annual sales of more than $500 million. Important diseases which it controls include leaf spot, rusts, powdery mildew, downy mildew, net blotch and blight. Worldwide, azoxystrobin is registered for use on all important crops. For example, in the European Union and United States, it is registered for use in wheat, barley, oats, rye, soya, cotton, rice, strawberry, peas, beans, onions and many other vegetables. The advantage to the farmer comes in the form of improved yield at harvest. Farmers can act in their best economic interest: the value of the additional yield can be estimated and the total cost of using the fungicide informs the decision to purchase. This cost-benefit analysis by the end user sets a maximum price which the supplier can demand, and in practice pesticide prices fluctuate according to the current market value of the crops in which they are used. The estimated annual use of azoxystrobin in US agriculture is mapped by the US Geological Survey and shows an increasing trend from its introduction in 1997 to 2016, the latest date for which figures are available. One of the earliest uses of azoxystrobin was to control fungal diseases of turf and it has been used on golf courses and lawns | Biology | https://en.wikipedia.org/wiki?curid=11880615 | Azoxystrobin | 193,994
Azoxystrobin It is now available for domestic markets under brand names such as Heritage and Azoxy 2SC. Azoxystrobin is added to mold-resistant Purple wallboards (optiSHIELD AT, a mixture of azoxystrobin and thiabendazole) and can leach into house dust, providing a source of life-long exposure for children and adults. Mammalian toxicity studies were performed with a vehicle that did not dissolve azoxystrobin and related strobilurins, so the LD50 of over 5000 mg/kg (rats, oral) must be re-evaluated. It can cause skin and eye irritation. First aid information is included with the label. The World Health Organization (WHO) and Food and Agriculture Organization (FAO) joint meeting on pesticide residues has determined that the acceptable daily intake for azoxystrobin is 0-0.2 mg/kg bodyweight per day. The Codex Alimentarius database maintained by the FAO lists the maximum residue limits for azoxystrobin in various food products. Azoxystrobin is categorized as having a low potential for bioconcentration and as being of moderate risk to fish, earthworms and bees but of high risk to aquatic crustaceans, so care must be taken to avoid runoff into water bodies. Its main degradation product, the carboxylic acid resulting from hydrolysis of its methyl ester, is also potentially harmful to aquatic environments. The benefits and risks of use of QoI fungicides have been reviewed and there is extensive literature on azoxystrobin's environmental profile | Biology | https://en.wikipedia.org/wiki?curid=11880615 | Azoxystrobin | 193,995
Azoxystrobin Ultimately it is the regulatory authorities in each country who must weigh up the benefits to end users and balance these against the compound's inherent hazards and consequent risks to consumers and the wider environment. Fungal populations have the ability to develop resistance to QoI inhibitors. This potential can be mitigated by careful management. Reports of individual pest species becoming resistant to azoxystrobin are monitored by manufacturers, regulatory bodies such as the EPA and the Fungicides Resistance Action Committee (FRAC). In some cases, the risks of resistance developing can be reduced by using a mixture of two or more fungicides which each have activity on relevant pests but with unrelated mechanisms of action. FRAC assigns fungicides into classes so as to facilitate this. On cereal crops in the USA, for example, azoxystrobin may only be used in mixture, usually with an azole fungicide such as difenoconazole. By international convention and in many countries the law, pesticide labels are required to include the common name of the active ingredients. These names are not the exclusive property of the holder of any patent or trademark and as such they are the easiest way for non-experts to refer to individual chemicals. Companies selling pesticides normally do so using a brand name or wordmark which allows them to distinguish their product from competitor products having the same active ingredient | Biology | https://en.wikipedia.org/wiki?curid=11880615 | Azoxystrobin | 193,996 |
Azoxystrobin In many cases, this branding is country and formulation-specific so after several years of sales there can be multiple brand names for a given active ingredient. The situation is made even more complicated when companies license their ingredients to others, as is often done. In addition, the product may be pre-mixed with other pesticides under a new brand name. It is therefore difficult to provide a comprehensive list of brand names for products containing azoxystrobin. They include Amistar, Abound, Heritage, Olympus, Ortiva, Priori Xtra and Quadris. Suppliers and brand names in the United States are listed in the National Pesticide Information Retrieval System. | Biology | https://en.wikipedia.org/wiki?curid=11880615 | Azoxystrobin | 193,997 |
Translocated promoter region is a component of the tpr-met fusion protein. | Biology | https://en.wikipedia.org/wiki?curid=11882025 | Translocated promoter region | 193,998 |
Tpr-met fusion protein Tpr-Met fusion protein is an oncogene fusion protein consisting of TPR and MET. Tpr-Met was generated following a chromosomal rearrangement induced by the treatment of a human osteogenic sarcoma cell line with the carcinogen "N"-methyl-"N"'-nitro-"N"-nitrosoguanidine. The genomic rearrangement fuses two genetic loci: "translocated promoter region" from chromosome 1q25, which encodes a dimerization leucine zipper motif, and "MET" from chromosome 7q31, which contributes the kinase domain and carboxy-terminus of the Met RTK. The resulting 65 kDa cytoplasmic Tpr-Met oncoprotein forms a dimer mediated through the Tpr leucine zipper. The Tpr-Met fusion protein lacks the extracellular, transmembrane and juxtamembrane domains of the c-Met receptor, and has gained the Tpr dimerization motif, which allows constitutive and ligand-independent activation of the kinase. The loss of juxtamembrane sequences, necessary for the negative regulation of kinase activity and receptor degradation, prolongs the duration of Met signalling. Specific expression of Tpr-Met in terminally-differentiated skeletal muscle causes muscle wasting "in vivo" and exerts anti-differentiation effects in terminally differentiated myotubes. Constitutive activation of MET signaling has been suggested to cause defects in myogenic differentiation, contributing to rhabdomyosarcoma development and progression. In a transgenic model, cardiac-specific expression of the Tpr-Met oncogene during postnatal life causes heart failure with early onset. | Biology | https://en.wikipedia.org/wiki?curid=11882041 | Tpr-met fusion protein | 193,999
Hfq protein The Hfq protein (also known as HF-I protein), encoded by the hfq gene, was discovered in 1968 as an "Escherichia coli" host factor that was essential for replication of the bacteriophage Qβ. It is now clear that Hfq is an abundant bacterial RNA-binding protein which has many important physiological roles that are usually mediated by interacting with Hfq-binding sRNAs. In "E. coli", Hfq mutants show multiple stress-response-related phenotypes. The Hfq protein is now known to regulate the translation of two major stress transcription factors (σS (RpoS) and σE (RpoE)) in Enterobacteria. It also regulates sRNAs in "Vibrio cholerae", a specific example being MicX sRNA. In "Salmonella typhimurium", Hfq has been shown to be an essential virulence factor as its deletion attenuates the ability of "S. typhimurium" to invade epithelial cells, secrete virulence factors or survive in cultured macrophages. In "Salmonella", Hfq deletion mutants are also non-motile and exhibit chronic activation of the sigma-mediated envelope stress response. A CLIP-Seq study of Hfq in "Salmonella" has revealed 640 binding sites across the "Salmonella" transcriptome. The majority of these binding sites was found in mRNAs and sRNAs. In "Photorhabdus luminescens", a deletion of the "hfq" gene causes loss of secondary metabolite production. Hfq mediates its pleiotropic effects through several mechanisms. It interacts with regulatory sRNAs and facilitates their antisense interaction with their targets | Biology | https://en.wikipedia.org/wiki?curid=11882145 | Hfq protein | 194,000
Hfq protein It also acts independently to modulate mRNA decay (directing mRNA transcripts for degradation) and also acts as a repressor of mRNA translation. Genomic SELEX has been used to show that Hfq-binding RNAs are enriched in the sequence motif 5'-AAYAAYAA-3'. Hfq was also found to act on ribosome biogenesis in "E. coli", specifically on the 30S subunit. Hfq mutants accumulate higher levels of immature small subunits and show decreased translation accuracy. This function on the bacterial ribosome could also account for the pleiotropic effect typical of Hfq deletion strains. Electron microscopy imaging reveals that, in addition to the expected localization of this protein in cytoplasmic regions and in the nucleoid, an important fraction of Hfq is located in close proximity to the membrane. Six crystallographic structures of four different Hfq proteins have been published so far: "E. coli" Hfq (), "P. aeruginosa" Hfq in a low-salt condition () and a high-salt condition (), Hfq from "S. aureus" with bound RNA () and without (), and the Hfq(-like) protein from "M. jannaschii" (). All six structures confirm the hexameric ring shape of the complex. | Biology | https://en.wikipedia.org/wiki?curid=11882145 | Hfq protein | 194,001
OPIE (Entomology) Office pour l'Information Eco-entomologique (abbreviated OPIE or OPIE-LR, English: Office for Entomological Information) is a French government organisation based in Guyancourt devoted to entomology, especially applied entomology. | Biology | https://en.wikipedia.org/wiki?curid=11882557 | OPIE (Entomology) | 194,002 |
Betacellulin is a protein that in humans is encoded by the "BTC" gene, located on chromosome 4 at locus 4q13-q21. Betacellulin is a member of the epidermal growth factor (EGF) family that was first detected in the conditioned medium of cell lines derived from a mouse pancreatic beta-cell tumor. Sequencing of the purified protein and of a cloned cDNA supported the claim that betacellulin is a new ligand of the epidermal growth factor receptor (EGFR). As an EGFR ligand, betacellulin is expressed by many muscle and other tissue types; it also has a potent mitogenic effect on retinal pigment epithelial cells and vascular smooth muscle cells. While many studies attest a role for betacellulin in the differentiation of pancreatic β-cells, the last decade witnessed the association of betacellulin with many additional biological processes, ranging from reproduction to the control of neural stem cells. Betacellulin is a member of the EGF family of growth factors. It is synthesized primarily as a transmembrane precursor, which is then processed to the mature molecule by proteolytic events. This protein is a ligand for the EGF receptor. As a typical EGFR ligand, betacellulin is expressed by a variety of cell types and tissues, and the soluble growth factor is proteolytically cleaved from a larger membrane-anchored precursor | Biology | https://en.wikipedia.org/wiki?curid=11885910 | Betacellulin | 194,003
Betacellulin stimulated the proliferation of retinal pigment epithelial and vascular smooth muscle cells at a concentration of about 30 pM (1 ng/ml) but did not stimulate the growth of several other cell types, such as endothelial cells and fetal lung fibroblasts. Betacellulin binds to and activates the epidermal growth factor receptor, leading to phosphorylation of its tyrosine residues. Osteoblasts, which are responsible for forming and mineralizing osteoid, express EGF receptors and alter rates of proliferation and differentiation in response to EGF receptor activation. Transgenic mice over-expressing the EGF-like ligand betacellulin (BTC) exhibit increased cortical bone deposition; however, because the transgene is ubiquitously expressed in these mice, the identity of the cells affected by BTC and responsible for increased cortical bone thickness remains unknown. One study therefore examined the influence of BTC upon mesenchymal stem cell (MSC) and pre-osteoblast differentiation and proliferation. BTC decreases the expression of osteogenic markers in both MSCs and pre-osteoblasts; interestingly, increases in proliferation require hypoxia-inducible factor-alpha (HIF-alpha), as an HIF antagonist prevents BTC-driven proliferation. Both MSCs and pre-osteoblasts express the EGF receptors ErbB1, ErbB2, and ErbB3, with no change in expression under osteogenic differentiation. These are the first data to demonstrate an influence of BTC upon MSCs and the first to implicate HIF-alpha in BTC-mediated proliferation | Biology | https://en.wikipedia.org/wiki?curid=11885910 | Betacellulin | 194,004
Betacellulin The role of betacellulin is thus flexible, with the response depending on where it binds. BTC is a polypeptide of about 62-111 amino acid residues. Secondary structure: 6% helical (1 helix; 3 residues), 36% beta sheet (5 strands; 18 residues) | Biology | https://en.wikipedia.org/wiki?curid=11885910 | Betacellulin | 194,005
Carpus and tarsus of land vertebrates The carpus (wrist) and tarsus (ankle) of land vertebrates primitively had three rows of carpal or tarsal bones. Often some of these have become lost or fused in evolution. | Biology | https://en.wikipedia.org/wiki?curid=11894014 | Carpus and tarsus of land vertebrates | 194,006 |
Cellular microarray A cellular microarray (or cell microarray) is a laboratory tool that allows for the multiplex interrogation of living cells on the surface of a solid support. The support, sometimes called a "chip", is spotted with varying materials, such as antibodies, proteins, or lipids, which can interact with the cells, leading to their capture on specific spots. Combinations of different materials can be spotted in a given area, allowing not only cellular capture, when a specific interaction exists, but also the triggering of a cellular response, a change in phenotype, or the detection of a response from the cell, such as a specific secreted factor. A large number of types of cellular microarrays exist. | Biology | https://en.wikipedia.org/wiki?curid=11894889 | Cellular microarray | 194,007
Calcitroic acid (1α-hydroxy-23-carboxy-24,25,26,27-tetranorvitamin D) is a major metabolite of 1α,25-dihydroxyvitamin D (calcitriol). Often synthesized in the liver and kidneys, calcitroic acid is generated in the body after vitamin D is first converted into calcitriol, an intermediate in the fortification of bone through the formation and regulation of calcium in the body. These pathways managed by calcitriol are thought to be inactivated through its hydroxylation by the enzyme CYP24A1, also called calcitriol 24-hydroxylase. Specifically, this is thought to be the major route for inactivating vitamin D metabolites. Hydroxylation and further metabolism of calcitriol in the liver and the kidneys yields calcitroic acid, a water-soluble compound that is excreted in bile. A recent review suggested that current knowledge of calcitroic acid is limited, and more studies are needed to identify its physiological role. In cases where higher concentrations of this acid were used in vitro, studies determined that calcitroic acid binds to the vitamin D receptor (VDR) and induces gene transcription. In vivo, studies determined that calcitroic acid, along with citrulline, may be used to quantify the amount of ionizing radiation an individual has been exposed to. | Biology | https://en.wikipedia.org/wiki?curid=11895977 | Calcitroic acid | 194,008
Diallel cross A diallel cross is a mating scheme used by plant and animal breeders, as well as geneticists, to investigate the genetic underpinnings of quantitative traits. In a full diallel, all parents are crossed to make hybrids in all possible combinations. Variations include half diallels, with or without parents, which omit reciprocal crosses. Full diallels require twice as many crosses and entries in experiments, but allow testing for maternal and paternal effects. If such "reciprocal" effects are assumed to be negligible, then a half diallel without reciprocals can be effective. Common analysis methods utilize general linear models to identify heterotic groups, estimate general or specific combining ability, interactions with testing environments and years, or estimates of additive, dominant, and epistatic genetic effects and genetic correlations. There are four main types of diallel mating design: the full diallel with parents, the half diallel with parents, the full diallel without parents, and the half diallel without parents (Griffing's methods 1-4, respectively). | Biology | https://en.wikipedia.org/wiki?curid=11899236 | Diallel cross | 194,009
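The designs above differ in how many entries an experiment requires. Assuming the usual definitions (selfed parents counted as entries, reciprocal crosses doubling the cross count), the combinatorics can be sketched as follows; the function and key names are illustrative:

```python
# Number of entries (crosses plus any selfed parents) required for
# n parents under the four common diallel designs. This sketches the
# combinatorics only, not an analysis routine.

def diallel_entries(n):
    return {
        "full_with_parents": n * n,               # all crosses + reciprocals + selfs
        "half_with_parents": n * (n + 1) // 2,    # one cross per pair, plus selfs
        "full_without_parents": n * (n - 1),      # reciprocals kept, no selfs
        "half_without_parents": n * (n - 1) // 2, # one cross per pair, no selfs
    }
```

For n = 5 parents this gives 25, 15, 20 and 10 entries respectively; note that the full diallel without parents needs exactly twice as many crosses as the corresponding half diallel, consistent with the statement above.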
Stem Cell Research Enhancement Act was the name of two similar bills that both passed through the United States House of Representatives and Senate, but were both vetoed by President George W. Bush and were not enacted into law. The Stem Cell Research Enhancement Act of 2005 was the first bill ever vetoed by United States President George W. Bush, more than five years after his inauguration. The bill, which passed both houses of Congress, but by less than the two-thirds majority needed to override the veto, would have allowed federal funding of stem cell research on new lines of stem cells derived from discarded human embryos created for fertility treatments. The bill passed the House of Representatives by a vote of 238 to 194 on May 24, 2005, then passed the Senate by a vote of 63 to 37 on July 18, 2006. President Bush vetoed the bill on July 19, 2006. The House of Representatives then failed to override the veto (235 to 193) on July 19, 2006. The Stem Cell Research Enhancement Act of 2007 was proposed federal legislation that would have amended the Public Health Service Act to provide for human embryonic stem cell research. It was similar in content to the vetoed Act of 2005. The bill passed the Senate on April 11, 2007 by a vote of 63-34, then passed the House on June 7, 2007 by a vote of 247-176. President Bush vetoed the bill on June 19, 2007, and an override was not attempted. The bill was re-introduced in the 111th Congress. It was introduced in the House by Representative Diana DeGette (D-CO) on February 4, 2009. A Senate version was introduced by Tom Harkin (D-IA) on February 26, 2009 | Biology | https://en.wikipedia.org/wiki?curid=11902016 | Stem Cell Research Enhancement Act | 194,010
Stem Cell Research Enhancement Act The House bill had 113 co-sponsors and the Senate 10 co-sponsors, as of November 20, 2009. | Biology | https://en.wikipedia.org/wiki?curid=11902016 | Stem Cell Research Enhancement Act | 194,011 |
Artificial gene synthesis, or gene synthesis, refers to a group of methods that are used in synthetic biology to construct and assemble genes from nucleotides "de novo". Unlike DNA synthesis in living cells, artificial gene synthesis does not require template DNA, allowing virtually any DNA sequence to be synthesized in the laboratory. It comprises two main steps, the first of which is solid-phase DNA synthesis, sometimes known as 'DNA printing'. This produces oligonucleotide fragments that are generally under 200 base pairs. The second step then involves connecting these oligonucleotide fragments using various DNA assembly methods. Because artificial gene synthesis does not require template DNA, it is theoretically possible to make completely synthetic DNA molecules with no limits on the nucleotide sequence or size. Synthesis of the first complete gene, a yeast tRNA, was demonstrated by Har Gobind Khorana and coworkers in 1972. Synthesis of the first peptide- and protein-coding genes was performed in the laboratories of Herbert Boyer and Alexander Markham, respectively. More recently, artificial gene synthesis methods have been developed that will allow the assembly of entire chromosomes and genomes. The first synthetic yeast chromosome was synthesised in 2014, and entire functional bacterial chromosomes have also been synthesised. In addition, artificial gene synthesis could in the future make use of novel nucleobase pairs (unnatural base pairs) | Biology | https://en.wikipedia.org/wiki?curid=11913227 | Artificial gene synthesis | 194,012
Artificial gene synthesis Oligonucleotides are chemically synthesized using building blocks called nucleoside phosphoramidites. These can be normal or modified nucleosides which have protecting groups to prevent their amines, hydroxyl groups and phosphate groups from interacting incorrectly. One phosphoramidite is added at a time: the 5' hydroxyl group is deprotected, a new base is added, and so on. The chain grows in the 3' to 5' direction, which is backwards relative to biosynthesis. At the end, all the protecting groups are removed. Nevertheless, because this is a chemical process, several incorrect interactions occur, leading to some defective products. The longer the oligonucleotide sequence that is being synthesized, the more defects there are, so this process is only practical for producing short sequences of nucleotides. The current practical limit is about 200 bp (base pairs) for an oligonucleotide with sufficient quality to be used directly for a biological application. HPLC can be used to isolate products with the proper sequence. Meanwhile, a large number of oligos can be synthesized in parallel on gene chips. For optimal performance in subsequent gene synthesis procedures, they should be prepared individually and in larger scales. Usually, a set of individually designed oligonucleotides is made on automated solid-phase synthesizers, purified and then connected by specific annealing and standard ligation or polymerase reactions | Biology | https://en.wikipedia.org/wiki?curid=11913227 | Artificial gene synthesis | 194,013
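The drop-off in quality with length can be made concrete with a simple yield estimate. Assuming a hypothetical fixed per-step coupling efficiency (the 0.99 value below is illustrative; real efficiencies depend on the chemistry and instrument), the fraction of chains reaching full length shrinks exponentially:

```python
# Expected fraction of full-length product for an oligonucleotide of a
# given length, assuming each coupling step succeeds independently with
# a fixed probability. A length-n oligo needs n-1 successful couplings.
# The default efficiency of 0.99 is illustrative, not a measured value.

def full_length_fraction(length, coupling_efficiency=0.99):
    return coupling_efficiency ** (length - 1)
```

With 99% stepwise efficiency, a 20-mer is about 83% full-length product, but a 200-mer only about 14%, consistent with the practical limit of roughly 200 bases noted above.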
Artificial gene synthesis To improve specificity of oligonucleotide annealing, the synthesis step relies on a set of thermostable DNA ligase and polymerase enzymes. To date, several methods for gene synthesis have been described, such as the ligation of phosphorylated overlapping oligonucleotides, the Fok I method and a modified form of ligase chain reaction for gene synthesis. Additionally, several PCR assembly approaches have been described. They usually employ oligonucleotides 40-50 nucleotides long that overlap each other. These oligonucleotides are designed to cover most of the sequence of both strands, and the full-length molecule is generated progressively by overlap extension (OE) PCR, thermodynamically balanced inside-out (TBIO) PCR or combined approaches. The most commonly synthesized genes range in size from 600 to 1,200 bp, although much longer genes have been made by connecting previously assembled fragments of under 1,000 bp. In this size range it is necessary to test several candidate clones, confirming the sequence of the cloned synthetic gene by automated sequencing methods. Moreover, because the assembly of the full-length gene product relies on the efficient and specific alignment of long single-stranded oligonucleotides, critical parameters for synthesis success include extended sequence regions comprising secondary structures caused by inverted repeats, extraordinarily high or low GC content, or repetitive structures | Biology | https://en.wikipedia.org/wiki?curid=11913227 | Artificial gene synthesis | 194,014
Usually these segments of a particular gene can only be synthesized by splitting the procedure into several consecutive steps and a final assembly of shorter sub-sequences, which in turn leads to a significant increase in the time and labor needed for production. The result of a gene synthesis experiment depends strongly on the quality of the oligonucleotides used. In annealing-based gene synthesis protocols, the quality of the product depends directly and exponentially on the correctness of the employed oligonucleotides. Conversely, if gene synthesis is performed with oligos of lower quality, more effort must be made in downstream quality assurance during clone analysis, which is usually done by time-consuming standard cloning and sequencing procedures. Another problem associated with all current gene synthesis methods is the high frequency of sequence errors arising from the use of chemically synthesized oligonucleotides. The error frequency increases with longer oligonucleotides, and as a consequence the percentage of correct product decreases dramatically as more oligonucleotides are used. The mutation problem could be mitigated by using shorter oligonucleotides to assemble the gene. However, all annealing-based assembly methods require the primers to be mixed together in one tube, and shorter overlaps do not always allow precise and specific annealing of complementary primers, which can inhibit formation of the full-length product.
Manual design of oligonucleotides is a laborious procedure and does not guarantee successful synthesis of the desired gene. For almost all annealing-based methods to perform optimally, the melting temperatures of the overlapping regions should be similar for all oligonucleotides. The necessary primer optimisation should be performed using specialized oligonucleotide design programs, and several solutions for automated primer design for gene synthesis have been presented. To overcome problems associated with oligonucleotide quality, several elaborate strategies have been developed, employing either separately prepared fishing oligonucleotides, mismatch-binding enzymes of the mutS family, or specific endonucleases from bacteria or phages. All of these strategies, however, increase the time and cost of gene synthesis based on the annealing of chemically synthesized oligonucleotides. Massively parallel sequencing has also been used as a tool to screen complex oligonucleotide libraries and enable the retrieval of accurate molecules. In one approach, oligonucleotides are sequenced on the 454 pyrosequencing platform and a robotic system images and picks individual beads corresponding to accurate sequences. In another approach, a complex oligonucleotide library is modified with unique flanking tags before massively parallel sequencing. Tag-directed primers then enable the retrieval of molecules with desired sequences by dial-out PCR.
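Matching the melting temperatures of overlap regions is one of the checks that design programs automate. A first-pass estimate is the Wallace rule, which is only reasonable for short oligos; the 3 °C tolerance below is an illustrative assumption:

```python
def wallace_tm(seq: str) -> int:
    """Wallace rule: Tm ≈ 2*(A+T) + 4*(G+C), in °C.
    A rough estimate, reasonable only for oligos under ~14 nt."""
    seq = seq.upper()
    return (2 * (seq.count("A") + seq.count("T"))
            + 4 * (seq.count("G") + seq.count("C")))

def overlaps_matched(overlaps, tolerance: int = 3) -> bool:
    """True if all overlap Tms lie within `tolerance` °C of one another."""
    tms = [wallace_tm(o) for o in overlaps]
    return max(tms) - min(tms) <= tolerance

print(wallace_tm("ACGTACGT"))  # → 24
```

Real design tools use nearest-neighbour thermodynamic models rather than this counting rule, but the principle of keeping all overlap Tms inside a narrow band is the same.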
Increasingly, genes are ordered in sets including functionally related genes or multiple sequence variants of a single gene. Virtually all of the therapeutic proteins in development, such as monoclonal antibodies, are optimised by testing many gene variants for improved function or expression. While traditional nucleic acid synthesis uses only the four natural bases (adenine, thymine, guanine and cytosine), oligonucleotide synthesis in the future could incorporate unnatural base pairs: artificially designed and synthesised nucleobases that do not occur in nature. In 2012, a group of American scientists led by Floyd Romesberg, a chemical biologist at the Scripps Research Institute in San Diego, California, reported that his team had designed an unnatural base pair (UBP), with the two new artificial nucleotides named d5SICS and dNaM. More technically, these artificial nucleotides bear hydrophobic nucleobases featuring two fused aromatic rings that form a (d5SICS–dNaM) complex, or base pair, in DNA. In 2014 the same team from the Scripps Research Institute reported that they had synthesized a stretch of circular DNA, known as a plasmid, containing natural T-A and C-G base pairs along with the best-performing UBP Romesberg's laboratory had designed, and inserted it into cells of the common bacterium "E. coli", which successfully replicated the unnatural base pairs through multiple generations.
This is the first known example of a living organism passing along an expanded genetic code to subsequent generations. This was achieved in part by the addition of a supportive algal gene expressing a nucleotide triphosphate transporter that efficiently imports the triphosphates of both d5SICSTP and dNaMTP into "E. coli". The natural bacterial replication pathways then use them to accurately replicate the plasmid containing d5SICS–dNaM. The successful incorporation of a third base pair is a significant breakthrough toward the goal of greatly expanding the number of amino acids that can be encoded by DNA, from the existing 20 amino acids to a theoretically possible 172, thereby expanding the potential for living organisms to produce novel proteins. In the future, these unnatural base pairs could be synthesised and incorporated into oligonucleotides via DNA printing methods. DNA printing can thus be used to produce DNA parts, which are defined as sequences of DNA that encode a specific biological function (for example, promoters, transcription regulatory sequences or open reading frames). However, because oligonucleotide synthesis typically cannot accurately produce sequences longer than a few hundred base pairs, DNA assembly methods must be employed to assemble these parts into functional genes, multi-gene circuits or even entire synthetic chromosomes or genomes.
Some DNA assembly techniques only define protocols for joining DNA parts, while others also define rules for the format of DNA parts that are compatible with them. These processes can be scaled up to enable the assembly of entire chromosomes or genomes. In recent years there has been a proliferation of DNA assembly standards, with 14 different standards developed as of 2015, each with its pros and cons. Overall, the development of DNA assembly standards has greatly facilitated the synthetic biology workflow, aided the exchange of material between research groups, and allowed for the creation of modular, reusable DNA parts. The various DNA assembly methods can be classified into three main categories: endonuclease-mediated assembly, site-specific recombination, and long-overlap-based assembly. Each group of methods has distinct characteristics with its own advantages and limitations. Endonucleases are enzymes that recognise and cleave nucleic acid segments, and they can be used to direct DNA assembly. Of the different types of restriction enzymes, type II restriction enzymes are the most commonly available and used because their cleavage sites are located near or within their recognition sites; endonuclease-mediated assembly methods exploit this property to define DNA parts and assembly protocols. The BioBricks assembly standard was described and introduced by Tom Knight in 2003 and has been constantly updated since.
Currently, the most commonly used BioBricks standard is assembly standard 10, or BBF RFC 10. BioBricks defines the prefix and suffix sequences required for a DNA part to be compatible with the BioBricks assembly method, allowing any DNA parts in the BioBricks format to be joined together. The prefix contains restriction sites for EcoRI, NotI and XbaI, while the suffix contains the SpeI, NotI and PstI restriction sites. Outside of the prefix and suffix regions, the DNA part must not contain these restriction sites. To join two BioBrick parts, one plasmid is digested with EcoRI and SpeI while the second plasmid is digested with EcoRI and XbaI. The two EcoRI overhangs are complementary and will anneal together, while SpeI and XbaI produce complementary overhangs that can also be ligated together. As the resulting plasmid contains the original prefix and suffix sequences, it can be used to join with further BioBricks parts; because of this property, the BioBricks assembly standard is said to be idempotent. However, a "scar" sequence (either TACTAG or TACTAGAG) is formed between the two fused BioBricks. This prevents BioBricks from being used to create fusion proteins, as the 6 bp scar sequence codes for a tyrosine and a stop codon, terminating translation after the first domain is expressed, while the 8 bp scar sequence causes a frameshift, preventing continuous read-through of the codons.
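A part's compatibility with the standard can be checked by scanning it for the internal restriction sites listed above. All five recognition sequences happen to be palindromic, so scanning one strand suffices. A minimal sketch:

```python
# BBF RFC 10 restriction sites (all five are palindromic)
SITES = {
    "EcoRI": "GAATTC",
    "XbaI":  "TCTAGA",
    "SpeI":  "ACTAGT",
    "PstI":  "CTGCAG",
    "NotI":  "GCGGCCGC",
}

def forbidden_sites(part: str) -> dict:
    """Return {enzyme: [positions]} for restriction sites occurring
    inside `part`; a BioBricks-compatible part should return {}."""
    part = part.upper()
    hits = {}
    for enzyme, site in SITES.items():
        positions = [i for i in range(len(part) - len(site) + 1)
                     if part.startswith(site, i)]
        if positions:
            hits[enzyme] = positions
    return hits
```

In practice, design tools remove such internal sites by introducing synonymous codon changes before synthesis.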
To offer alternative scar sequences, for example 6 bp scars or scars that do not contain stop codons, other assembly standards such as the BB-2 Assembly, BglBricks Assembly, Silver Assembly and the Freiburg Assembly were designed. While the simplest method of assembling BioBrick parts is described above, several other commonly used assembly methods offer advantages over the standard assembly. The three-antibiotic (3A) assembly allows the correct assembly to be selected via antibiotic selection, while the amplified insert assembly seeks to overcome the low transformation efficiency seen in 3A assembly. The BioBrick assembly standard has also served as inspiration for using other types of endonucleases for DNA assembly; for example, both the iBrick standard and the HomeRun vector assembly standard employ homing endonucleases instead of type II restriction enzymes. Some assembly methods make use of type IIS restriction endonucleases. These differ from other type II endonucleases in that they cut several base pairs away from their recognition site. As a result, the overhang sequence can be modified to contain any desired sequence. This gives type IIS assembly methods two advantages: they enable "scar-less" assembly and allow one-pot, multi-part assembly. Assembly methods that use type IIS endonucleases include Golden Gate and its variants. The Golden Gate assembly protocol was defined by Engler et al.
in 2008; it yields a final construct without a scar sequence, while also lacking the original restriction sites. This allows the protein to be expressed without unwanted extra protein sequences, which could negatively affect protein folding or expression. Because the BsaI restriction enzyme produces a 4 base pair overhang, up to 240 unique, non-palindromic sequences can be used for assembly.

Plasmid design and assembly

In Golden Gate cloning, each DNA fragment to be assembled is placed in a plasmid, flanked by inward-facing BsaI restriction sites containing the programmed overhang sequences. For each DNA fragment, the 3' overhang sequence is complementary to the 5' overhang of the next downstream DNA fragment. For the first fragment, the 5' overhang is complementary to the 5' overhang of the destination plasmid, while the 3' overhang of the final fragment is complementary to the 3' overhang of the destination plasmid. Such a design allows all DNA fragments to be assembled in a one-pot reaction (with all reactants mixed together), with all fragments arranged in the correct sequence. Successfully assembled constructs are selected by detecting the loss of function of a screening cassette originally in the destination plasmid.

MoClo and Golden Braid

The original Golden Gate assembly only allows a single construct to be made in the destination vector.
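The Golden Gate overhang design rules described above can be checked programmatically. In the sketch below, each overhang is written as its top-strand fusion-site sequence, so matching junctions compare equal; the part names and overhang sequences are invented for illustration:

```python
def check_golden_gate_order(fragments) -> bool:
    """Each fragment is (name, five_prime_overhang, three_prime_overhang),
    with overhangs written as top-strand fusion-site sequences.
    Verify that each fragment's 3' overhang equals the next fragment's
    5' overhang and that all junctions are distinct, so a one-pot
    reaction can only assemble the fragments in this order."""
    junctions = []
    for (_, _, right), (_, left, _) in zip(fragments, fragments[1:]):
        if right != left:
            return False
        junctions.append(right)
    return len(set(junctions)) == len(junctions)

parts = [("promoter",   "AATG", "GCTT"),
         ("orf",        "GCTT", "CGCT"),
         ("terminator", "CGCT", "TGCC")]
print(check_golden_gate_order(parts))  # → True
```

A real design check would also reject palindromic overhangs, since those can anneal to themselves.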
To enable this construct to be used in a subsequent reaction as an entry vector, the MoClo and Golden Braid standards were designed. The MoClo standard defines multiple tiers of DNA assembly: each assembly tier alternates the use of BsaI and BpiI restriction sites to minimise the number of forbidden sites, and sequential assembly for each tier is achieved by following the Golden Gate plasmid design. Overall, the MoClo standard allows the assembly of a construct containing multiple transcription units, all assembled from different DNA parts, by a series of one-pot Golden Gate reactions. One drawback of the MoClo standard, however, is that it requires the use of "dummy parts" with no biological function if the final construct requires fewer than four component parts. The Golden Braid standard, on the other hand, introduced a pairwise Golden Gate assembly standard. It uses the same tiered assembly as MoClo, but each tier only involves the assembly of two DNA fragments, i.e. a pairwise approach. Hence in each tier, pairs of genes are cloned into a destination fragment in the desired sequence, and these are subsequently assembled two at a time in successive tiers. Like MoClo, the Golden Braid standard alternates the BsaI and BpiI restriction enzymes between tiers. The development of the Golden Gate assembly method and its variants has allowed researchers to design toolkits to speed up the synthetic biology workflow. For example, EcoFlex was developed as a toolkit for "E.
coli" that uses the MoClo standard for its DNA parts, while a similar toolkit has been developed for engineering the microalga "Chlamydomonas reinhardtii". Site-specific recombination makes use of phage integrases instead of restriction enzymes, eliminating the need for restriction sites in the DNA fragments. Instead, integrases make use of unique attachment (att) sites and catalyse DNA rearrangement between the target fragment and the destination vector. The Invitrogen Gateway cloning system was invented in the late 1990s and uses two proprietary enzyme mixtures, BP clonase and LR clonase. The BP clonase mix catalyses recombination between attB and attP sites, generating hybrid attL and attR sites, while the LR clonase mix catalyses recombination of attL and attR sites to give attB and attP sites. As each enzyme mix recognises only specific att sites, recombination is highly specific and the fragments can be assembled in the desired sequence.

Vector design and assembly

Because Gateway cloning is a proprietary technology, all Gateway reactions must be carried out with the Gateway kit provided by the manufacturer. The reaction can be summarised in two steps: the first assembles the entry clones containing the DNA fragment of interest, while the second inserts this fragment of interest into the destination clone. The earliest iterations of the Gateway cloning method allowed only one entry clone to be used for each destination clone produced.
However, further research revealed that four more orthogonal att sequences could be generated, allowing the assembly of up to four different DNA fragments; this process is now known as the Multisite Gateway technology. Besides Gateway cloning, non-commercial methods using other integrases have also been developed. For example, the Serine Integrase Recombinational Assembly (SIRA) method uses the ϕC31 integrase, while the Site-Specific Recombination-based Tandem Assembly (SSRTA) method uses the "Streptomyces" phage φBT1 integrase. Other methods, like the HomeRun Vector Assembly System (HVAS), build on the Gateway cloning system and further incorporate homing endonucleases to define a protocol that could potentially support industrial synthesis of synthetic DNA constructs. A variety of long-overlap-based assembly methods have been developed in recent years. One of the most commonly used, the Gibson assembly method, was developed in 2009 and provides a one-pot DNA assembly method that does not require restriction enzymes or integrases. Other similar overlap-based assembly methods include Circular Polymerase Extension Cloning (CPEC), Sequence and Ligase Independent Cloning (SLIC) and Seamless Ligation Cloning Extract (SLiCE). Despite the presence of many overlap assembly methods, the Gibson assembly method remains the most popular.
Besides the methods listed above, other researchers have built on the concepts used in Gibson assembly and other assembly methods to develop new assembly strategies like the Modular Overlap-Directed Assembly with Linkers (MODAL) strategy and the Biopart Assembly Standard for Idempotent Cloning (BASIC) method. The Gibson assembly method is a relatively straightforward DNA assembly method, requiring only a few additional reagents: a 5'-to-3' T5 exonuclease, Phusion DNA polymerase, and Taq DNA ligase. The DNA fragments to be assembled are synthesised with overlapping 5' and 3' ends, in the order in which they are to be assembled. These reagents are mixed with the DNA fragments at 50 °C and the following reactions occur: the exonuclease chews back the 5' end of each fragment, exposing single-stranded complementary overhangs; the complementary overhangs anneal; the polymerase fills in the remaining gaps; and the ligase seals the nicks. Because the T5 exonuclease is heat labile, it is inactivated at 50 °C after the initial chew-back step. The product is thus stable, with the fragments assembled in the desired order. This one-pot protocol can assemble up to 5 different fragments accurately, while several commercial providers have kits to accurately assemble up to 15 different fragments in a two-step reaction. However, while the Gibson assembly protocol is fast and uses relatively few reagents, it requires bespoke DNA synthesis, as each fragment has to be designed to contain sequences overlapping the adjacent fragments and amplified via PCR. This reliance on PCR may also affect the fidelity of the reaction when long fragments, fragments with high GC content, or repeat sequences are used.
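The net effect of a Gibson reaction on linear fragments, shared terminal overlaps collapsed into one contiguous product, can be sketched in a few lines. This models only the sequence bookkeeping, not the exonuclease/polymerase/ligase chemistry, and the 20 bp overlap is an illustrative assumption:

```python
def gibson_product(fragments, overlap: int = 20) -> str:
    """Sketch of the outcome of a Gibson reaction: each fragment's
    3' end is identical to the next fragment's 5' end, so the product
    is the concatenation with each shared overlap counted once."""
    product = fragments[0]
    for frag in fragments[1:]:
        if product[-overlap:] != frag[:overlap]:
            raise ValueError("adjacent fragments do not share the overlap")
        product += frag[overlap:]
    return product

gene = "ATGCCGTA" * 15                             # a 120-nt toy sequence
frags = [gene[:50], gene[30:90], gene[70:]]        # 20-nt shared ends
print(gibson_product(frags) == gene)               # → True
```

This also makes the stated PCR requirement concrete: each fragment must be amplified with primers that append the overlap shared with its neighbour.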
The MODAL strategy defines overlap sequences known as "linkers" to reduce the amount of customisation needed for each DNA fragment. The linkers were designed using the R2oDNA Designer software, and the overlap regions were made 45 bp long to be compatible with Gibson assembly and other overlap assembly methods. To attach these linkers to the parts to be assembled, PCR is carried out using part-specific primers containing 15 bp prefix and suffix adaptor sequences. The linkers are then attached to the adaptor sequences via a second PCR reaction. To position the DNA fragments, the same linker is attached to the suffix of the desired upstream fragment and the prefix of the desired downstream fragment. Once the linkers are attached, Gibson assembly, CPEC, or any of the other overlap assembly methods can be used to assemble the DNA fragments in the desired order. The BASIC assembly strategy was developed in 2015 and sought to address the limitations of previous assembly techniques, incorporating six key concepts from them: standard reusable parts; single-tier format (all parts are in the same format and are assembled using the same process); idempotent cloning; parallel (multipart) DNA assembly; size independence; and automatability.

DNA parts and linker design

The DNA parts are designed and cloned into storage plasmids, with each part flanked by an integrated prefix ("i"P) and an integrated suffix ("i"S) sequence.
The "i"P and "i"S sequences contain inward-facing BsaI restriction sites, which carry overhangs complementary to the BASIC linkers. As in MODAL, the 7 standard linkers used in BASIC were designed with the R2oDNA Designer software and screened to ensure that they contain no sequences with homology to chassis genomes and no unwanted sequences such as secondary-structure sequences, restriction sites or ribosomal binding sites. Each linker sequence is split into two halves, each with a 4 bp overhang complementary to the BsaI restriction site and a 12 bp double-stranded sequence, the two halves sharing a 21 bp overlap sequence. The half that binds to the upstream DNA part is known as the suffix linker part (e.g. L1S) and the half that binds to the downstream part is known as the prefix linker part (e.g. L1P). These linkers form the basis of assembling the DNA parts together. Besides directing the order of assembly, the standard BASIC linkers can also be modified to carry out other functions. To allow idempotent assembly, linkers were also designed with additional methylated "i"P and "i"S sequences inserted to protect them from recognition by BsaI. This methylation is lost following transformation and in vivo plasmid replication, and the plasmids can then be extracted, purified, and used for further reactions.
Because the linker sequences are relatively long (45 bp for a standard linker), there is an opportunity to incorporate functional DNA sequences and so reduce the number of DNA parts needed during assembly. The BASIC assembly standard provides several linkers embedded with ribosome binding sites (RBS) of different strengths. Similarly, to facilitate the construction of fusion proteins containing multiple protein domains, several fusion linkers were designed to allow full read-through of the DNA construct. These fusion linkers code for a 15-amino-acid glycine and serine polypeptide, an ideal linker peptide for fusion proteins with multiple domains.

Assembly

There are three main steps in the assembly of the final construct. As DNA printing and DNA assembly methods have made commercial gene synthesis progressively and exponentially cheaper over the past years, artificial gene synthesis represents a powerful and flexible engineering tool for creating and designing new DNA sequences and protein functions. Besides synthetic biology, various research areas, such as those involving heterologous gene expression, vaccine development, gene therapy and molecular engineering, would benefit greatly from fast and cheap methods of synthesising DNA to code for proteins and peptides. The methods used for DNA printing and assembly have even enabled the use of DNA as an information storage medium. On June 28, 2007, a team at the J.
Craig Venter Institute published an article in "Science Express" reporting that they had successfully transplanted the natural DNA from a "Mycoplasma mycoides" bacterium into a "Mycoplasma capricolum" cell, creating a bacterium which behaved like "M. mycoides". On October 6, 2007, Craig Venter announced in an interview with the UK's "The Guardian" newspaper that the same team had artificially synthesized a modified version of the single chromosome of "Mycoplasma genitalium". The chromosome was modified to eliminate all genes which tests in live bacteria had shown to be unnecessary. The next planned step in this "minimal genome project" was to transplant the synthesized minimal genome into a bacterial cell with its old DNA removed; the resulting bacterium would be called "Mycoplasma laboratorium". The next day the Canadian bioethics group ETC Group issued a statement through their representative, Pat Mooney, saying Venter's "creation" was "a chassis on which you could build almost anything". The synthesized genome had not yet been transplanted into a working cell. On May 21, 2010, "Science" reported that the Venter group had successfully synthesized the genome of the bacterium "Mycoplasma mycoides" from a computer record and transplanted it into an existing cell of a "Mycoplasma capricolum" bacterium that had had its DNA removed. The "synthetic" bacterium was viable, i.e. capable of replicating billions of times. The team had originally planned to use the "M.
genitalium" bacterium they had previously been working with, but switched to "M. mycoides" because the latter grows much faster, which translated into quicker experiments. Venter described it as "the first species... to have its parents be a computer". The transformed bacterium was dubbed "Synthia" by ETC. A Venter spokesperson declined to confirm any breakthrough at the time. As part of the Synthetic Yeast 2.0 project, various research groups around the world have participated in a project to synthesise synthetic yeast genomes and, through this process, optimise the genome of the model organism "Saccharomyces cerevisiae". The Yeast 2.0 project applied various DNA assembly methods discussed above, and in March 2014, Jef Boeke of the Langone Medical Center at New York University revealed that his team had synthesized chromosome III of "S. cerevisiae". The procedure involved replacing the genes in the original chromosome with synthetic versions, and the finished synthetic chromosome was then integrated into a yeast cell. It required designing and creating 273,871 base pairs of DNA, fewer than the 316,667 pairs in the original chromosome. By March 2017, the synthesis of 6 of the 16 chromosomes had been completed, with synthesis of the others still ongoing.
Prix Michel-Sarrazin

The Prix Michel-Sarrazin is awarded annually in the Canadian province of Quebec by the Club de Recherches Clinique du Québec to a celebrated Québécois scientist who, through their dynamism and productivity, has contributed in an important way to the advancement of biomedical research. It is named in honour of Michel Sarrazin (1659–1734), the first Canadian scientist. Source: CRCQ | Biology | https://en.wikipedia.org/wiki?curid=11914292 | Prix Michel-Sarrazin | 194,032
Human Molecular Genetics

Human Molecular Genetics, first published in 1992, is a semimonthly peer-reviewed scientific journal published by Oxford University Press. The journal focuses on research papers on all topics related to human molecular genetics. In addition, two "special review" issues are published each year. Four professors share the title of Executive Editor of this journal: Professor Kay Davies of the University of Oxford, Professor Anthony Wynshaw-Boris of Case Western Reserve University, Timothy M. Frayling of the University of Exeter, and Eleftheria Zeggini of Helmholtz Zentrum München. The journal was first published as Volume 1, Number 1 in April 1992 by IRL Press of Oxford, England. The impact factor was 8.099 for 2006, 7.806 for 2007, and 7.249 for 2008; for 2018 it is 4.544. "Human Molecular Genetics" is indexed in: | Biology | https://en.wikipedia.org/wiki?curid=11918680 | Human Molecular Genetics | 194,033
Insecticidal soap

Insecticidal soap is used to control many plant insect pests. Soap has been used for more than 200 years as an insect control. Insecticidal soap works on direct contact with pests: the fatty acids disrupt the insect's cell membranes, causing the cells to leak their contents, so that the insect dehydrates and dies. Insecticidal soap is sprayed on plants so that the entire plant is saturated, since the insecticidal properties of soap occur only while the solution is wet. Soaps have a low mammalian toxicity and are therefore considered safe to use around children and pets, and may be used in organic farming. Insecticidal soap's active ingredient is most often a potassium salt of fatty acids. Insecticidal soaps should be based on long-chain fatty acids (10–18 carbon atoms), because shorter-chain fatty acids tend to damage the plant (phytotoxicity). Short (8-carbon) fatty-acid chains occur, for example, in coconut oil and palm oil and in soaps based on those oils. "'Green Soap' is a potassium/coconut oil soap ... [that] has also been shown to be effective, as an unlabeled insecticide, in controlling soft-bodied insects including aphids". Recommended concentrations of insecticidal soap are typically in the range of 1–2 percent soap mixed with water. One manufacturer recommends a concentration of 0.06% to 0.25% (pure soap equivalent) for most agricultural applications; another recommends concentrations of 0.5 to 1% pure soap equivalent. | Biology | https://en.wikipedia.org/wiki?curid=11921706 | Insecticidal soap | 194,034
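Mixing a spray to one of the recommended concentrations above is simple proportion arithmetic. A sketch, where the 30% concentrate strength is an invented example and not a figure from the text:

```python
def stock_needed_ml(target_pct: float, final_volume_ml: float,
                    stock_pct: float) -> float:
    """Millilitres of soap concentrate needed to mix `final_volume_ml`
    of spray at `target_pct` soap, from a concentrate of `stock_pct`."""
    return target_pct * final_volume_ml / stock_pct

# 1 L of a 1% spray from a hypothetical 30% concentrate:
print(round(stock_needed_ml(1.0, 1000, 30.0), 1))  # → 33.3
```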
In the European Union, fatty acid potassium salts are registered and allowed as an insecticide at a 2% concentration. Insecticidal soap is most effective if dissolved in soft water, since the fatty acids in soap tend to precipitate in hard water, reducing its effectiveness. Insecticidal soap is sold commercially for aphid control. Labels on these products may not always use the word soap, but they will list "potassium salts of fatty acids" or "potassium laurate" as the active ingredient. Certain types of household soaps (not synthetic detergents) are also suitable, but it may be difficult to tell the composition and water content from the label. Potassium-based soaps are typically soft or liquid. The mechanism of action is not exactly understood, and several mechanisms have been proposed. Insecticidal soap works best on soft-bodied insects and arthropods such as aphids, adelgids, mealybugs, spider mites, thrips, jumping plant lice, scale insects, whiteflies, and sawfly larvae. It can also be used on caterpillars and leafhoppers, but these large-bodied insects can be more difficult to control with soaps alone. Many pollinators and predatory insects such as lady beetles, bumblebees, and hoverflies are relatively unaffected. However, soap will kill predatory mites that may help control spider mites, and the soft-bodied aphid-eating larvae of lady beetles, lacewings, and hoverflies may be affected negatively. According to one study, a single soap application killed about 15% of lacewing and lady-beetle larvae, and about 65% of predatory mites ("Amblyseius andersoni").
Green peach aphids are difficult to control because they reproduce quickly (one adult female can deposit up to four nymphs per day) and because they tend to reside under the leaves and in leaf axils ("leaf armpits"), where they may not be wetted by a soap spray. Manufacturers indeed state that their insecticidal soaps are only suitable for controlling green peach aphids if used in combination with another insecticide, whereas the same soaps can control other aphids on their own. Among green peach aphids that are in contact with a 2% soap solution, around 95% of adults and 98% of nymphs die within 48 hours. At a 0.75% concentration, the mortality rates are reduced to 75% and 90%, respectively. Since 2011, insecticidal soap has also been approved in the United States for use against powdery mildew. In the European pesticide registration, its use as an insecticide is listed for aphids, white fly, and spider mites; at different concentrations, it may also be used against algae and moss. Insecticidal soap solution will only kill pests on contact; it has no residual action against aphids that arrive after it has dried. Therefore, infested plants must be thoroughly wetted, and repeated applications may be necessary to adequately control high populations of pests. Soap spray may damage plants, especially at higher concentrations or at temperatures above 32 °C (90 °F). Plant injury may not be apparent until two days after application. Some plant species are particularly sensitive to soap sprays.
Insecticidal soap Highly sensitive plants include: horse chestnut, Japanese maple ("Acer"), "Sorbus aucuparia" (mountain ash), cherimoya fruit, "Lamprocapnos" (bleeding heart), and sweet pea. Other sensitive plants include, for example: "Portulaca", some tomato varieties, "Crataegus" (hawthorn), cherries, plum, "Adiantum" (maidenhair fern), "Euphorbia milii" (crown of thorns), "Lantana camara", "Tropaeolum" (nasturtium), "Gardenia jasminoides", and "Lilium longiflorum" (Easter lily). Conifers under (drought) stress or with tender new growth are sensitive as well. Damage may occur as yellow or brown spotting on the leaves, burned tips, or leaf scorch. Plants under drought stress, young transplants, unrooted cuttings and plants with soft young growth tend to be more sensitive. Sensitivity may be tested on a small portion of a plant or plot before a full-scale application. One manufacturer recommends that applications be made at 7- to 14-day intervals, with a maximum of three applications, as repeated applications may aggravate phytotoxicity. In addition, water conditioning agents can increase phytotoxicity. Thanks to its low mammalian toxicity, application of insecticidal soap is typically allowed up to the day of harvest. | Biology | https://en.wikipedia.org/wiki?curid=11921706 | Insecticidal soap | 194,037
Instituto Biológico Biological Institute ("Instituto Biológico" in Portuguese) is an applied research center organised in 1924 in São Paulo, Brazil. It is a governmental organisation concerned with the prevention of zoonoses and foodborne animal pathogens such as rabies and tuberculosis, with sanitary awareness campaigns, and with alternatives to the chemical control of diseases, such as organic farming and biological control. Among its main achievements are the biological control of the coffee borer beetle in the 1920s in Brazil, the discovery of bradykinin, and the production of vaccines against Newcastle disease, foot-and-mouth disease and the black plague in pigs. At the beginning of the 20th century, Brazil was an important world coffee supplier in the international commodities markets. Especially in the state of São Paulo, coffee became a major source of income from exports, and newly rich coffee barons were sprouting all over the state. In the early 1920s, coffee farmers in the state of São Paulo were having a hard time controlling the coffee borer beetle ("Hypothenemus hampei"), a bug that destroys coffee berries by perforating them (perforated coffee berries have no value in the commodities market). Gabriel Ribeiro dos Santos, the Secretary of Agriculture of the state of São Paulo at that time, organised a commission of scientists in May 1924 to identify the coffee borer beetle and prevent further losses in the coffee fields | Biology | https://en.wikipedia.org/wiki?curid=11928230 | Instituto Biológico | 194,038
Instituto Biológico A report was delivered to the Secretary of Agriculture, and the actual research started in the same year with Arthur Neiva, Adalberto Queiros Teles and Edmundo Navarro, who worked in two chemistry and entomology laboratories. The goal of the Commission was to learn more about the parasite, and hence discover effective ways of preventing its spread. The ongoing studies were widely publicised among more than 1,300 coffee farms, so that the results of the research could be applied in the field. Arthur Neiva concluded the research at the end of the year, and the results from such a massive scientific and technical experiment soon arrived: the damage caused by the beetle was finally under biological control. By importing the ectoparasitoid "Prorops nasuta" from Uganda and using it against the coffee borer beetle, the Commission was able to mitigate the losses in the coffee farms. The catastrophic outbreak of the coffee borer beetle, which caught both farmers and the government unprepared, and the subsequent rapid control of the bug, founded on scientific research, showed politicians that it was impossible to protect agriculture from parasites and diseases without a permanent phytosanitary organisation, based on active research and specialised technicians and scientists | Biology | https://en.wikipedia.org/wiki?curid=11928230 | Instituto Biológico | 194,039
Instituto Biológico On 26 December 1927, a law enacted the creation of the Instituto Biológico de Defesa Agrícola e Animal (Biological Institute of Agricultural and Veterinary Defence); its current name was adopted in 1937. In 1928, an area of 239,000 square metres near Ibirapuera Park, known as "Campo do Barreto", was donated to the Institute for the construction of its research centre. The construction works took 17 years to complete, and the building was finally inaugurated on 25 January 1945. Some of the construction materials were donated by private firms and wealthy individuals from the farming elites of the time. | Biology | https://en.wikipedia.org/wiki?curid=11928230 | Instituto Biológico | 194,040
Corepressor In the field of molecular biology, a corepressor is a substance that inhibits the expression of genes. In prokaryotes, corepressors are small molecules, whereas in eukaryotes, corepressors are proteins. A corepressor does not directly bind to DNA, but instead indirectly regulates gene expression by binding to repressors. A corepressor downregulates (or represses) the expression of genes by binding to and activating a repressor transcription factor. The repressor in turn binds to a gene's operator sequence (the segment of DNA to which a transcription factor binds to regulate gene expression), thereby blocking transcription of that gene. In prokaryotes, the term corepressor is used to denote the activating ligand of a repressor protein. For example, the "E. coli" tryptophan repressor (TrpR) is only able to bind to DNA and repress transcription of the "trp" operon when its corepressor tryptophan is bound to it. TrpR in the absence of tryptophan is known as an aporepressor and is inactive in repressing gene transcription. The "trp" operon encodes enzymes responsible for the synthesis of tryptophan, so TrpR provides a negative feedback mechanism that regulates the biosynthesis of tryptophan. In short, tryptophan acts as a corepressor for its own biosynthesis. In eukaryotes, a corepressor is a protein that binds to transcription factors. In the absence of corepressors and in the presence of coactivators, transcription factors upregulate gene expression | Biology | https://en.wikipedia.org/wiki?curid=11933545 | Corepressor | 194,041
Corepressor Coactivators and corepressors compete for the same binding sites on transcription factors. A second mechanism by which corepressors may repress transcriptional initiation when bound to transcription factor/DNA complexes is by recruiting histone deacetylases, which catalyze the removal of acetyl groups from lysine residues. This increases the positive charge on histones, which strengthens the electrostatic attraction between the positively charged histones and the negatively charged DNA, making the DNA less accessible for transcription. In humans, several dozen to several hundred corepressors are known, depending on the level of confidence with which the characterisation of a protein as a corepressor can be made. | Biology | https://en.wikipedia.org/wiki?curid=11933545 | Corepressor | 194,042
Biosurfactant usually refers to surfactants of microbial origin. Most of the biosurfactants produced by microbes are synthesized extracellularly, and many microbes are known to produce biosurfactants in large relative quantities. Some are of commercial interest. Microbial biosurfactants are obtained by including immiscible liquids in the growth medium. Potential applications include herbicide and pesticide formulations, detergents, healthcare and cosmetics, pulp and paper, coal, textiles, ceramic processing and food industries, uranium ore-processing, and mechanical dewatering of peat. Biosurfactants enhance the emulsification of hydrocarbons; thus they have the potential to solubilise hydrocarbon contaminants and increase their availability for microbial degradation. These compounds can also be used in enhanced oil recovery and may be considered for other potential applications in environmental protection. | Biology | https://en.wikipedia.org/wiki?curid=11935297 | Biosurfactant | 194,043
Ladd's bands Ladd's bands, sometimes called bands of Ladd, are fibrous stalks of peritoneal tissue that attach the cecum to the retroperitoneum in the right lower quadrant (RLQ). Obstructing Ladd's bands are associated with malrotation of the intestine, a developmental disorder in which the cecum is found in the right upper quadrant (RUQ) instead of its normal anatomical position in the RLQ. The bands then pass over the second part of the duodenum, causing extrinsic compression and obstruction. This clinically manifests as poor feeding and bilious vomiting in neonates. Screening can be performed with an upper GI series. The most severe complication of malrotation is midgut volvulus, in which the mesenteric base twists around the superior mesenteric artery, compromising intestinal perfusion and leading to bowel necrosis. A surgical operation called a "Ladd procedure" is performed to alleviate intestinal malrotation. The procedure involves counterclockwise detorsion of the bowel, surgical division of Ladd's bands, widening of the small intestine's mesentery, performing an appendectomy, and reorientation of the small bowel on the right and the cecum and colon on the left (the appendectomy is performed so as not to be confused by an atypical presentation of appendicitis at a later date). Most Ladd surgical repairs take place in infancy or childhood. Ladd's bands and the Ladd procedure are named after the American pediatric surgeon William Edwards Ladd (1880–1967). | Biology | https://en.wikipedia.org/wiki?curid=11935942 | Ladd's bands | 194,044
Genetically modified plant Genetically modified plants have been engineered for scientific research, to create new colours in plants, to deliver vaccines, and to create enhanced crops. Many plant cells are pluripotent, meaning that a single cell from a mature plant can be harvested and then, under the right conditions, form a new plant. This ability can be taken advantage of by genetic engineers; by selecting for cells that have been successfully transformed, a new plant can then be grown that contains the transgene in every cell, through a process known as tissue culture. Many of the advances in the field of genetic engineering have come from experimentation with tobacco. Major advances in tissue culture and plant cellular mechanisms for a wide range of plants have originated from systems developed in tobacco. It was the first plant to be genetically engineered and is considered a model organism not only for genetic engineering, but for a range of other fields. As such, the transgenic tools and procedures are well established, making it one of the easiest plants to transform. Another major model organism relevant to genetic engineering is "Arabidopsis thaliana". Its small genome and short life cycle make it easy to manipulate, and it contains many homologs to important crop species. It was the first plant sequenced, has abundant bioinformatic resources and can be transformed by simply dipping a flower in a transformed "Agrobacterium" solution. In research, plants are engineered to help discover the functions of certain genes | Biology | https://en.wikipedia.org/wiki?curid=11943240 | Genetically modified plant | 194,045
Genetically modified plant The simplest way to do this is to remove the gene and see what phenotype develops compared to the wild type form. Any differences are possibly the result of the missing gene. Unlike mutagenesis, genetic engineering allows targeted removal without disrupting other genes in the organism. Some genes are only expressed in certain tissues, so reporter genes, like GUS, can be attached to the gene of interest, allowing visualisation of the location. Another way to test a gene is to alter it slightly and then return it to the plant and see if it still has the same effect on phenotype. Other strategies include attaching the gene to a strong promoter to see what happens when it is overexpressed, or forcing a gene to be expressed in a different location or at different developmental stages. Some genetically modified plants are purely ornamental. They are modified for flower colour, fragrance, flower shape and plant architecture. The first genetically modified ornamentals to be commercialised had altered colour. Carnations were released in 1997, with the most popular genetically modified organism, a blue rose (actually lavender or mauve), created in 2004. The roses are sold in Japan, the United States, and Canada. Other genetically modified ornamentals include "Chrysanthemum" and "Petunia". As well as increasing aesthetic value, there are plans to develop ornamentals that use less water or are resistant to the cold, which would allow them to be grown outside their natural environments | Biology | https://en.wikipedia.org/wiki?curid=11943240 | Genetically modified plant | 194,046
Genetically modified plant It has been proposed to genetically modify some plant species threatened by extinction to be resistant to invasive pests and diseases, such as the emerald ash borer in North America and the fungal disease "Ceratocystis platani" in European plane trees. The papaya ringspot virus (PRSV) devastated papaya trees in Hawaii in the twentieth century until transgenic papaya plants were given pathogen-derived resistance. However, genetic modification for conservation in plants remains mainly speculative. A unique concern is that a transgenic species may no longer bear enough resemblance to the original species to truly claim that the original species is being conserved. Instead, the transgenic species may be genetically different enough to be considered a new species, thus diminishing the conservation worth of genetic modification. Genetically modified crops are genetically modified plants that are used in agriculture. The first generation of crops were used for animal or human food and provided resistance to certain pests, diseases, environmental conditions, spoilage or chemical treatments (e.g. resistance to a herbicide). The second generation of crops aimed to improve the quality, often by altering the nutrient profile. Third generation genetically modified crops can be used for non-food purposes, including the production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation | Biology | https://en.wikipedia.org/wiki?curid=11943240 | Genetically modified plant | 194,047
Genetically modified plant There are three main aims to agricultural advancement: increased production, improved conditions for agricultural workers and sustainability. GM crops contribute by improving harvests through reducing insect pressure, increasing nutrient value and tolerating different abiotic stresses. Despite this potential, as of 2018, the commercialised crops are limited mostly to cash crops like cotton, soybean, maize and canola, and the vast majority of the introduced traits provide either herbicide tolerance or insect resistance. Soybeans accounted for half of all genetically modified crops planted in 2014. Adoption by farmers has been rapid: between 1996 and 2013, the total surface area of land cultivated with GM crops increased by a factor of 100, from 17,500 km² to 1,750,000 km² (432 million acres). Geographically, though, the spread has been very uneven, with strong growth in the Americas and parts of Asia and little in Europe and Africa. Its socioeconomic spread has been more even, with approximately 54% of worldwide GM crops grown in developing countries in 2013. The majority of GM crops have been modified to be resistant to selected herbicides, usually a glyphosate- or glufosinate-based one. Genetically modified crops engineered to resist herbicides are now more available than conventionally bred resistant varieties; in the USA 93% of soybeans and most of the GM maize grown are glyphosate tolerant. Most currently available genes used to engineer insect resistance come from the "Bacillus thuringiensis" bacterium | Biology | https://en.wikipedia.org/wiki?curid=11943240 | Genetically modified plant | 194,048
Genetically modified plant Most are in the form of delta endotoxin genes known as cry proteins, while a few use the genes that encode for vegetative insecticidal proteins. The only gene commercially used to provide insect protection that does not originate from "B. thuringiensis" is the cowpea trypsin inhibitor (CpTI). CpTI was first approved for use in cotton in 1999 and is currently undergoing trials in rice. Less than one percent of GM crops contained other traits, which include providing virus resistance, delaying senescence, modifying flower colour and altering the plant's composition. Golden rice is the most well known GM crop aimed at increasing nutrient value. It has been engineered with three genes that biosynthesise beta-carotene, a precursor of vitamin A, in the edible parts of rice. It is intended to produce a fortified food to be grown and consumed in areas with a shortage of dietary vitamin A, a deficiency which each year is estimated to kill 670,000 children under the age of 5 and cause an additional 500,000 cases of irreversible childhood blindness. The original golden rice produced 1.6 μg/g of the carotenoids, with further development increasing this 23-fold. In 2018 it gained its first approvals for use as food. Plants and plant cells have been genetically engineered for production of biopharmaceuticals in bioreactors, a process known as pharming. Work has been done with duckweed "Lemna minor", the algae "Chlamydomonas reinhardtii" and the moss "Physcomitrella patens" | Biology | https://en.wikipedia.org/wiki?curid=11943240 | Genetically modified plant | 194,049
Genetically modified plant Biopharmaceuticals produced include cytokines, hormones, antibodies, enzymes and vaccines, most of which are accumulated in the plant seeds. Many drugs also contain natural plant ingredients, and the pathways that lead to their production have been genetically altered or transferred to other plant species to produce greater volume and better products. Other options for bioreactors are biopolymers and biofuels. Unlike bacteria, plants can modify the proteins post-translationally, allowing them to make more complex molecules. They also pose less risk of being contaminated. Therapeutics have been cultured in transgenic carrot and tobacco cells, including a drug treatment for Gaucher's disease. Vaccine production and storage has great potential in transgenic plants. Vaccines are expensive to produce, transport and administer, so having a system that could produce them locally would allow greater access to poorer and developing areas. As well as purifying vaccines expressed in plants, it is also possible to produce edible vaccines in plants. Edible vaccines stimulate the immune system when ingested to protect against certain diseases. Being stored in plants reduces the long-term cost, as they can be disseminated without the need for cold storage, do not need to be purified, and have long-term stability. Being housed within plant cells also provides some protection from gut acids upon digestion | Biology | https://en.wikipedia.org/wiki?curid=11943240 | Genetically modified plant | 194,050
Genetically modified plant However the cost of developing, regulating and containing transgenic plants is high, leading to most current plant-based vaccine development being applied to veterinary medicine, where the controls are not as strict. | Biology | https://en.wikipedia.org/wiki?curid=11943240 | Genetically modified plant | 194,051 |
Respirocyte Respirocytes are hypothetical, microscopic, artificial red blood cells that are intended to emulate the function of their organic counterparts, so as to supplement or replace the function of much of the human body's normal respiratory system. Respirocytes were proposed by Robert A. Freitas Jr in his 1998 paper "A Mechanical Artificial Red Blood Cell: Exploratory Design in Medical Nanotechnology". Respirocytes are an example of molecular nanotechnology, a field of technology still in the very earliest, purely hypothetical phase of development. Current technology is not sufficient to build a respirocyte due to considerations of power, atomic-scale manipulation, immune reaction or toxicity, computation and communication. Freitas proposed a spherical robot made up of 18 billion atoms arranged as a tiny pressure tank, which would be filled up with oxygen and carbon dioxide. In Freitas' proposal, each respirocyte could store and transport 236 times more oxygen than a natural red blood cell, and could release it in a more controlled manner. Freitas has also proposed "microbivore" robots that would attack pathogens in the manner of white blood cells. | Biology | https://en.wikipedia.org/wiki?curid=11946749 | Respirocyte | 194,052 |
Protein dispersibility index The Protein Dispersibility Index (PDI) is a means of comparing the solubility of a protein in water, and is widely used in the soybean product industry. A sample of the soybeans is ground, mixed with a specific quantity of water, and the two are then blended together at a specific rpm for a specific time. The resulting mixture and the original bean flour then have their protein content measured using a combustion test, and the PDI is calculated as the percentage of protein in the mix divided by the percentage in the flour, multiplied by 100; a PDI of 100 therefore indicates total solubility. It has been shown that the PDI can be affected not only by the type of soybean used, but also by manufacturing processes; heat has been shown to lower the PDI. The PDI required of a soy flour depends on the purpose to which the soybeans are to be put. Manufacturers of soymilk and tofu products want a high PDI to ensure the maximum protein content in their products. However, manufacturers of soy-based fish feed require a low PDI to avoid loss of valuable protein into the surrounding water. | Biology | https://en.wikipedia.org/wiki?curid=11960698 | Protein dispersibility index | 194,053
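The ratio described above can be sketched in a few lines of Python (the function and variable names are illustrative, not drawn from any standard method specification):

```python
def protein_dispersibility_index(protein_in_dispersion: float,
                                 protein_in_flour: float) -> float:
    """PDI = (% water-dispersible protein / % total protein in flour) * 100."""
    if protein_in_flour <= 0:
        raise ValueError("flour protein content must be positive")
    return 100.0 * protein_in_dispersion / protein_in_flour

# A flour with 40% protein whose water dispersion measures 36% protein:
print(protein_dispersibility_index(36.0, 40.0))  # 90.0
```

A PDI of 100 means every measured protein percentage point in the flour reappears in the water dispersion, i.e. total solubility.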
Lau event The Lau event was the last of three relatively minor mass extinctions (the Ireviken, Mulde, and Lau events) during the Silurian period. It had a major effect on the conodont fauna, but barely scathed the graptolites. It coincided with a global low point in sea level and was closely followed by an excursion in geochemical isotopes in the ensuing late Ludfordian faunal stage and a change in depositional regime. The Lau event started at the beginning of the late Ludfordian, a subdivision of the Ludlow stage, about 424 million years ago. Its strata are best exposed in Gotland, Sweden, the event taking its name from the parish of Lau. Its base is set at the first extinction datum, in the Eke beds, and despite a scarcity of data, it is apparent that most major groups suffered an increase in extinction rate during the event; major changes are observed worldwide in correlated rocks, with a "crisis" observed in populations of conodonts and graptolites. More precisely, conodonts suffered in the Lau event, and graptolites in the subsequent isotopic excursion. Local extinctions may have played a role in many places, especially the increasingly enclosed Welsh basin; the event's relatively high severity rating of 6.2 does not change the fact that many life-forms became re-established shortly after the event, presumably surviving in refugia or in environments that have not been preserved in the geological record. Although life persisted after the event, community structures were permanently altered and many lifeforms failed to regain the niches they had occupied before the event | Biology | https://en.wikipedia.org/wiki?curid=11965281 | Lau event | 194,054
Lau event A peak in δ13C, accompanied by fluctuations in other isotope concentrations, is often associated with mass extinctions. Some workers have attempted to explain this event in terms of climate or sea level change – perhaps arising due to a build-up of glaciers; however, such factors alone do not appear to be sufficient to explain the events. An alternative hypothesis is that changes in ocean mixing were responsible. An increase in density is required to make water downwell; the cause of this densification may have changed from hypersalinity (due to ice formation and evaporation) to temperature (due to water cooling). The δ13C curve slightly lags the conodont extinctions, hence the two events may not represent the same thing. Therefore, the term "Lau event" is used only for the extinction, not the following isotopic activity, which is named after the time period in which it occurred. Loydell suggests many causes of the isotopic excursion, including increased carbon burial, increased carbonate weathering, changes in atmospheric and oceanic interactions, changes in primary production, and changes in humidity or aridity. He uses a correlation between the events and glacially induced global sea level change to suggest that carbonate weathering is the major player, with other factors playing a less significant role | Biology | https://en.wikipedia.org/wiki?curid=11965281 | Lau event | 194,055
Lau event Profound sedimentary changes occurred at the beginning of the Lau event; these are probably associated with the onset of sea level rise, which continued through the event, reaching a high point at the time of deposition of the Burgsvik beds, after the event. These changes appear to display anachronism, marked by an increase in erosional surfaces and the return of flat-pebbled conglomerates in the Eke beds. This is further evidence of a major blow to ecosystems of the time – such deposits can only form in conditions similar to those of the early Cambrian period, when life as we know it was only just becoming established. Indeed, stromatolites, which rarely form in the presence of abundant higher life forms, are observed during the Lau event and, occasionally, in the overlying Burgsvik beds; microbial colonies of "Rothpletzella" and "Wetheredella" become abundant. This suite of characteristics is common to the larger end-Ordovician and end-Permian extinctions. | Biology | https://en.wikipedia.org/wiki?curid=11965281 | Lau event | 194,056
Subclinical seizure A subclinical seizure is a seizure that does not present any clinical signs or symptoms. Such seizures are often experienced by people with epilepsy, in whom an electroencephalogram (EEG) trace will show abnormal brain activity, usually for a short time, while the level of consciousness remains normal. This is often manifest as a single spike on the EEG trace, or as a slowing of brain activity that does not correlate with anything the person notices or experiences. Subclinical seizures can be useful to a neurologist in the diagnosis of epilepsy. | Biology | https://en.wikipedia.org/wiki?curid=11974025 | Subclinical seizure | 194,057
Banana peel A banana peel, also called banana husk or banana skin in British English, is the outer covering of the banana fruit. Banana peels are used as food for animals, in water purification, in the manufacture of several biochemical products, and for jokes and comical situations. There are several methods of removing a peel from a banana. Bananas are a popular fruit consumed worldwide, with a yearly production of over 165 million tonnes in 2011. Once the peel is removed, the fruit can be eaten raw or cooked, and the peel is generally discarded. Because of this removal of the banana peel, a significant amount of organic waste is generated. Banana peels are sometimes used as feedstock for cattle, goats, pigs, monkeys, poultry, rabbits, fish, zebras and several other species, typically on small farms in regions where bananas are grown. There are some concerns over the impact of tannins contained in the peels on animals that consume them. The nutritional value of banana peel depends on the stage of maturity and the cultivar; for example, plantain peels contain less fibre than dessert banana peels, and lignin content increases with ripening (from 7 to 15% of dry matter). On average, banana peels contain 6–9% protein and 20–30% fibre (measured as NDF) on a dry matter basis. Green plantain peels contain 40% starch that is transformed into sugars after ripening. Green banana peels contain much less starch (about 15%) than green plantain peels, while ripe banana peels contain up to 30% free sugars | Biology | https://en.wikipedia.org/wiki?curid=11979055 | Banana peel | 194,058
Banana peel Banana peels are also used for water purification, to produce ethanol, cellulase, laccase, as fertilizer and in composting. is also part of the classic physical comedy slapstick visual gag, the "slipping on a banana peel". This gag was already seen as classic in 1920s America. It can be traced to the late 19th century, when banana peel waste was considered a public hazard in a number of American towns. Although banana peel-slipping jokes date to at least 1854, they became much more popular, beginning in the late-1860s, when the large-scale importation of bananas made them more readily available. Before banana peel jokes came into vogue, orange peels, and sometimes peach skins, or fruit peels/peelings/or skins, generally, were funny, as well as dangerous. Slipping on a banana peel was at one point a real concern with municipal ordinances governing the disposal of the peel. The coefficient of friction of banana peel on a linoleum surface was measured at just 0.07, about half that of lubricated metal on metal. Researchers attribute this to the crushing of the natural polysaccharide follicular gel, releasing a homogenous sol. This unsurprising finding was awarded the 2014 Ig Nobel Prize for physics. Most people peel a banana by cutting or snapping the stem and divide the peel into sections while pulling them away from the bared fruit. Another way of peeling a banana is done in the opposite direction, from the end with the brownish floral residue—a way usually perceived as "upside down" | Biology | https://en.wikipedia.org/wiki?curid=11979055 | Banana peel | 194,059 |
Banana peel This way is also known as the "monkey method", since it is how monkeys are said to peel bananas. When the tip of a banana is pinched with two fingers, it will split and the peel comes off in two clean sections. The inner fibres, or "strings", between the fruit and the peel will remain attached to the peel, and the stem of the banana can be used as a handle when eating the banana. There has been a widespread belief that banana peels contain a psychoactive substance, and that smoking them may produce a "high" or a sense of relaxation. This belief, which may be a rumor or urban legend, is often associated with the 1966 song "Mellow Yellow" by Donovan. A recipe for the extraction of the fictional chemical "bananadine" is found in "The Anarchist Cookbook" of 1971. | Biology | https://en.wikipedia.org/wiki?curid=11979055 | Banana peel | 194,060
Cellulosic ethanol commercialization is the process of building an industry out of methods of turning cellulose-containing organic matter into cellulosic ethanol for use as a biofuel. Companies such as Iogen, POET, DuPont, and Abengoa are building refineries that can process biomass and turn it into bioethanol. Companies such as Diversa, Novozymes, and Dyadic are producing enzymes that could enable a cellulosic ethanol future. The shift from food crop feedstocks to waste residues and native grasses offers significant opportunities for a range of players, from farmers to biotechnology firms, and from project developers to investors. As of 2013, the first commercial-scale plants to produce cellulosic biofuels have begun operating. Multiple pathways for the conversion of different biofuel feedstocks are being used. In the next few years, the cost data of these technologies operating at commercial scale, and their relative performance, will become available. Lessons learnt will lower the costs of the industrial processes involved. Cellulosic ethanol can be produced from a diverse array of feedstocks, such as wood pulp from trees or any plant matter. Instead of taking the grain from wheat, grinding it down to obtain starch and gluten, and then using only the starch, cellulosic ethanol production makes use of the whole crop. This approach should increase yields and reduce the carbon footprint, because the amount of energy-intensive fertilisers and fungicides will remain the same for a higher output of usable material | Biology | https://en.wikipedia.org/wiki?curid=11989180 | Cellulosic ethanol commercialization | 194,061
Cellulosic ethanol commercialization Ethtec is building a pilot plant in Harwood, New South Wales, which uses wood residues as a feedstock. GranBio (formerly known as GraalBio) is building a facility projected to produce 82 million litres of cellulosic ethanol per year. In Canada, Iogen Corporation is a developer of cellulosic ethanol process technology. Iogen has developed a proprietary process and operates a demonstration-scale plant in Ontario. The facility has been designed and engineered to process 40 tons of wheat straw per day into ethanol using enzymes made in an adjacent enzyme manufacturing facility. In 2004, Iogen began delivering its first shipments of cellulosic ethanol into the marketplace. In the near term, the company intends to commercialize its cellulose ethanol process by licensing its technology broadly through turnkey plant construction partnerships. The company is currently evaluating sites in the United States and Canada for its first commercial-scale plant. Lignol Innovations has a pilot plant, which uses wood as a feedstock, in Vancouver. In March 2009, KL Energy Corporation of South Dakota and Prairie Green Renewable Energy of Alberta announced their intention to develop a cellulosic ethanol plant near Hudson Bay, Saskatchewan. The Northeast Saskatchewan Renewable Energy Facility will use KL Energy’s modern design and engineering to produce ethanol from wood waste. Cellulosic ethanol production currently exists at "pilot" and "commercial demonstration" scale, including a plant in China engineered by SunOpta Inc | Biology | https://en.wikipedia.org/wiki?curid=11989180 | Cellulosic ethanol commercialization | 194,062 |