tangledgroup/tangled-llama-a-128k-base-v0.1
# Do Ideal Gases Absorb Heat?
1. Jul 11, 2007
### s.p.q.r
Hi,
Do IDEAL gases absorb heat when they expand? I asked a few people this question, half said yes and half said no.
I'm after a simple yes or no answer with a small explanation to clarify this one.
If anyone knows, please reply.
Cheers!
2. Jul 11, 2007
### belliott4488
Geez, I sure would have thought so ... how else would their temperatures rise? What was the argument against this?
- Bruce
3. Jul 11, 2007
### s.p.q.r
The arguments against are-
- Gases expand because of the heat applied but don't actually take in any heat from around them; the heat/energy increases the activity between the atoms, not within them.
- While gases can take in heat while they are expanding, an expanding gas does not necessarily need to take in heat. For example, during an adiabatic expansion, the gas expands without exchanging heat with its surroundings. The temperature of the gas decreases because its internal energy supplies the work necessary for the gas to expand.
Sounds correct to me. But my 1st thought was that ideal gases do absorb heat. This is a harder question than I thought.
Anyone else have an idea?
4. Jul 11, 2007
### belliott4488
What does "activity between the atom(s)" mean? It sounds like more kinetic energy, which means more heat.
Yes, during adiabatic expansion that's true, but of course, not all expansion is adiabatic. Am I not understanding the question correctly?
5. Jul 11, 2007
### Staff: Mentor
You can make a gas absorb heat or not. It is all a matter of the process. If, for example, you expand a gas through a throttling valve and the valve and pipes are insulated, the gas will expand and cool and not absorb heat. If, for example, you take a non-rigid container of a gas and apply heat to it, the gas will absorb heat and expand.
6. Jul 11, 2007
### Bystander
Free expansion, no heat; make it work to expand, and it absorbs heat --- part of the definition of an "ideal gas."
7. Jul 11, 2007
### Just some guy
Ideal gases can expand isothermally, so I would assume they could absorb heat.
Anyway, ideal gases were meant to be a simple model of a gas that reflects reality as accurately as it can; it's a pretty rubbish model if it forbids isothermal expansions :s
8. Jul 11, 2007
### Bystander
How does free expansion forbid isothermal expansion?
9. Jul 12, 2007
### s.p.q.r
Thank you all for your help. Much appreciated. I am interested in the reply of russ_watters.
So, they won't absorb heat through a throttling valve. (I think I know what that is.)
Will ideal gases absorb heat if they are in a high-pressure container?
Thanks Again.
10. Jul 12, 2007
### Just some guy
I never said it did :/
11. Jul 12, 2007
### Staff: Mentor
Bottom of the page (you may as well read the whole page...):
If you don't apply heat to it, it won't absorb heat.
Last edited: Jul 12, 2007
12. Jul 12, 2007
### s.p.q.r
"If you don't apply heat to it, it won't absorb heat"
What if I did apply heat to it? Will it absorb this heat? If so, to what extent?
Pls get back.
Cheers.
13. Jul 12, 2007
### alvaros
Ideal gases, when they expand (when you allow them to fill a bigger volume), don't absorb heat (don't change their temperature).
Real gases do, because their molecules are attracted to one another.
In ideal gases it is assumed the molecules don't feel any attraction.
14. Jul 12, 2007
### Andrew Mason
There is no correct answer to your question. It is like asking whether a car gains energy when it goes down the road.
You have to apply the first law of thermodynamics to any situation.
$$\Delta Q = \Delta U + \Delta W$$
where $\Delta W$ is the work done by the gas. If in any process, $\Delta Q > 0$ then there is a heat flow into the gas. If $\Delta Q < 0$ then there is a heat flow out of the gas.
If the gas expands, the gas does work, so $\Delta W > 0$. But that does not tell you if heat flows into the gas. You have to know the change in temperature of the gas in this process. If it does not change temperature ($\Delta U = 0$) then Q is positive. If it loses internal energy in an amount that is less than the work done, Q is positive. If it loses more internal energy than the work done, then Q is negative. etc.
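For a concrete check of these signs, take the standard case of an isothermal expansion of an ideal gas: the temperature, and hence the internal energy, does not change, so all of the work done by the gas must be supplied as heat,
$$\Delta Q = \Delta W = \int_{V_1}^{V_2} P\,dV = nRT\ln\frac{V_2}{V_1} > 0 \qquad (V_2 > V_1).$$
In an adiabatic expansion, by contrast, $\Delta Q = 0$ by definition, so the work comes entirely out of the internal energy and the gas cools.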
AM
|
In machine learning, backpropagation (backprop, BP) is a widely used algorithm for training feedforward neural networks; generalizations of backpropagation exist for other artificial neural networks (ANNs) and for functions generally. Applying backpropagation to a computational graph amounts to repeated application of the chain rule: at each training step we need the gradients dJ/dW and dJ/db of the loss J with respect to every layer's weights and biases. The same procedure trains recurrent networks — the only main difference is that the recurrent net must be unfolded through time for a certain number of timesteps (backpropagation through time). The units themselves apply a non-linearity to their summed input; ReLU (Rectified Linear Units), which replaces negative values with 0, is a common choice.
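To make these gradient terms concrete, here is a minimal sketch of one forward and backward pass through the small 3-2-2 network described below (three inputs, two hidden units, two outputs), using a sigmoid non-linearity and a squared-error loss. The random input, target vector, and learning rate are illustrative assumptions, not values from the original article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 1))          # one sample with 3 input features (assumed)
t = np.array([[1.0], [0.0]])         # assumed target for the 2 output units

W1, b1 = rng.normal(size=(2, 3)), np.zeros((2, 1))   # hidden layer: 2 units
W2, b2 = rng.normal(size=(2, 2)), np.zeros((2, 1))   # output layer: 2 units

# Forward propagation: weighted sums plus bias, squashed by the sigmoid.
h = sigmoid(W1 @ x + b1)
y = sigmoid(W2 @ h + b2)
J = 0.5 * np.sum((y - t) ** 2)       # squared-error loss

# Backward pass: repeated chain rule gives dJ/dW and dJ/db for each layer.
delta2 = (y - t) * y * (1 - y)           # error signal at the output layer
dW2, db2 = delta2 @ h.T, delta2
delta1 = (W2.T @ delta2) * h * (1 - h)   # error signal propagated to the hidden layer
dW1, db1 = delta1 @ x.T, delta1

# One gradient-descent step with an assumed learning rate.
lr = 0.1
W2 -= lr * dW2; b2 -= lr * db2
W1 -= lr * dW1; b1 -= lr * db1
print(J)
```

Each `delta` is the error signal at a layer; multiplying it by the activations feeding that layer gives exactly the dJ/dW term discussed above.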
Problems suited to ANNs typically have instances described by many attribute-value pairs, and the target function's output may be discrete-valued, real-valued, or a vector. In forward propagation, inputs are loaded and passed through the network: each node computes the weighted sum of its inputs, adds a bias, applies its activation function, and hands the result to the next layer, so the network produces an output for every example given the current weights. A small concrete case is the 3-2-2 network used in the sketch above: three input neurons, one hidden layer with two neurons, and an output layer with two neurons. Backpropagation is then the step that adjusts the weights so as to minimize the loss; the goal is for the network to learn to map arbitrary inputs to the correct outputs. (Other architectures exist as well — Kohonen self-organising networks, for example, have a two-layer topology and learn without supervision.)
The simplest building block is the single-layer network, or perceptron. The input is multi-dimensional, x = (I1, I2, ..., In), and the output node has a threshold t. The rule of this linear threshold gate — the McCulloch-Pitts model of the neuron — is: if the summed, weighted input reaches t, the node "fires" (output y = 1); else (summed input < t) it doesn't fire (output y = 0). The unit therefore classifies its inputs into two classes.
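The threshold rule above is easy to state in code. Here is a minimal sketch, with the weights and threshold chosen (as an assumption for illustration) so that the unit computes the logical AND of two binary inputs:

```python
import numpy as np

def threshold_gate(x, w, t):
    """Fire (return 1) when the weighted sum of the inputs reaches the threshold t."""
    return 1 if np.dot(w, x) >= t else 0

# Illustrative assumption: weights and threshold that make the unit compute AND.
w, t = np.array([1.0, 1.0]), 1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, threshold_gate(np.array(x), w, t))   # fires only for (1, 1)
```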
Artificial Neural Networks are used in various classification tasks involving images, audio, and words, and different types of networks are used for different purposes: for predicting a sequence of words we use recurrent neural networks (more precisely an LSTM), while for image classification we use convolution neural networks, which run convolution directly on the pixels — for example on an image of dimension 34x34x3. In a typical experiment the data are cut into parts, with roughly 2/3 given to the training procedure and the remaining 1/3 held out for testing. (Other families of algorithms come up in passing here — regression algorithms that try to find a relationship between variables and predict unknown dependent variables from known data, clustering algorithms, whose choice and evaluation depend on the objects being clustered and the task, and gradient boosting, one of the most powerful techniques for building predictive models — but they are not the subject of this note.)
Backpropagation is fast, simple, and easy to program, which is why it is the standard method of training artificial neural networks. The "learning" of the network amounts to starting from a random set of weights and altering them until the network's outputs match the targets in the data set; even when a network never fully converges and remains stuck in a local minimum, it can still reduce the cost significantly and come up with a very complex model with high test accuracy. For a single threshold unit this is the classic perceptron learning algorithm, originally proposed by Frank Rosenblatt and later refined and carefully analyzed by Minsky and Papert in 1969; the perceptron can be trained for a single output unit as well as for multiple output units. The training algorithm is simple. Step 1: initialize the weights, the bias, and the learning rate alpha (for easy calculation and simplicity, the weights and bias may be set to 0 and the learning rate to 1). Then, for each training example, calculate the weighted sum of the inputs, add the bias, compare the result with the target, and update the parameters; for multi-layer networks the update instead uses the partial derivatives of the loss with respect to the weights and the bias, found by backpropagation.
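Here is a minimal sketch of that single-output training loop; the toy AND data set, the number of epochs, and the use of the bias as the threshold are assumptions made for the example:

```python
import numpy as np

# Toy training set (assumed): the AND function on binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targets = np.array([0, 0, 0, 1])

# Step 1: initialize weights and bias to 0 and the learning rate to 1.
w, b, alpha = np.zeros(2), 0.0, 1.0

for epoch in range(10):                          # assumed number of passes
    for x, t in zip(X, targets):
        y = 1 if np.dot(w, x) + b >= 0 else 0    # weighted sum plus bias, then threshold
        w += alpha * (t - y) * x                 # perceptron update rule
        b += alpha * (t - y)

print(w, b)   # after these passes the unit classifies AND correctly
```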
The idea of ANNs is based on the belief that the working of the human brain — a densely interconnected web of roughly 86 billion nerve cells called neurons — can be imitated, by making the right connections, using silicon and wires in place of living neurons and dendrites. Information from other neurons, in the form of electrical impulses, enters the dendrites at connection points called synapses, is processed in the cell body, and a train of impulses is then sent down the axon to the synapses of other neurons. The brain represents information in a distributed way because individual neurons are unreliable and can die at any time, and it changes its connectivity over time to represent new information and the requirements imposed on it; it takes only on the order of 10^-1 seconds to make surprisingly complex decisions. The study of artificial neural systems is motivated by the attempt to capture this kind of highly parallel computation based on distributed representations.
An Artificial Neural Network is an information-processing paradigm inspired by the brain, built out of a densely interconnected set of simple units arranged in layers: the first layer is the input layer, the only layer exposed to external signals; it transmits signals to the neurons of the hidden layer, which extracts relevant features or patterns; and those features are passed to the final, output layer. In the feedforward step the data pass through the model layer by layer, and an error (loss) function — cross entropy and squared loss are common choices — measures how far the output is from the target. Backpropagation then computes the gradients of all learnable parameters efficiently and conveniently, and the weights that minimize the error function are taken as the solution to the learning problem. The algorithm was originally introduced in the 1970s, but its importance wasn't fully appreciated until the famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams. Applying the update rules depends on differentiating the activation function, which is one reason the Heaviside step function is not used (it is discontinuous and thus non-differentiable). Updating on small batches rather than the complete dataset makes each step faster. ANNs can require long training times, depending on the number of weights in the network, the number of training examples considered, and the settings of the learning-algorithm parameters, but the learned models are quite robust to noise in the training data and evaluate new examples quickly, which matters where fast evaluation of the learned target function is required.
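Because the updates require differentiating the activation, smooth non-linearities are used in place of the step function. Below is a small sketch of the three activations mentioned in this note, together with the derivatives backpropagation uses; the test values are arbitrary:

```python
import numpy as np

def sigmoid(z):                     # squashes real input into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def dsigmoid(z):
    s = sigmoid(z)
    return s * (1 - s)

def tanh(z):                        # squashes real input into the range (-1, 1)
    return np.tanh(z)

def dtanh(z):
    return 1 - np.tanh(z) ** 2

def relu(z):                        # Rectified Linear Unit: replaces negative values with 0
    return np.maximum(0, z)

def drelu(z):                       # derivative is 0 for negative input, 1 for positive
    return (z > 0).astype(float)

z = np.array([-2.0, -0.5, 0.5, 2.0])
for f, df in [(sigmoid, dsigmoid), (tanh, dtanh), (relu, drelu)]:
    print(f.__name__, f(z), df(z))
```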
A single-layer perceptron, however, can only learn linear functions: it can only classify linearly separable sets of vectors, i.e., those for which a straight line or plane can be drawn to separate the input vectors into their correct categories, so it can never compute the XOR function. A multi-layer perceptron can also learn non-linear functions, which is why backpropagation-trained multi-layer networks are the usual remedy — and in practice you could observe the whole training process as a black box and ignore its details.
Convolution Neural Networks (covnets) are neural networks that share their parameters. An image can be represented as a cuboid having a length and width (the dimensions of the image) and a height (images generally have red, green, and blue channels). A covnets is a sequence of layers, and every layer transforms one volume to another through a differentiable function. Imagine taking a small patch of the image and running a small neural network on it with, say, k outputs; during the forward pass we slide each such filter across the whole input volume step by step, where each step is called the stride (which can be 2 or 3 or even 4 for high-dimensional images), and compute the dot product between the filter weights and the patch from the input volume. Stacking the responses of all the filters gives an output volume whose depth equals the number of filters: instead of just the R, G and B channels we now have more channels but lesser width and height, and because each filter sees only a small patch we have far fewer weights than a fully connected layer.
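Here is a minimal sketch of that sliding-filter computation, using the 34x34x3 image size from the earlier example; the filter size, filter count, and stride are illustrative assumptions:

```python
import numpy as np

def conv_forward(volume, filters, stride=1):
    """Naive valid convolution: slide each filter over the volume and take dot products."""
    H, W, C = volume.shape                 # e.g. 34 x 34 x 3
    k, kH, kW, _ = filters.shape           # k filters of size kH x kW x C
    outH = (H - kH) // stride + 1
    outW = (W - kW) // stride + 1
    out = np.zeros((outH, outW, k))        # output depth equals the number of filters
    for f in range(k):
        for i in range(outH):
            for j in range(outW):
                patch = volume[i*stride:i*stride+kH, j*stride:j*stride+kW, :]
                out[i, j, f] = np.sum(patch * filters[f])
    return out

image = np.random.rand(34, 34, 3)          # the 34x34x3 example image
filters = np.random.rand(8, 5, 5, 3)       # 8 assumed 5x5 filters
print(conv_forward(image, filters, stride=1).shape)   # (30, 30, 8): smaller width/height, more channels
```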
To summarize: backpropagation follows from the chain rule and the product rule of differential calculus, applied through all layers of the network from the output back to the input, and this is what makes the computation of the derivatives of the cost function efficient. Artificial neurons compute fast (less than a nanosecond per computation) while biological neurons are slow (on the order of a millisecond per computation); it is the brain's massively parallel, distributed style of computation, not its raw speed, that artificial neural systems try to capture. In practice the training set is often clustered into small groups of 'n' training examples (mini-batches), so that each update uses only part of the data, and in every iteration we backpropagate through the model to obtain the gradient of the loss with respect to each learnable parameter — in a convolutional network these are the filter weights and biases of each layer, e.g. W1, b1, W2, b2, W3, b3. Fine-tuning the weights with these gradients is what reduces the error rate and makes the model reliable. For further reading, the Stanford Convolution Neural Network Course (CS231n) covers this material in depth, and frameworks such as TensorFlow compute the same gradients automatically for practical image-classification models.
|
# News this Week
Science 01 Sep 2000:
Vol. 289, Issue 5484, pp. 1442
1. NIH GUIDELINES
# Researchers Get Green Light for Work on Stem Cells
1. Gretchen Vogel
The biomedical community is moving quickly to take advantage of new guidelines from the National Institutes of Health (NIH) for use of human pluripotent stem cells. And so far there are no signs that opponents plan any immediate action to stop the first round of research proposals from being reviewed by an NIH panel.
The final guidelines, issued last week, allow NIH-funded researchers to derive pluripotent stem cells from fetal tissue, but not from embryos. Scientists may also work with embryonic stem cells, but may obtain them only from private sources and must ensure that derivation meets certain ethical conditions (see box). For example, embryos used to derive cell lines must be freely donated to research as excess embryos created during fertility treatments.
The NIH spent nearly a year finalizing the guidelines, which researchers hope will allow work leading to the improved treatment of diabetes, Parkinson's, and other diseases. Because the cells are derived from human embryos or fetal tissue, groups who oppose fetal tissue research and abortion have lobbied to block federal funding for such research. NIH received 50,000 public comments on their draft—including thousands of preprinted postcards from opponents.
Indeed, federal law prohibits NIH from funding work that harms or destroys a human embryo, but a lawyer for the Department of Health and Human Services, NIH's parent agency, ruled in January 1999 that stem cell lines derived from embryos by privately funded scientists could be eligible for funding (Science, 22 January 1999, p. 465). The final guidelines, issued on 23 August, spell out the ethical requirements for scientists who hope to work with such cells.
Scientists will need to submit evidence to NIH that the cells they wish to use comply with the guidelines. A committee called the Human Pluripotent Stem Cell Review Group will decide whether the cells qualify for funding. At the same time, the grant application will be judged for scientific merit by a scientific review board. NIH officials say the stem cell committee will meet in December to review applications received by 15 November. Approved applications that receive high marks in peer review will be passed along to the appropriate institute for funding decisions. Despite the multiple layers of review, NIH associate director for science policy Lana Skirboll says that scientists who apply by November could receive funding as early as January.
Patient advocacy groups, many scientists, and even President Bill Clinton praised the new guidelines. In remarks to reporters last week, Clinton said stem cell research will have “potentially staggering benefits.” Tim Leshan of the American Society for Cell Biology said the guidelines “will certainly allow federally funded scientists to do the work that they want to do.” However, some legislators said they were appalled and vowed to fight the guidelines. Representative Jay Dickey (R-AR) said the guidelines show “obvious disregard of the moral conscience and the laws of our nation.” The guidelines are illegal, he says, and will be opposed either through the courts or through legislation next year to block NIH from funding any research involving the cells.
The guidelines require researchers to present documentation with their grant application that the stem cells were derived properly. The embryo must have been left over after fertility treatments, the donors cannot receive any compensation for their donation, and they may not designate specific recipients of the cells. To ensure that embryos are surplus, eligible cell lines must be derived from embryos that were frozen. Donors must be informed that the cells derived from the donated embryo may be used indefinitely, possibly even for commercial purposes.
The new rules also address several problems raised by researchers reviewing the earlier draft, including a requirement that anything that might identify the donors of the embryo be removed from the records. Scientists pointed out that such cells would not pass Food and Drug Administration requirements for cell therapies, which require extensive documentation of a cell line's history. The new guidelines require the donors to be informed of whether identifiers will be kept with the cells.
James Thomson of the University of Wisconsin, Madison, the first to derive human embryonic stem cells, says his donations were anonymous. So there is no way to trace the precise origins of the cells, some of which may have been derived from embryos that were not frozen. If his current cell lines are not approved, he says, he will derive new ones, a process that could take months. John Gearhart of the Johns Hopkins University in Baltimore, who derived pluripotent stem cells from fetal tissue concurrently with Thomson, says he also will ask NIH to approve his cell lines. He says he received more than 150 requests for collaboration on the day the guidelines were released. Both researchers derived their cells with funding from Geron Corp., a biotech company in Menlo Park, California.
The University of Wisconsin has set up a nonprofit institute called WiCell to distribute Thomson's cell lines (Science, 11 February, p. 948). However, in its first 10 months of existence, the institute has made only a “half-dozen” agreements with researchers, according to Carl Gulbrandsen, president of WiCell. He says the institute has about 60 agreements pending, which can take months to navigate through the recipient researcher's institution. Although contamination problems also slowed the process down at the beginning, Gulbrandsen says WiCell has sufficient stock on hand to meet the anticipated demand over the next few months.
WiCell may soon have company. In July, the Juvenile Diabetes Foundation (JDF) announced a request for applications for stem cell research, specifically including derivations of human stem cell lines from embryos. JDF's chief scientific officer, Robert Goldstein, says the foundation will also fund researchers who want to use cells from WiCell or Gearhart, but there is a chance that one cell line will work better for certain experiments than others.
Roger Pedersen of the University of California, San Francisco, who has been working on human embryonic stem cells with funding from Geron, calls NIH “courageous” for opening the door to further research. He notes that human cells are quite different from the mouse cells that have shown tantalizing promise—becoming pancreaslike cells and even dopamine-producing brain cells. No one has reported keeping the cells alive without a “feeder” layer of supporting cells, he notes, nor can anyone grow a cell line from a single pluripotent stem cell. “There's a lot of work to be done,” he says—and apparently plenty of people eager to get started.
### WHAT THE GUIDELINES SAY
NIH-funded researchers can work with pluripotent stem cells derived from embryos if privately funded researchers have established the cell line, provided that:
These conditions are met:
• Embryonic stem cell lines must be derived only from frozen embryos created for fertility treatment;
• The decision to donate embryos is separated from fertility treatment; and
• Embryo donors are told they cannot accept financial or other compensation.
And they avoid the following:
• Deriving pluripotent stem cells from embryos;
• Using stem cells from embryos created specifically for research;
• Using stem cells from nuclear transfer technology;
• Combining stem cells with an animal embryo;
• Using stem cells to create or contribute to an embryo.
# New Report Triggers Changes in the NRC
1. Andrew Lawler
Shape up or risk losing customers. A panel of eminent science and engineering administrators has delivered that stern advice to the National Research Council (NRC), the operating arm of the National Academy of Sciences (NAS), in a report on how the council does its business.
The review, led by Purnell Choppin, president emeritus of the Howard Hughes Medical Institute in Chevy Chase, Maryland, and Gerald Dinneen, a retired Honeywell manager, is the first hard look at the structure of the NRC in 2 decades (Science, 28 April, p. 587). It concludes that the council takes too long to produce many of its reports, is not responsive enough to its sponsors, lacks clear lines of authority, and leaves its staff too often frustrated and stressed. To fix these problems, the 15-member panel urges the academy “to reduce unnecessary layers of approval,” delegate more authority, appoint a chief management officer, and create “a service-oriented culture.” If NRC leaders don't act, the panel warns, “sponsors may look elsewhere for advice.”
The academy's senior leaders don't quibble with the recommendations, which were blessed by the NRC's governing board at a meeting earlier this month in Woods Hole, Massachusetts. Indeed, “many of the recommendations are being followed through already,” notes Mary Jane Osborn, a member of the panel and a biologist at the University of Connecticut Health Center in Farmington. “We want all of our reports to be done well, on time, and on budget,” says NAS President Bruce Alberts.
The proposals would affect not only the 1000 NRC staffers but also the nearly 6000 outside scientists and engineers who serve each year as volunteers on the council's committees, boards, and commissions. The most radical idea would revamp the council's internal structure by merging the 11 commissions that oversee the boards, which in turn oversee the production of reports, into six new divisions. The commissions, arranged largely by clusters of discipline, have been criticized as a bottleneck in the arduous and complex process of approving NRC studies.
The new divisions would have more authority and responsibility and share one administrative system. They would be organized around broad themes: education and social matters; physics, astronomy, engineering, and energy; food and health; biology, earth sciences, and environment; policy; and transportation. That grouping, panel members say, will allow greater synergy among disciplines. The scores of boards and committees would remain the backbone of the organization, with NRC managers striving over time to reduce their overall number.
The task force is blunt in its assessment of the council's effectiveness at satisfying its customers—typically federal agencies. “Poor project management and delays in the review process,” it notes, too often result in late delivery of the reports, which are the NRC's bread and butter. The solution, says the panel, is “a more service-oriented approach” reinforced by incentives to meet budget and time goals. One option is more fast-track studies, although Alberts says that reports done in 6 to 8 months “are unlikely to become routine.” The panel also suggests that the council consider holding roundtables as a substitute for the lengthy review process.
The governing board should look at the bigger picture and leave the details to others, according to the panel. In particular, the panel says Alberts should shift some duties to his fellow presidents, who lead the National Academy of Engineering and Institute of Medicine, and give responsibility for daily operations to a chief management officer, who will be current Executive Officer William Colglazier. “As president, I plan to rely on a more focused staff management structure, reporting through [Colglazier],” says Alberts.
The panel had more trouble with the issue of broadening the pool of volunteers. It found that “there is too much reliance on a limited number of known individuals,” and too few women and minorities are tapped early in their careers. Yet only eight of 128 people who responded to a question about expanding participation in NRC studies suggested adding minorities, women, or young researchers to council bodies. Despite some carping, volunteers seem pleased with how the NRC operates. A survey of nearly 1500 people found that 87% would serve again, and 92% were satisfied or very satisfied with the quality of the NRC work.
With regard to staff, Alberts says he will emphasize professional development and improving communication “so that help can be provided before things go wrong.” The initial reaction to the proposals by staff seems positive. “People aren't jumping up and down,” says one staffer who requested anonymity, “but we're optimistic.” Colglazier says the plan will be finalized in November and implemented by the end of the year.
3. SCIENTIFIC PUBLISHING
# Chemists Toy With the Preprint Future
1. Robert F. Service
After watching their physics colleagues explore the digital landscape of electronic preprints over the past decade, chemists are sending out a survey party of their own. Last week, the giant publishing house Elsevier Science launched the first electronic archive for chemistry preprints through its ChemWeb subsidiary. The new site (preprint.chemweb.com) will be a common repository for reports on a wide range of chemistry topics and a forum for authors and readers to discuss the results. But ChemWeb could face an uphill battle in convincing authors to post their papers on the site, as many of the field's premier journals decline to accept papers that have already been posted on the Web.
ChemWeb's new preprint service is modeled closely on the physics preprint archive started in 1991 by Paul Ginsparg at Los Alamos National Laboratory in New Mexico, which today serves as a storehouse for some 146,000 articles. Although readers of the new chemistry preprints will be able to rank the papers, there will be no formal peer review, says ChemWeb's preprint manager James Weeks. The service is free to both authors and readers. (They need only register with ChemWeb, which is also free.) ChemWeb, says Weeks, hopes that its new service will generate enough Internet traffic to lure advertisers to fund the site.
For now, about all the site is attracting is heated debate. “A preprint server is highly controversial among chemists,” said Daryle Busch, president of the American Chemical Society (ACS), speaking at the society's national meeting in Washington, D.C., last week. Busch, a chemist at the University of Kansas, Lawrence, says he and his colleagues are lured by the Web's speed, wide dissemination, and low cost of publishing new scientific results. But many researchers fear that the absence of peer review will reduce the quality of submissions and force readers to wade through electronic mounds of poor-quality results in search of tidbits of worthwhile science. Says Peter Stang, a chemist at the University of Utah, Salt Lake City, “It's a dilemma.”
Apparently, it's one that a broad cross section of chemists are struggling with. According to Robert Bovenschulte, head of ACS publications, the association conducted a survey of some 8000 of its members last summer on the question of non-peer-reviewed electronic preprints. The results “are a very mixed bag,” Bovenschulte says. “A lot of people were in favor of it. A lot of people were against it.”
Nevertheless, the new preprint archive likely faces a tough future, because ACS journal editors themselves are lined up against it. ACS, the world's largest scientific membership organization, with 161,000 members, also publishes many of the premier journals in the field, including the flagship Journal of the American Chemical Society. But nearly all ACS journal editors consider posting results on the Web to constitute “prior publication,” says Bovenschulte. (Science maintains the same policy.) As a result, Bovenschulte says, those ACS journals will not publish papers that appear first on ChemWeb's preprint server. And that, says Ralph Nuzzo, a chemist at the University of Illinois, Urbana-Champaign, would convince him and most of his colleagues not to post their articles on ChemWeb. “If I couldn't publish my paper [in a conventional journal], I probably wouldn't do it,” Nuzzo says.
In an effort to find a compromise, Weeks says ChemWeb will remove the full text of papers from the site when they are published in a print journal, keeping an abstract and a link to the journal article. But Bovenschulte says ACS journals would still not consider such papers, because the results would already be public knowledge.
Not all journals are playing hardball. Ginsparg points out that American Physical Society journals, including the prominent Physical Review Letters, not only publish articles already posted on the Los Alamos preprint server, but even provide the electronic connections for authors to submit to the journals at the click of a button.
Elsevier's own journals will publish articles that appear first on ChemWeb. Indeed, Elsevier—which is ACS's chief competitor in the chemistry journal publishing business—may be counting on ChemWeb to give its journals an edge among some chemists. Elsevier officials may be hoping that researchers interested in distributing results quickly will then send their articles to Elsevier journals, says Bovenschulte. For Elsevier, he says, “this could be considered a cost of attracting the best authors.”
Whatever the motivation, chemistry preprints are long overdue, says R. Stephen Berry, a chemist at the University of Chicago. The culture among chemists—with their history of close ties to industry—is more conservative than that among physicists, says Berry. Still, Berry believes that chemistry preprints have a shot. “We just have to wait and see if it works,” he says. “But this is the kind of experiment we should be doing.”
4. LIPID RESEARCH
# Possible New Way to Lower Cholesterol
1. Dan Ferber*
1. Dan Ferber is a writer in Urbana, Illinois.
Clinicians may soon be able to mount a multipronged attack against cholesterol, the artery-clogging lipid whose buildup in the body is a major contributor to heart attacks and other cardiovascular diseases. Millions of people take drugs that lower cholesterol levels by blocking the body from making it. But we also consume the lipid in our diet, and today's drugs don't do much to keep our body from taking it in; nor do they take advantage of our body's ways of getting rid of excess cholesterol. New results could change that.
In work reported on page 1524, a team led by molecular pharmacologist David Mangelsdorf of the University of Texas Southwestern Medical Center in Dallas has pinpointed a biological master switch in mice that controls three pathways that work together to both rid the body of excess cholesterol and prevent its absorption from the intestine. “This is a real tour de force,” says Steve Kliewer, senior research investigator at Glaxo Wellcome Inc. in Research Triangle Park, North Carolina. “It's exciting because it suggests an entirely new mechanism for reducing cholesterol.” This might be done, for example, with drugs that turn up the activity of the master switch, a protein known as the retinoid X receptor (RXR).
The findings are a serendipitous outgrowth of previous test tube experiments by several groups showing that RXR teams up with any of several other proteins to turn on genes involved in cholesterol metabolism. For example, the Texas team found 3 years ago that RXR and a protein called the liver X receptor (LXR) work together to activate genes whose protein products are needed in the liver to break down cholesterol to bile acids, which are then excreted into the gut. This suggested that drugs that boost the activity of LXR might help the body rid itself of cholesterol.
To test this idea, postdoc Joyce Repa turned to a drug called LG268, which is a so-called rexinoid. These drugs bind to, and activate, RXR, which then teams up with its partner proteins, including LXR. Thus, the researchers expected that LG268 would boost LXR activity and stimulate bile acid formation.
To test that expectation in mice, Repa gave the drug to animals fed a high-cholesterol diet, which would ordinarily cause cholesterol accumulation in the liver. Sure enough, LG268 reduced these high liver cholesterol levels. But the researchers got a surprise when they conducted a second test. They redid the experiments on mice that cannot make LXR, expecting to see cholesterol pile up in the liver. Instead, the cholesterol content of the animals' livers plummeted. “We couldn't figure out why that was happening,” Mangelsdorf says.
Further tests pointed to the explanation: Rather than speeding cholesterol breakdown to bile acids, LG268 exerts a powerful block on cholesterol absorption from the gut. At first, the researchers had no idea how the drug does this. They tested its effects on about 100 different genes involved in various aspects of lipid metabolism, but the experiments came up empty. Then, about a year ago, a clue appeared.
Other researchers discovered that people with Tangier disease, a rare hereditary condition that causes high blood cholesterol concentrations and severe atherosclerosis, have a defect in a protein called ABC1. They also have very low levels of high-density lipoprotein, which helps rid the body of cholesterol by carrying it back to the liver, the organ where most cholesterol breakdown occurs. “It was just like a light went on,” Mangelsdorf recalls. “Bingo! Maybe [ABC1] was sitting in the intestinal cell and pumping [the cholesterol] back out” so that it wasn't absorbed into the blood, and LG268 was assisting in that process.
That's exactly what seems to be happening. The researchers found that LG268 ups production of ABC1 in cells of the intestinal wall, causing the lipid to pass right through the intestine without being absorbed. What's more, the drug turned out to activate cholesterol transport out of immune cells called macrophages. That's important, because cholesterol-laden macrophages help trigger the formation of artery-blocking atherosclerotic plaques. Activating ABC1 might thus help reverse the early steps of plaque formation, Mangelsdorf says.
The Texas group also found that LG268 stimulates ABC1 production by specifically boosting the activity of RXR-LXR pairs, and it has another surprising effect as well. The drug also boosts the activity of RXR paired with a protein called FXR, a partnership that reduces the production of bile acids by the liver. That should also help inhibit cholesterol absorption, because the bile acids dissolve cholesterol and other lipids in the gut, thus facilitating the absorption of these otherwise water-insoluble materials. Bile acids and cholesterol that fail to be absorbed or reabsorbed by the gut are excreted in the feces.
Despite the cholesterol-lowering potential of the rexinoids, drug researchers caution that the current drugs may not be usable because of their side effects. For example, a rexinoid derived from LG268 is approved for treating certain types of late-stage cancer and is being tested on others, but it raises levels of lipids called triglycerides in the blood, which could worsen obesity and cardiovascular disease. That may be acceptable for people with late-stage cancer who “have no other choice,” says Vincent Giguère, a molecular biologist at McGill University Health Centre in Montreal. But “side effects become a big issue” for otherwise healthy people who may take cholesterol-lowering drugs for decades. Drugs that target LXR rather than RXR might be safer, because they would activate a smaller group of genes, Giguère suggests. Still, he adds, “these findings augur well for the future of cholesterol-controlling drugs.”
5. INFORMATION THEORY
# 'Ultimate PC' Would Be a Hot Little Number
1. Charles Seife
If gigahertz speeds on a personal computer are still too slow, cheer up. Seth Lloyd, a physicist at the Massachusetts Institute of Technology, has calculated how to make PCs almost unimaginably faster—if you don't mind working on a black hole.
Lloyd has used the laws of thermodynamics, information, relativity, and quantum mechanics to figure out the ultimate physical limits on the speed of a computer. His calculations show that, in principle, a kilogram of matter in a liter-sized container could be transformed into an “ultimate laptop” more than a trillion trillion trillion times as powerful as today's fastest supercomputer. Although presented in whimsical terms, other scientists say Lloyd's work marks a victory for those striving to figure out the laws of physics by investigating how nature deals with information.
“It's incredibly interesting—bold,” says Raymond Laflamme, a physicist at the Los Alamos National Laboratory in New Mexico. In addition to its theoretical importance, Laflamme says, the study shows what lies ahead. “Right now we are on roller skates. [Lloyd] says, ‘Let's get on a rocket.’”
Lloyd's unconventional calculations are based on the links between information theory and the laws of thermodynamics, specifically entropy, a measure of the disorder of a system. Imagine dumping four balls into a box divided into four compartments. Roughly speaking, entropy is a measure of how many ways the balls can land in a given kind of arrangement, and hence of how probable that arrangement is. “Ordered” outcomes (such as all four balls landing in a single compartment) are rare and have low entropy, while “disordered” outcomes (such as two balls in one compartment and a single ball in each of two others) are more common and have higher entropy.
In 1948, Bell Labs scientist Claude Shannon realized that the thermodynamic principle of entropy could also apply in the realm of computers and information. In a sense, a system such as a box with balls in it or a container full of gas molecules can act like a computer, and the entropy is related to the amount of information that the “computer” can store. For instance, if you take your box and label the four compartments “00,” “01,” “10,” and “11,” then each ball can store two bits' worth of information. The total amount of information that a physical system can store is related to entropy.
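To make the box-and-balls bookkeeping concrete, here is a minimal sketch (our own illustration, not from Shannon's or Lloyd's work) that counts distinguishable outcomes and converts them to bits, assuming the four compartment labels from the example and distinguishable balls:

```python
import math

# Minimal sketch of the box-and-balls analogy (illustrative only; the
# compartment labels "00", "01", "10", "11" come from the article).
compartments = 4   # labeled 00, 01, 10, 11
balls = 4          # distinguishable balls, as in the article's example

states = compartments ** balls            # number of distinguishable outcomes
bits_per_ball = math.log2(compartments)   # each ball picks one of four labels
total_bits = math.log2(states)            # log2(number of available states)

print(bits_per_ball)  # 2.0 -> two bits per ball
print(total_bits)     # 8.0 -> eight bits for the whole box
```

The point is simply that storable information grows as the logarithm of the number of available states, which is why entropy and storage capacity track each other.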
In the 31 August issue of Nature, Lloyd uses this principle to show that a 1-kilogram, 1-liter laptop could store and process 10^31 bits of information. (A nice-sized hard drive holds about 10^11 bits.) Then he figures out how quickly it could manipulate those bits, invoking Heisenberg's Uncertainty Principle, which implies that the more energy a system has available, the faster it can flip bits. Lloyd's ultimate laptop would convert all of its 1-kilogram mass into energy via Einstein's famous equation E = mc^2, thus turning itself into a billion-degree blob of plasma. “This would present a packaging problem,” Lloyd admits with a laugh. The computer would then be capable of performing 10^51 operations per second, leaving in the dust today's planned peak performer of 10^13 operations per second.
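The headline speed can be sanity-checked with a rough sketch, assuming the bound Lloyd invoked is the Margolus-Levitin limit of roughly 2E/(πℏ) operations per second (the specific form of the bound is our assumption; the 10^51 figure itself is from the article):

```python
import math

# Order-of-magnitude check of the "ultimate laptop" speed, assuming the
# Margolus-Levitin bound ops/s <= 2E / (pi * hbar); the choice of bound is an
# assumption here, while the ~10^51 figure is quoted in the article.
m = 1.0            # kg, mass of the hypothetical laptop
c = 2.998e8        # m/s, speed of light
hbar = 1.0546e-34  # J*s, reduced Planck constant

E = m * c**2                               # energy if the whole mass is converted
ops_per_second = 2 * E / (math.pi * hbar)

print(f"E           = {E:.2e} J")             # ~9.0e16 J
print(f"ops per sec = {ops_per_second:.2e}")  # ~5.4e50, i.e. of order 10^51
```

Plugging in 1 kilogram gives about 5 × 10^50 operations per second, consistent with the quoted figure.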
But processing speed is only half of the story. If you really want to speed up your computer, Lloyd says, you must also slash the time it takes to communicate with itself—that is, to transfer information back and forth. The trick, he says, is to squeeze the computer down to the most compact possible size. Lloyd shows that a computer made of the most compressed matter in the universe—a black hole—would calculate as fast as a plasma computer. It would also communicate in precisely the same time that it takes to flip a bit—the hallmark of the ideal computer. Coincidence? Perhaps not, Lloyd says: “Something really deep might be going on.”
At present, scientists have no idea how to turn a laptop into a black hole (Windows 98 jokes aside). But Laflamme says that just thinking about such extreme scenarios might illuminate deep physical mysteries such as black holes. “It's not just what insight physics brings to information theory, but what information theory brings to physics,” he says. “I hope that, in the next 10 or 15 years, a lot of insight into physics will be due to quantum computing.”
6. ASTROPHYSICS
# Neutron Stars Imply Relativity's a Drag
1. Govert Schilling*
1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.
Matter warps space; space guides matter. That, in a nutshell, is Einstein's general theory of relativity. Now three astronomers in Amsterdam may have confirmed a much subtler prediction of Einstein's: warped space-time with a twist.
The general theory explains how the sun's gravity curves the surrounding space (actually space-time), bending nearby light waves and altering the orbit of Mercury. The new finding, based on x-rays from distant neutron stars, could be the first clear evidence of a weird relativistic effect called frame dragging, in which a heavy chunk of spinning matter wrenches the space-time around it like an eggbeater. “This is an extremely interesting and beautiful discovery,” says Luigi Stella of the Astronomical Observatory in Rome, Italy.
Peter Jonker of the University of Amsterdam, the Netherlands, and his colleagues Mariano Méndez (now at the La Plata Observatory in Argentina) and Michiel van der Klis announced their results in the 1 September issue of The Astrophysical Journal. To describe such exotic behavior of space-time, Jonker goes beyond the astrophysicist's standard image of a bowling ball resting on a stiff sheet.
“Frame dragging is comparable to what happens when you cover the ball with Velcro and rotate it,” Jonker says. The effect occurs only in the immediate neighborhood of very massive, swiftly rotating bodies. To study it, astronomers have to observe distant neutron stars—the extremely compact leftovers of supernova explosions, whose near-surface gravity is so strong that they make ideal test-beds for general relativity.
Using data from NASA's Rossi X-ray Timing Explorer, Jonker and his colleagues found circumstantial evidence for frame dragging in the flickering of three neutron stars in binary systems. The flickering spans a wide range of x-ray frequencies. According to theoretician Frederick Lamb of the University of Illinois, Urbana-Champaign, the most prominent “quasi-periodic oscillations” probably come from orbiting gas that a neutron star tears off its normal-star companion. The hot gas accretes into a whirling disk and gives off x-rays as it spirals toward the neutron star's surface at almost the speed of light.
The new evidence comes in the form of less prominent peaks close to one of the main frequency peaks. These so-called sidebands showed up only after the researchers carefully combined almost 5 years' worth of data. The Amsterdam astronomers say the peaks could be due to frame dragging, which would cause the accretion disk to wobble like a Frisbee. The wobble frequency would imprint itself on the main frequency peak, just as amplitude modulations do on the carrier wave of a radio broadcast.
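The radio analogy is easy to reproduce numerically. The following toy sketch (a generic illustration of amplitude modulation, not an analysis of the Rossi data) multiplies a fast "carrier" oscillation by a slow wobble and shows that the spectrum acquires sidebands offset from the carrier by the wobble frequency; all numbers are arbitrary:

```python
import numpy as np

# Toy illustration: a slow "wobble" that amplitude-modulates a faster
# "carrier" oscillation produces sidebands at carrier +/- wobble frequency.
fs = 2000.0                        # Hz, sample rate
t = np.arange(0, 10.0, 1.0 / fs)   # 10 seconds of samples
f_carrier, f_wobble = 300.0, 20.0  # Hz, arbitrary choices

signal = (1.0 + 0.5 * np.cos(2 * np.pi * f_wobble * t)) \
         * np.cos(2 * np.pi * f_carrier * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

# The three strongest spectral peaks sit at 280, 300, and 320 Hz:
# the carrier flanked by two sidebands separated by the wobble frequency.
strongest = np.sort(freqs[np.argsort(spectrum)[-3:]])
print(strongest)
```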
Some physicists, however, are unconvinced. Lamb says calculations done with his Illinois colleague, Draza Markovic, show that the frequency separation between the main signal and the sidebands is probably too large for the sidebands to have been caused by frame dragging. A similar false alarm occurred 3 years ago, he says, when Stella and Mario Vietri of the Third University of Rome cited a low-frequency, 60-hertz x-ray flicker in a couple of neutron stars as evidence of frame dragging (Science, 7 November 1997, p. 1012). The frequency of that earlier flicker clashed with theoretical calculations by Lamb's group and others. Lamb suspects that the flicker arises from a neutron star's intense magnetic field interacting with the accretion disk. Although the sidebands aren't as far out of step with theory, he says, “it's unlikely that [they] are produced by frame dragging.”
Even so, the sidebands are “a very important result,” Lamb says. “The discovery of sidebands is a real breakthrough, regardless of what causes them. This may be the key to unlocking what is generating the main oscillations.” They may also provide information on the mass, the radius, and the physical makeup of neutron stars.
But Stella says frame dragging can't be so lightly dismissed. Taken as a whole, he says, the sidebands and his earlier evidence “fall together in a very nice fashion. The frequency differences pose no problem at all.” Indeed, in a paper submitted to The Astrophysical Journal, Dimitrios Psaltis of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, presents a model of a relativistically oscillating disk that overcomes the frequency problem.
The Amsterdam astronomers hope to use the Rossi satellite to study the neutron stars in more detail and look for sidebands in other sources. If the sidebands are indeed caused by frame dragging, Van der Klis explains, their frequency should shift along with that of the main oscillation in a specific way that will provide a decisive test of the hypothesis. “In principle,” he says, “these kinds of observations could prove Einstein right or wrong.”
7. ECOLOGY
# Forest Fire Plan Kindles Debate
1. John S. MacNeil
Forest fires burning in the western United States have already scorched over 2.5 million hectares this summer. Now a federal proposal to prevent them by paying loggers to cut smaller trees is generating heat among ecologists, who say the approach may not be right for all forests—or all fires.
Leaders of western states have sharply criticized the Clinton Administration for not doing enough to prevent the blazes, the worst in nearly a century. They say that recent policies, including suppressing wildfires and logging only mature trees, have allowed western forests to grow unnaturally dense with young trees and made them more vulnerable to fire.
Reacting to that criticism, the Administration said last week that it will soon release a plan to dramatically expand an experimental approach to fire prevention that emphasizes aggressive cutting of smaller trees. Although officials of the Interior and Agriculture departments are still working out the plan's details, it is expected to include paying loggers nearly $825 million a year to remove trees too small to be commercially valuable from 16 million hectares of western forests.
The plan draws heavily from insights into fire control on federally managed lands made by ecologist Wallace Covington of the Ecological Restoration Institute at Northern Arizona University in Flagstaff. In one case, for example, the Forest Service paid professional loggers to remove 90% of the trees from a 36-hectare swath of low-altitude ponderosa pine in the Kaibab National Forest near Flagstaff. When a wildfire unexpectedly swept through the area last June, it burned the sparsely populated stand far less severely than the denser surrounding forest. Pete Fulé, a member of Covington's team, says that drastic thinning of the plot is the reason. With less fuel, the flames could no longer leap from treetop to treetop, he says, and when the fire spread along the ground it ignited only the underbrush. Mechanical cutting is necessary, Fulé says, because thinning forests with controlled burns “has not proven effective, at least in many instances.”
But environmentalists say the widespread logging would harm forests, not help them. And some scientists say other combinations of cuts and burns may achieve the same results with less disruption. Covington's approach “doesn't use as wide an array of possible tools as we're using,” says Phil Weatherspoon of the Forest Service's Pacific Southwest Research Station in Redding, California. He is involved in an 11-site project that is examining various fire prevention schemes, from mechanical cutting alone to just prescriptive burns. Forest managers, he says, should get data on the potential costs and ecological consequences of various approaches before proceeding.
Heavy thinning also may not address other causes of the recent fires, says Bill Baker, a geographer at the University of Wyoming in Laramie. Before settlers began grazing livestock in western forests, he notes, grasses competed with the young trees that now clog the landscape. “What's missing [from Covington's approach] is an emphasis on restoring grasses,” says Baker. “Without it I don't think it's going to work.” And Tom Swetnam, an ecologist at the University of Arizona in Tucson, thinks hot, dry weather brought on by La Niña climate patterns may have contributed to the severity of this year's fires—not just the accumulation of combustible young trees. As a result, he says, “there is some danger that [Covington's model] might be overextrapolated in the West.”
Covington and his supporters agree that it would be a mistake to treat all forests the same. “We've got a score of forests, all of which burn differently,” says Steve Pyne, an environmental historian at the University of Arizona who is involved with Covington's project.
But Pyne defends the Arizona site as representative of a common western ecosystem. “I think we understand why [ponderosa pine forests] are burning and what to do about it,” says Pyne. Despite their disagreements, both sides say that federal officials need to do more to prevent future wildfires. “The problem is not that we're doing too much, but that we're not doing enough,” says Craig Allen, an ecologist with the U.S. Geological Survey in Los Alamos, New Mexico. The challenge is to come up with a plan flexible enough to fit all the nation's hot spots.
8. SEDIMENTARY GEOLOGY
# Homegrown Quartz Muddies the Water
1. Erik Stokstad
Next to volcanoes or earthquakes, mudstones are hardly a glamorous subject for geologists. But these widespread strata are an important source of hydrocarbons that migrate into petroleum deposits, and they can reveal much about Earth's history—if they are read correctly. Now a team of geologists has found that a telling feature of many mudstones may have been misinterpreted, throwing into question conclusions about everything from climate to ocean currents.
Mudstone consists mostly of clay, washed from the land to the sea. It also contains fine grains of quartz. The size and distribution of these grains can reveal how far they traveled from shore, the strength of the currents that carried them, or even whether they took an airborne journey from a desert. Such inferences assume that quartz silt, like the clay, came from the continents.
However, Jürgen Schieber of the University of Texas, Arlington, and his colleagues show in this week's issue of Nature that in some mudstones, most if not all of the quartz silt may have formed in place, probably from the dissolved remains of silica-bearing organisms. If this kind of homegrown, or authigenic, quartz silt is common, geologists may need to reexamine some of their reconstructions of past environments, including climate.
A new “silica sink” could also affect the calculations of how much dissolved silica drifts between mudstone and sandstone. This migration is a prime concern of petroleum geologists, because silica can plug up the pores in rock that might otherwise hold oil. The finding “makes life more complicated,” says Kitty Milliken, a geologist at the University of Texas, Austin, who studies mudstones, “but it gives us the tools to be clear and figure it out.”
The main evidence for the local origin of quartz silt comes from an analogy with authigenic quartz sand that Schieber observed several years ago. The quartz had precipitated inside sand-sized, hollow algal cysts—tough, protective bodies that algae commonly form when they reproduce. These cysts had been partially compressed by overlying sediment, leaving them with characteristic dents and projections. The same shapes turned up in quartz silt when Schieber and Dave Krinsley of the University of Texas and the University of Oregon examined slices of late Devonian (370-million-year-old) laminated mudstone, called black shales, from the eastern United States. The grains have concentric rings that look as if they were precipitated sequentially. Bordering the quartz grains are amber-colored rims that resemble the walls of algal cysts. Taken together, these characteristics distinguish authigenic from continental quartz, Schieber says.
To double-check the diagnosis of authigenesis, Schieber and Lee Riciputi of Oak Ridge National Laboratory in Oak Ridge, Tennessee, focused an ion microprobe at quartz silt in the shale samples.
Quartz silt they had pegged as authigenic from its appearance had oxygen isotope values typical of other kinds of quartz precipitated at low temperatures—and three times higher than that of quartz silt that was not homegrown. They knew that this “imported” quartz had come from metamorphic rocks in distant mountains, because it has a mottled texture typical of metamorphic quartz.
What's most surprising, experts say, is the amount of authigenic quartz in these shales. In some samples, Schieber found that all the silt had grown in place. By volume, the authigenic silt may make up 40% of the shale.
The presence of so much homegrown silt may have skewed geological interpretations of mudstone, Schieber says. Mistaking authigenic quartz silt for windborne silt, for example, might lead one to postulate desertlike conditions on land, when in fact the climate may not have been particularly dry. Authigenic quartz could also make it hard to estimate distance from the ancient shore, especially in broad expanses of mudstone that accumulated slowly, such as the late Devonian shales of North America.
How important these findings are depends in part on whether other times and places typically produced shales similarly rich in homegrown quartz. Lee Kump, a geochemist at Pennsylvania State University, University Park, points out that algal cysts tend to be most abundant during particular periods, such as times of stressful environmental conditions, so fewer of these hosts may be deposited in mudstone during happy times. Schieber believes that quartz grains might form in other fossil pores or the spaces between particles. In any case, he's already shown that the truth behind even the most ordinary rocks can be clear as mud.
9. MOLECULAR STRUCTURE
# Physicists Glimpse How Quasicrystals Boogie
1. Mark Sincell*
1. Mark Sincell is a science writer in Houston.
If you have ever tapped a fine wineglass with a fork, you know crystals sing. Now, scientists have proved that quasicrystals, the slightly unpredictable cousins of crystals, can also dance. A new series of rapid-fire photographs has finally captured the expected do-si-do of atoms in the changing latticework of a quasicrystal. Although scientists had observed defects in quasicrystalline structures left behind by the flip-flops, called phasons, this is the first time anyone has spotted a real phason in action.
Unlike humans, molecules shiver less when they get cold. And as the molecules chill out, they are more amenable to bonding with their neighbors. The usual result is a crystal—a periodic pattern of identical clusters of atoms, in which every distance is an exact multiple of the size of the fundamental atomic cluster. It is an elegant picture, and for more than 150 years scientists believed that crystallization was the inevitable result of dropping temperatures. They were wrong.
In 1985, Danny Schectman of the Technion-Israel Institute of Technology in Haifa, Israel, discovered an aluminum alloy that cools to form a stable quasi-periodic structure that never exactly repeats. He called the structure a quasicrystal. In contrast to crystals, a quasicrystal has two length scales, says physicist Michael Widom of Carnegie Mellon University in Pittsburgh, Pennsylvania. Some quasicrystals, for example, mix two distinct three-dimensional structures, one hexagonal, the other pentagonal.
Quasicrystals know how to jump and jive. If you pluck one of the wires of a regular crystal, a vibration called a phonon hums through the entire crystal.
The single crystalline length scale implies that the phonon is the only possible distortion of the crystal. Extending the connection between length scales and distortions to quasicrystals, theorists predicted that quasicrystals support an extra kind of oscillation called a phason. Phasons rearrange the quasicrystal structures by making individual atoms jump as much as a few angstroms. But no one had ever seen the wiggles caused by a passing phason.
Now, physicist Keiichi Edagawa and his collaborators at the University of Tokyo have for the first time used a high-resolution electron-tunneling microscope to capture the metamorphosis of a quasicrystal on film. They first heated an aluminum-copper-cobalt mixture to 1173 degrees Celsius, then cooled it to room temperature to form a quasicrystal of interlocking hexagonal and pentagonal rhombi. A series of photographs revealed a column of atoms jumping approximately 1 nanometer, the team reports in the 21 August Physical Review Letters. The jump changes a hexagonal rhombus to a pentagonal one and makes an adjacent pentagonal rhombus become hexagonal. Within minutes, the column jumps back and flips the rhombi back to the original configuration.
“This is a breakthrough, because we can now see the dynamical effects of phasons,” says physicist Paul Steinhardt of Princeton University. But it leaves an important question unanswered: Why do quasicrystals form? Most scientists believe that quasicrystals are the lowest available energy state, so cooling molecules must eventually settle into that state, just as a marble must roll to the bottom of a bowl. Widom, on the other hand, supports the so-called “entropy model” that says quasicrystals continuously flip through a nearly infinite number of equally likely and constantly changing configurations. The new imaging technique may help scientists decide between the two.
10. EPIDEMIOLOGY
# Tracking the Human Fallout From 'Mad Cow Disease'
1. Michael Balter
An Edinburgh task force studies cases of variant Creutzfeldt-Jakob disease, trying to find out just how the patients got infected and how many of them there may ultimately be
EDINBURGH, SCOTLAND—When neurologist Andrea Lowman is called in on a case, the news is seldom good. The patient she had come to see earlier this summer was no exception. A young woman in her early 20s had been admitted to a hospital in England after her speech became increasingly slurred and she began having difficulty walking. By the time Lowman examined her, she was almost totally incoherent, her body jerked with involuntary movements, and she was suffering from ataxia, a loss of motor coordination.
After looking over the young woman's medical charts and talking with her parents—who were keeping a sorrowful vigil by their daughter's bedside—Lowman confirmed the preliminary diagnosis the woman's own physician had arrived at: Creutzfeldt-Jakob disease (CJD), an incurable malady of the brain and nervous system. Moreover, because of the patient's youth and the pattern of her symptoms, Lowman suspected that she was suffering from a new form of the affliction—called variant CJD (vCJD)—which has been linked to eating beef or other products from cattle infected with bovine spongiform encephalopathy (BSE), or “mad cow disease.”
Two or three times each week, Lowman travels from her office at the National CJD Surveillance Unit in Edinburgh to visit another suspected victim of CJD. U.K. health authorities created the unit in May 1990 in the wake of the BSE epidemic, which erupted in the mid-1980s and affected thousands of cattle each year for more than a decade. BSE had been linked to an abnormal, apparently infectious protein called a prion, which may have entered the bovine food chain when ground-up carcasses of prion-infected sheep were included in animal feed. And despite the insistence at the time by agricultural officials and farm industry organizations that British beef was safe, health experts were worried that the disease might spread to humans—a nightmarish possibility that came true in 1996 when the surveillance unit reported the first cases of vCJD.
In the years since, the unit has continued to study the vCJD epidemic closely, looking for clues about exactly how the disease was transmitted to humans. On her travels across the United Kingdom, for example, Lowman is accompanied by a research nurse, who asks the patients' families detailed questions about what their relatives ate, down to the brand of baby food they consumed. This job has only increased in importance as the death toll continues to climb.
During the past few weeks, the team's work has been making new headlines. In the 5 August issue of The Lancet, the researchers, along with other U.K. collaborators, reported for the first time that they are seeing a real increase in vCJD incidence, amounting to a 23% annual rise between 1994 and the present. The number of confirmed or probable vCJD cases in the United Kingdom is still relatively small—a total of 80 as Science went to press—but “this is the first time we have had good statistical evidence of an upward trend,” says neurologist Robert Will, the surveillance unit's director.
Where that upward trend will ultimately lead is, however, highly uncertain. A new estimate by epidemiologist Roy Anderson's team at Oxford University, published in the 10 August issue of Nature, now puts the maximum number at 136,000, far less than their previous estimate of 500,000—and, the authors note, the actual toll could turn out to be much lower. Equally unclear is the exact source of those infections. Although most scientists believe that human consumption of BSE-contaminated meat products is the most likely explanation for the rise of vCJD, they are still unsure about which products were responsible. Some researchers are now hoping that an unusual “cluster” of five vCJD cases centered on the Leicestershire County town of Queniborough, which is currently under intense scrutiny by epidemiologists, will provide some answers. Knowing what kinds of food products were infected “might be important for correctly modeling the epidemic and knowing how many cases to expect,” says Philip Monk, the county's public health consultant.
## Watching and waiting
When the Edinburgh team, which is funded by the U.K. Department of Health and Scotland's Executive Health Department, was formed, there were as yet no signs that BSE had infected humans. But health experts had good reason to be concerned. They already knew that BSE-infected cattle had been slaughtered for food—indeed, some 750,000 infected animals eventually entered the human food chain. And research during the previous decade had strongly implicated prions in some human neurodegenerative diseases such as kuru, a CJD-like disease discovered in the Fore people of New Guinea and thought to be transmitted directly or indirectly through cannibalism.
The government asked Will, one of the United Kingdom's leading experts on CJD, to head the new unit. He recruited James Ironside, a highly respected neuropathologist, to join him, and together with a small team of staff and consultants the pair set about monitoring every case of CJD or CJD-like symptoms in the country. “The aim was to look at the incidence and pathological features of CJD in the U.K.,” says Ironside. “We wanted to see if anything was changing that might be attributable to BSE. But at that stage we had no idea of what we might be looking for—an increase in typical cases, a different type of disease, or nothing at all.”
For 5 long years the team watched and waited, logging in more than 200 cases of CJD. But every case turned out to be a previously recognized variety of the disease. Most were the so-called “sporadic” form, which has no known cause and usually appears in older patients.
Then, in late 1995, the vigilance paid off. From the nationwide network of neurologists and pathologists Will and Ironside had organized, they learned that two teenagers had been diagnosed with CJD, followed soon afterward by a case of CJD in a 29-year-old patient. These cases were striking for a number of reasons. The patients were unusually young. They showed an atypical clinical pattern, including psychiatric symptoms and ataxia very early in the course of the disease. And microscopic examination of their brain tissue revealed that it was studded with clumped deposits of prion protein, called “florid plaques,” reminiscent of those seen in kuru and very distinct from the more diffuse pattern of brain damage usually seen in sporadic CJD.
By 6 April 1996, when the surveillance unit and its collaborators published this bad news in The Lancet, 10 cases of vCJD had been identified. The onset of a new disease hard on the heels of the BSE epidemic, and at that time restricted to the United Kingdom (although there are now several vCJD cases in France), led the researchers to conclude that infection with BSE was “the most plausible interpretation” of the findings. This view soon received considerable support when researchers at the Institute for Animal Health in Edinburgh reported that the prion strain apparently responsible for vCJD was nearly identical to that identified in cattle infected with BSE.
## Sticking to the data
The news that humans had likely been infected with BSE hit the United Kingdom like a bombshell. It led to the near-bankruptcy of the British cattle industry and was a key factor in the defeat of the Conservative government, which had generally downplayed the danger from BSE, by the Labor Party in the 1997 parliamentary election.
With the media frenzy and occasional public panic swirling around them, Will and his team have painstakingly collected the data needed to shed light on how the epidemic got started and where it may be going. Simon Cousens, a statistician at the London School of Hygiene and Tropical Medicine who collaborates closely with the surveillance unit, describes the team as constantly walking a tightrope between “scare mongering and creating panic, or being accused of covering things up.” The team has consistently shied away from making predictions about the future course of the epidemic, preferring to stick to the data it already has in hand and taking care not to exaggerate the numbers.
So far, says Will, “there are more farmers who have committed suicide because of vCJD than people who have actually been victims of the disease.”
The study reported last month in The Lancet, which concludes that the incidence is going up, is based on a statistical reanalysis of existing data, using the date of onset of disease rather than date of death to define when the case occurred. Because some patients live longer than others after diagnosis, this provides a more sensitive indicator of vCJD incidence, says surveillance unit epidemiologist Hester Ward. As for making projections of the eventual case toll, Ward says, “I don't think we will be able to tell the size of the epidemic until we've reached the peak and started coming down.”
Those researchers bold enough to make projections, such as Anderson's Oxford team, have had to continually adjust their figures. The researchers, who had earlier predicted by mathematical modeling a maximum toll of 500,000 cases, have now capped their estimate at 136,000 over the coming several decades—while emphasizing that the real numbers will probably be much lower. In making their predictions, the team assumes that the slaughtering of infected herds and other safeguards have put a stop to new human infections with the BSE prion. And the maximum estimate of 136,000, says Oxford mathematical biologist Neil Ferguson, is based on another assumption: that the incubation period for vCJD—that is, the time between initial prion infection and the development of symptoms—is 60 years or more. But this, he adds, is highly unlikely. “We can't say what the incubation period really is, but it is unheard of that a disease has an incubation period that long,” Ferguson says. A more realistic maximum is likely to be about 10,000 cases.
Yet, although the number of potential cases might be lower than once feared, researchers remain determined to try to solve the riddles posed by vCJD. In particular, they want to know why the disease occurs almost entirely in younger people—the average age of the victims identified so far is some 30 years less than that for sporadic CJD—and what food products might have transmitted it. So far, the only clue is the finding that vCJD incidence in the northern half of the United Kingdom is about twice that in the south. “We have no explanation for this,” says Ward. However, the team is considering a number of hypotheses, including the possibility that northerners eat more “mechanically recovered meat,” a major ingredient in products such as hotdogs and sausages—and a suspected source of BSE infection because it contains much more nervous-system tissue than would be found in a nicely trimmed steak.
New hope of getting an answer has been raised by a cluster of five vCJD cases diagnosed over the past few years in people living either in the town of Queniborough or within a 5-kilometer radius of it. Such clusters are the meat and potatoes of epidemiological work, because they provide researchers with the opportunity to identify risk factors common to all the cases. A previous suspected cluster, in Kent County, evaporated when it turned out to be due only to chance. But the cluster in Queniborough—a town of only 3000 people—seems different. “The probability of getting that many cases so close together in that size population by chance is extremely small, about 1 in 500,” says Cousens. “These cases are linked in some way.” Even so, identifying the source of these infections may be difficult.
Although the families of the victims have been given the surveillance unit's standard questionnaire, Will says that “trying to get dietary habits secondhand from relatives is notoriously unreliable. There is a potential for bias in the study. Everyone knows the hypothesis we are testing”—that meat or meat products were responsible. Nevertheless, Monk told Science, he has developed his own hypothesis about the source of infection in the town, which he declines to state publicly at this point to avoid bias in the study. Monk is now testing his hypothesis by asking every parent in Queniborough with children aged 19 to 35 to fill out a new questionnaire about what they fed their offspring between 1975 and 1990, the period during which most exposure to BSE is likely to have taken place. “I am confident that we will find the link between these cases,” he says.
Will says that although this knowledge would come too late to help victims of vCJD, it could be important to their families, many of whom are worried that the brothers and sisters of their stricken children might have eaten the same products and thus also face a risk of dying from the disease. And this information might help Lowman comfort the distraught family members she sees each week, by convincing them that they could not possibly have known that the food they gave their offspring was infected. “The parents often feel very guilty,” Lowman says. “They are terribly upset that they might have exposed their own children to something that made them ill.”
11. TEACHER TRAINING
# How to Produce Better Math and Science Teachers
1. Jeffrey Mervis
In two new reports on improving science and math education in the United States, National Research Council panels call on universities and school districts to share responsibility for educating teachers and suggest that new Ph.D.s are an untapped source for high school teachers.
## Schools, Universities Told to Forge Links
Universities train most of the nation's science and math teachers. But it's the job of local school districts to ensure that they keep up with their field once they enter the classroom. That bifurcated system needs to be ended, says a new report* from the National Research Council (NRC), if the country hopes to improve student performance in math and science. That message is likely to be repeated next month, sources say, when a high-profile commission issues its recommendations on how to improve the quality of the nation's math and science teachers—and puts a price tag on the reforms.
“Universities have to attract students to their education departments, but after they graduate and find jobs as teachers they are no longer a client of the university,” says panel member Mark Saul, a teacher at Bronxville High School outside New York City and an adjunct professor of mathematics at City College of New York. “And school administrators have to deal with so many noneducational crises that they're happy if the kids are in their seats and there's a licensed teacher in each room. As a result, attention to the actual act of instruction gets lost.”
The NRC panel says that the best way to improve teacher education is to make it a continuum, with school districts taking more responsibility for the initial preparation of new teachers and university faculty playing a bigger role in ongoing professional development. The change will require both sectors to work together more closely.
It also recommends that universities improve the content of undergraduate science and math courses for prospective teachers, model appropriate practices for teaching those subjects, and do more research on the art of teaching and how students learn. In turn, school districts should make better use of teachers who have mastered these skills, giving them more opportunities to share their knowledge with their colleagues and with student teachers.
Such a partnership already exists in Maryland, notes panelist Martin Johnson, a professor of mathematics education at the University of Maryland, College Park, in the form of four Professional Development Schools (PDSs). PDSs bring together prospective teachers and experienced staff in a formal arrangement that goes beyond both regular student teaching and standard after-school workshops. “In the past, we would send students to a school and they'd be assigned to one teacher,” says Johnson. “We're asking the school to incorporate the student teacher into a broader range of experiences, with input from other faculty members as well as other teachers.”
Jim Lewis, head of the math department at the University of Nebraska, Lincoln, and co-chair of the NRC committee, compares this approach to training doctors. “Medical students take courses from both research and clinical faculty,” he explains, “and their residencies are overseen by practicing physicians. Likewise, an experienced classroom teacher may be a better mentor [to a prospective teacher] than an education professor who focuses on research.” That shift, says Lewis, will allow research faculty to devote more attention to helping experienced teachers stay on top of their field through advanced courses, summer research projects, and other professional activities.
The National Science Foundation, which paid $425,000 for the report and two related activities, has already begun to support the types of partnerships the NRC panel calls for. It has asked for $20 million next year to expand a program on university-based Centers for Learning and Teaching with teacher training as one of three primary foci.
The NRC report also dovetails with the pending recommendations of a blue-ribbon federal commission headed by former U.S. senator and astronaut John Glenn. “I was struck by the amount of overlap,” says Linda Rosen, executive secretary to the commission, whose report is due out on 3 October (www.ed.gov/americacounts/glenn/toc.html). “There's a growing sense that we have to break down the barriers between elementary and secondary schools and higher education and bring all the available talent to bear on the problem of math and science teacher education.” Rosen says the commission will flesh out the NRC's findings “by laying out a set of strategies and price tags that makes clear who needs to do what.”
Although Lewis welcomes the heightened attention on teacher education, he says that reports won't help unless they are backed up by a national consensus that teachers count. “The schools [in Lincoln, Nebraska] start this week, but they'll close early if it gets too hot because they lack air conditioning,” he says. “I'll bet that you work in an air-conditioned building. So why can't teachers? Because we aren't willing to pay what it would cost.”
## Can New Ph.D.s Be Persuaded to Teach?
U.S. schools will need to hire 20,000 math and science teachers a year for the next decade to handle a growing student population and high rates of retirement, according to government estimates.
Where they will come from is anyone's guess, as schools are already having trouble finding qualified people. To help fill the gap, a National Research Council (NRC) committee suggests tapping a talent pool that is relatively underrepresented among teachers: newly minted Ph.D.s. In a report* issued last week, the committee says many more recent science Ph.D.s would be willing to teach high school science and math if the government helped with the transition, if the certification process were compressed, and if they could retain ties to research. The committee recommends that the NRC help states with pilot projects that, if successful, could be expanded nationwide.
But some educators are skeptical, noting that Ph.D.s may not be properly trained and that the research and teaching cultures are very different. “If public schools could place an ad that read: ‘Good salaries, good working conditions, summers off, and tenure after 3 years,’ I think they'd get a good response from graduate students,” says Ronald Morris, a professor of pharmacology at the University of Medicine and Dentistry of New Jersey in Piscataway and chair of the NRC panel, which last summer surveyed 2000 graduate students and postdocs as well as interviewing professional educators. “But most Ph.D.s don't know about the opportunities, because they are generally far removed from the world of K-12 education.”
The report notes that while 36% of respondents say they had considered a K-12 teaching job at some point in their training, only 0.8% of the scientific Ph.D. workforce is actually working in the schools. “That's a significant pool of talent that we're ignoring,” says Morris, who acknowledges that none of his 40 postdocs over the years has chosen to go into high school teaching.
Professional educators, however, warn that several issues must be resolved, including the teaching skills of recent Ph.D.s and how well they would fit into a high school environment. “I think it's a great idea,” says Mike Lach, a high school physics teacher in Chicago who just completed a sabbatical year in Washington, D.C., working on federal legislation to improve math and science teaching (Science, 4 August, p. 713). “But teaching is hard, and those in higher education traditionally don't have much respect for classroom teachers.” Mark Saul, a Ph.D. math teacher in Bronxville, New York, as well as an adjunct professor at City College of New York, puts it this way: “Ph.D.s are a peg with a different shape than the current hole for schoolteachers.”
Morris agrees that high school teaching isn't appropriate for all Ph.D.s. But he believes that an array of incentives, including federally funded fellowships for retraining and summer research projects, might be just the ticket for those looking for a way out of a tight academic job market.
• *Educating Teachers of Science, Mathematics, and Technology: New Practices for the New Millennium, 2000 (national-academies.org).
• *“Attracting Science and Mathematics Ph.D.s to Secondary School Education,” National Academy Press.
12. GENETICS
# Transposons Help Sculpt a Dynamic Genome
1. Anne Simon Moffat
These mobile elements cause considerable reshaping of the genome, which may contribute to evolutionary adaptability
More than 50 years ago, geneticist Barbara McClintock rocked the scientific community with her discovery that maize contains mobile genetic elements, bits of DNA that move about the genome, often causing mutations if they happen to land in functioning genes.
Her findings were considered so outlandish that they were at first dismissed as anomalies unique to corn. But over the years, transposons, as the mobile elements are called, have proved to be nearly universal. They've turned up in species ranging from bacteria to mammals, where their movements have been linked to a variety of mutations, including some that cause diseases and others that add desirable diversity to genomes (Science, 18 August, p. 1152).
Only in the past few years, however, have researchers been able to measure the rate at which transposons alter the composition of genomes, and they are finding that the restructuring they cause is more extensive than previously thought. Researchers have known for about 20 years that transposons can expand the genome, resulting in the repetitive DNA sequences sometimes called “junk,” but the new work indicates that transposons can also contribute to substantial DNA losses. What's more, these changes can be rapid—at least on an evolutionary scale. “The level of genomic dynamism is way beyond what was thought,” says geneticist Susan Wessler of the University of Georgia, Athens.
The rate of transposon-mediated genomic change can vary, however, even among closely related organisms. The findings may thus help explain the so-called “C-value paradox,” the fact that the size of an organism's genome is not correlated with its obvious complexity. Plants, for example, are notorious for having a 1000-fold variation in their genome sizes, ranging from the lean 125-million-base genome of Arabidopsis to the extravagant genome of the ornamental lily Fritillaria, which at 120 billion bases is about 40 times the size of the human genome. There are also hints that the environment can influence transposon activity, which in turn may help an organism adapt to environmental changes.
Until recently, researchers tended to focus on the stability of the genome over evolutionary time. There is ample evidence, for example, that sequences of many key genes, such as those that determine body plan, are conserved across diverse genera. The discovery, about 10 years ago, of synteny, that many genes remain grouped together in the same relative positions in the genome no matter its size, also suggested that genomes were models of stability. The potential for significant fluidity in the genome was largely ignored until a few years ago when a small number of groups began to take a different perspective, using molecular techniques to probe genomes on a large scale.
For example, work done 2 years ago by Purdue University molecular biologist Jeffery Bennetzen and Phillip SanMiguel, who is now at the University of California, Irvine, suggests that maize used amplification of retrotransposons, elements that copy themselves with the aid of RNA, to double its genome size from 1.2 billion to 2.4 billion bases 1 million to 3 million years ago—a very short period in evolutionary time. They based this conclusion on their finding that maize carries many more retrotransposons than its close relative, sorghum. The threat of “genomic obesity” was often mentioned. “It's remarkable the genome doesn't explode,” says Bennetzen.
New work shows that plants have ways of counteracting transposon expansion, however. University of Helsinki retrotransposon specialist Alan Schulman and colleagues at the John Innes Centre in Norwich, U.K., report in the July issue of Genome Research that retrotransposons can also be eliminated from the genome.
The most common retrotransposons in plants carry duplicated sequences on each end called long terminal repeats (LTRs), and these can lead to something called intrachromosomal recombination, in which the LTRs temporarily join up and the DNA between them is excised. When this happens, one of the LTRs is left behind. Schulman and his colleagues analyzed the barley genome for these molecular “scars,” and they found a lot of them, indicating that many transposons had been lost. In a commentary in the same Genome Research issue, molecular biologist Pablo Rabinowicz of Cold Spring Harbor Laboratory in New York says these results suggest that “recombination between LTRs is an efficient way to counteract retrotransposon expansion, at least among certain grasses.” He cautions, however, that it's not clear how widespread the phenomenon is.
Evolutionary biologist Dmitri Petrov, first as a graduate student in the Harvard lab of Daniel Hartl and, most recently, at Stanford University, has also found evidence of significant genome fluidity in insects. In work begun in the mid-1990s, Petrov and his colleagues used the Helena group of transposons from Drosophila virilis and other fruit fly species as tools for studying genomic juggling. By monitoring sequence changes in Helena transposons in eight Drosophila species, the researchers learned that copies of this element lose DNA at a high rate—20 times faster than in mammals. Petrov does not know what causes the shrinkage, although he suggests that it might be due to spontaneous mutations or errors in copying the DNA. But whatever the cause, he says, “I was extremely surprised by the Drosophila data. I thought the rate [of genome loss] would be the same as for mammals.”
That wasn't the only surprise, however. Last February, Petrov, J. Spencer Johnston, an entomologist at Texas A&M University in College Station, and Harvard colleagues showed that Hawaiian crickets (Laupala) lose DNA more than 40 times more slowly than Drosophila does, even though the two insect species are closely related (Science, 11 February, p. 1060). In this work, the researchers used the same analytic technique with a different transposon, Lau1, in nine Laupala species. Because the Laupala genome is 11 times larger than that of Drosophila, Petrov hypothesizes that its slow loss of DNA may account for its bulk. He is now testing whether that idea holds up by measuring the rate of DNA loss in various insects, including flies, ants, butterflies, mosquitoes, damselflies, and grasshoppers.
The big question mark, however, is what does all this genomic restructuring do for the organism? A small genome may be helpful because it can replicate faster, resulting in a faster cell cycle and shorter generation time. But work reported in the 5 June issue of the Proceedings of the National Academy of Sciences by Schulman, along with colleagues at the Agricultural Research Centre in Jokioinen, Finland, and the University of Haifa in Israel, suggests that large genomes may have their own advantages.
The researchers collected specimens of the wild ancestor of cultivated barley from various microclimates in “Evolution Canyon,” Mount Carmel, Israel. When they then looked at the plants' content of a particular type of retrotransposon, called BARE-1, they found that it is up to three times more abundant in barley plants growing at the canyon rim than in those grown near the bottom of the canyon.
Their evidence suggests that this may be because plants at higher elevations lose their transposons more slowly than plants farther down. The fact that plants at the top of the canyon both gain more copies and lose fewer suggests, Schulman says, that the elements may confer some advantage. He and his colleagues speculate that a larger genome, achieved through the ample presence of retrotransposons, may help plants deal with the more stressful high and dry areas of the canyon, for example, by influencing the physiological machinery that enables plants to seek or retain water.
Consistent with this idea, Stanford University plant scientist Virginia Walbot showed last year that shorter wavelength ultraviolet light can activate a particular Mutator transposon in maize pollen, a result that suggests that sunlight, likely more plentiful at higher elevations, may also be an environmental force involved in genomic restructuring.
That remains to be demonstrated, but plant scientists say that Schulman's identification of the BARE-1 element, numerous copies of which exist in the barley genome, as an agent of genomic restructuring opens the way for a new level of experimental studies. One possibility is to test whether plants with more elements are able to thrive in more stressful conditions. Another is to see whether transcription of the BARE-1 element changes under different environmental conditions. Georgia's Wessler says there is now “a clean molecular system to get at the important questions.” The results that come from such studies of BARE-1, and other mobile genetic elements, should help to explain how and why some plants and animals have come to have genomes of extraordinary size, often much larger than that of humans.
13. NEUROSCIENCE
# A Ruckus Over Releasing Images of the Human Brain
1. Eliot Marshall
A plan to have brain scientists deposit data in a public center at Dartmouth has drawn a flurry of objections; researchers are drafting data-sharing principles
For most of this summer, leading brain researchers have been fuming over a plan to force them to share raw data. They became upset when Michael Gazzaniga, a psychologist at Dartmouth College in Hanover, New Hampshire, told researchers publishing functional magnetic resonance images of the brain in the journal he edits—the Journal of Cognitive Neuroscience (JCN)—that they are expected to submit their raw data to a public database he is developing at Dartmouth. They became more agitated when a representative of the Dartmouth database implied that JCN may not act alone: Other editors, he told a meeting of brain mappers, would also insist that authors submit their raw data to Dartmouth.
Those events touched off a rebellion. Galvanized by the Dartmouth project, brain scientists have spent the past 10 weeks e-mailing one another and organizing detailed responses. They complain that the Dartmouth archive—which is getting under way this fall—is not ready for prime time. They warn that if the project goes forward as planned, it could compromise the privacy of research subjects, get tangled up in technical knots, and rob authors of the credit they deserve. But even as they rattle off these complaints, a few brain scientists also concede that Gazzaniga's preemptive move may have done some good: It has got everyone talking about how to build a public database that really works. Such a database would be useful for combining results from different studies.
Last month, the Organization for Human Brain Mapping (OHBM)—a coalition of scientists around the world interested in imaging the brain—responded to the commotion by establishing a task force under the leadership of Jonathan Cohen, a psychologist at Princeton University. His task: Elicit a consensus and draw up a set of data-sharing “guidelines” supported by the entire field. This will be their response to the Dartmouth initiative, laying down ground rules for cooperation. “For the journals,” says Cohen, “we want a list of things they might want to consider before they decide to endorse any database.” For authors, the panel will try to establish guidelines on such incendiary issues as how long it's reasonable to withhold data. Cohen plans to have a draft ready for review by the OHBM executive council in “late October,” before the Society for Neuroscience meeting in November. Many leaders in the brain-imaging community say the task force will have a tough job finding an approach to data-sharing that people can agree on. The complexities of reporting experimental results from brain scans, they note, are greater than in fields such as genome sequencing and crystallography, where the experimental protocols are standardized and the data are far more concrete. Many feel that the Dartmouth group doesn't appreciate these difficulties. According to one prominent leader who requested anonymity: “It was a political tour de force that they got the money [to establish the database],” but “they're totally clueless about what they're up against. Hopefully, they're learning.” The scientists who started the rumpus seem to be taking the flak in stride. Gazzaniga, a founder of the Cognitive Neuroscience Society and reputed by peers to be a scientific impresario and skilled fund-raiser, says: “I actually was blindsided by this whole thing. I was talking to people who think this is a great idea and were trying to help make it work. Then, bingo, we get the other side.” Although he has recently softened his demand for immediate data release, he says friends have advised him that the backlash he's seeing is normal: “People yell and scream and demand a hold on the data,” he says, and “I understand their concerns. … There will be a few bumps and noises, and then it will smooth out.” Marcus Raichle, a brain-imaging researcher at Washington University in St. Louis and chair of Gazzaniga's database advisory board, adds that the government “has provided the money for us to generate this valuable data, and it ought to be used in the most efficacious way. … If the people doing the human genome and chemists and others do this kind of databasing, we should be doing it as well.” ## Build it, but who will come? The Dartmouth project began, Gazzaniga says, when he seized an opportunity to fund an old idea. The notion of creating a shared archive of brain-imaging data “had been kicking around the community for a long time, and nobody was doing anything about it,” according to Gazzaniga. When the National Science Foundation (NSF) showed an interest in making “infrastructure” grants to beef up the biology end of social and cognitive science, Gazzaniga moved. He proposed a public archive of magnetic resonance imaging (MRI) of the human brain. After clearing an NSF technical review, the project won a 5-year,$4.5 million grant, including a small contribution from the National Institute of Mental Health, and an additional \$1 million from the Keck Foundation (Science, 29 October 1999, p. 880).
Computer scientists are enthusiastic about the project, Gazzaniga says. They believe they can use the archive to “come up with new ways to do meta-analyses, new ways of mining the data” to discover connections in the brain that aren't detectable in a single experiment or set of studies. Gazzaniga also says graduate students at universities that can't afford to run a sophisticated brain-scanning laboratory will be able to tap into and use high-quality data at the new center.
Money in hand, Dartmouth assembled the machines and the staff in 1999, and Gazzaniga prepared to launch the National Functional MRI Data Center (NfMRIDC) in the fall of 2000. But when Gazzaniga asked for submissions, many scientists balked, arguing that the whole project was premature. The field hasn't even agreed on a standard format for reporting data, they say.
Cohen and others note that archiving has long been a “knotty issue.” OHBM members have sparred over proposals for a single data file format, and a decade-old effort—a consensus brain map begun by neuroscientist Peter Fox at the University of Texas Health Sciences Center in San Antonio—has had difficulty getting useful input. Cohen, for example, says that because of these challenges, the Texas project “has not been an unmitigated success.” Images are often made to assess brain changes in subjects performing various behavioral tasks, and one U.S. government researcher who asked not to be named says: “The big problem was how to describe the behavioral task in sufficient detail that the data would be meaningful.”
John Mazziotta, editor of the journal NeuroImage and leader of another consensus-building effort called the Probabilistic Atlas of the Human Brain at the University of California, Los Angeles (UCLA), agrees that “we need technical tools first” before creating a common database. For 7 years, he says, his group and other major brain-imaging centers have been trying to create a toolkit to describe the architecture of the brain. “It still isn't ready,” he concedes. He notes that even within a lab, there are great variations in the behavior examined, the types of stimuli used, the methods of recording responses, and the analytical software used.
Dartmouth's solution to the compatibility problem is to finesse it, at least for now. Staff engineer Jeff Woodward says the database will receive data in any format authors want to offer. “Methods of converting from one format to another are pretty well known,” Woodward says, and the center will convert archived files to the format requested by the user. “At this point, we don't want to try to impose any standard,” he adds, as the technology is changing so rapidly.
## Compulsory sharing?
The skirmishing over technical standards pales in comparison to the fighting over whether authors should be compelled to release their raw data to a database. Raichle believes that past efforts like the Texas project suffered because data submission was “totally voluntary.” He likes Gazzaniga's solution: Ask everyone to adhere to a new norm of releasing their data to the archive as a condition of getting a paper published.
To advance this policy, Gazzaniga says, he consulted leading journal editors by e-mail. He says most responded favorably. And to set an example, he adopted the policy for JCN. He commissioned a dozen papers by leading researchers for a special edition of JCN and asked authors to submit supporting data to the NfMRIDC. All agreed. Gazzaniga also wrote to recently published JCN authors inviting them to submit source data.
One of those who received Gazzaniga's invitation, Isabel Gauthier, a psychologist at Vanderbilt University in Nashville, Tennessee, responded with a public dissent. She and about 40 colleagues co-signed a letter to leading journals opposing release of data on publication. (Gauthier's letter and responses from Gazzaniga and others are on her Web site, www.psy.vanderbilt.edu/faculty/gauthier/fmridc_letter.html)
Gauthier stresses the author's right to control her own work, noting in her letter that the raw data from a set of experiments may produce more than one paper and shouldn't be released with the first publication. “The nature of fMRI data,” Gauthier writes, is that it's hard to separate what's “relevant to a published paper from data that is destined to another manuscript.” She argues that authors should decide when data are made public.
Gazzaniga's hope that other journals would follow JCN's lead was already beginning to dissolve. When computer scientist Javed Aslam of the Dartmouth center briefed a group of brain mappers in Bethesda, Maryland, in June, he said that major journals endorsed Gazzaniga's data-release policy. But two journal editors in the room got up, according to scientists present, and said they'd never heard of it.
Other editors, including Nature Editor Philip Campbell and Science Editor-in-Chief Donald Kennedy, after receiving petitions from brain mappers, have decided to avoid any fixed policy for now. Kennedy says: “We have not endorsed the JCN policy, nor is data release required for publication in Science. We … have decided to wait for a consensus to develop in the imaging community. …” Campbell has written that the Nature journals do not have “any immediate intention of imposing conditions of deposition on fMRI data,” as this would be “premature.” Arthur Toga, Mazziotta's colleague at UCLA and an editor of NeuroImage, adds: “Any individual or autocratic suggestion as to how this should be done is absurd. … We live for the people who read the journal” and wouldn't try to impose unwanted standards.
Gazzaniga has now amended JCN's policy to state that authors may hold their data private for an undetermined amount of time after submitting an article. But he says he has not retreated from the view that the data must be shared after a reasonable delay.
## Seeking a consensus
Over the next few weeks, Cohen's task force will try to determine what the norms should be. Among other issues, the group will consider how to deal with claims that the Dartmouth data-sharing scheme could put personal privacy at risk because raw brain-scanning data can be used to reconstruct a skull surface—even the outlines of a face. Gazzaniga responds that all personal data will be stripped from submissions, and that his team is “working on” a software block that prevents facial reconstruction.
But the lack of a common data format remains a major barrier, one that will not be solved without the cooperation of the entire field. OHBM past president Karl Friston of the Wellcome Department of Cognitive Neuroscience at University College, London, U.K., says that OHBM leaders recognized long ago that establishing analytical comparability is the toughest issue to resolve. He believes that if all researchers had the software needed to analyze experimental results from other laboratories, data sharing would occur spontaneously. For that reason, he says, Cohen and other leaders of OHBM have been working with the National Institutes of Health to create publicly available software tools.
It seems risky to try to create a shared database before a set of common analytical tools is in hand, Cohen says. But for the moment, he must deal with the “acute” issue of deciding whether—and how—the field should help the new Dartmouth data center get under way. And he says he feels a heavy responsibility: His entire field, and people in fields far removed, are watching to see how the brain mappers respond.
14. # Tissue Engineers Build New Bone
1. Robert F. Service
Bone repair may be one of the first major applications of tissue engineering; efforts to encourage the growth of new bone using novel matrices, bone morphogenic proteins, gene therapy, and stem cells are all showing promise
Mending broken or damaged bones is a hit-or-miss business. Orthopedic surgeons have become adept at manipulating, pinning, and immobilizing fractures, giving the body's natural bone-healing processes an opportunity to knit the broken pieces together. In recent decades, they have also learned to graft bone from elsewhere in the body to repair major damage from accidents or disease: Every year doctors in the United States alone perform about 450,000 surgical bone grafts. But some fractures simply refuse to heal, and bone grafting adds to the pain of recovery. At times, this procedure can't even be attempted because “in many patients the quality and quantity of bones you can harvest is not sufficient,” says Scott Bruder, a bone tissue engineering expert at DePuy, a Johnson & Johnson company based in Raynham, Massachusetts. Now, however, many researchers believe bone repair is entering a new era that could make painful grafts and unmended bones a thing of the past.
In several clinical trials now under way or nearing launch, researchers are testing novel ways to replace damaged bone. Research teams, primarily in the United States and Europe, are implanting biomaterials laced with molecular signals designed to trigger the body's own repair mechanisms. They are also culturing a class of bone marrow stem cells—versatile cells that can develop into bone, cartilage, and other tissues—and transplanting them into the damaged area. And they are attempting to repair damage by gene therapy, transfusing cells carrying genes that produce key bone-repair proteins.
These trials mark the latest wave of progress in the burgeoning field of tissue engineering, in which researchers are trying to grow replacement tissues to repair damaged organs such as livers, hearts, and bones. Although the field is still maturing, tissue engineers working with bone are beginning to pull ahead of the pack. “Tissue engineering has made great strides,” says Steven Goldstein, who directs orthopedic research at the University of Michigan, Ann Arbor, “but lots of tissues are not ready for prime time.” That's not the case with bone, says Goldstein: “There has been more success in bone than anyplace else.” Adds David Mooney, a tissue engineer at the University of Michigan, Ann Arbor, “If you compare it to the challenge of engineering a complete internal organ, bone is thought to be realizable in a much nearer time scale.” Tissues such as the kidney and lung consist of numerous cell types that must be arranged in the proper three-dimensional structure and coaxed to express particular genes at different times. Structural tissues such as bone and cartilage are not as complex, Mooney notes. Goldstein adds that because the body naturally replaces, or “remodels,” old bone with new, all that is needed is to get this regenerative process up and running smoothly. “If you can kick off repair, the normal process of remodeling helps you quite a bit,” Goldstein says.
That promise has sparked intense commercial interest in bone engineering. Companies ranging from biotech start-ups to traditional orthopedic powerhouses are jumping into the field. And although most of their efforts remain in the research stage, one company, Stryker Biotech in Hopkinton, Massachusetts, already has a product. It has applied to the Food and Drug Administration (FDA) for approval to market a collagen matrix composite infused with a natural protein that signals bone marrow cells to turn on the process of bone regeneration. Indeed, the commercial stakes are so high that some researchers are worried that patent claims, and a reluctance to test competing technologies in combination, could delay progress in the field.
## Molecular scaffolding
Like civil engineers building a new structure, bone engineers start by erecting scaffolding: They insert a matrix of special material into gaps in bone. This molecular scaffolding lies at the heart of all the new tissue engineering approaches.
Surgeons have used matrices made from materials such as collagen and hydroxyapatite for decades to coax the patient's own cells to colonize the damaged area and form new bone. The technique has been particularly successful in filling small divots, but it often has trouble fixing larger defects, says Mooney. So he and others have been looking for better materials. Antonios Mikos at Rice University in Houston, Michael Yazemski at the Mayo Clinic in Rochester, Minnesota, and their colleagues, for example, have been working on a plastic precursor that can be injected into the repair site, where it quickly polymerizes and hardens into a porous matrix capable of holding new bone cells. As new bone grows in, the plastic matrix breaks down into natural metabolites that are then excreted from the body. Thus far, says Yazemski, work in animals has shown that the biodegradable polymer not only sparks new bone growth over time, but also provides needed mechanical strength and appears fully biocompatible.
Building on such successes, tissue engineers have recently achieved more dramatic results when they give the matrix a helping hand—by seeding it with bone growth factors. The approach owes its early progress to a bit of serendipity. In 1965, Marshall Urist, an orthopedic surgeon at the University of California, Los Angeles (UCLA), was studying how minerals deposit on the collagen-based matrix on which bone naturally forms. When he implanted demineralized fragments of rabbit bone in muscle tissue, he found that new bone was created at the site. Something in the bone matrix itself, it seemed, was coaxing cells in the muscle to start producing new bone at this unusual site. That something turned out to be a class of proteins called bone morphogenic proteins (BMPs). But “it took 25 years to purify [BMPs],” says A. Hari Reddi, the director of the Center for Tissue Regeneration and Repair at the University of California, Davis.
Reddi's lab was one of several that set out to track down these chemical signals. In the mid-1970s, Reddi and his colleagues showed that proteins in natural bone matrix first attract stem cells from the bone marrow, then spur them to proliferate and become bone-producing osteoblasts. A few years later, Reddi's group isolated the first of these proteins, which later came to be known as BMP-7. But it wasn't until 1989 that researchers at Creative Biomolecules in Hopkinton, Massachusetts, cloned the gene for BMP-7, a development that opened the door for researchers to produce a recombinant version of the protein that they could then add to matrix implants. Shortly thereafter, researchers at Genetics Institute in Cambridge, Massachusetts, cloned the gene for BMP-2—a similar cell signal.
These signaling proteins quickly proved that they could kick start the bone-regeneration process. Throughout the early 1990s, researchers at Genetics Institute and Stryker Biotech—which owned the rights to Creative Biomolecules' work with BMP-7 for orthopedic applications—completed a series of animal studies showing that their BMPs seeded on simple collagen matrices prompted rapid healing of bone defects, while similar defects remained unhealed in control animals. Stryker Biotech launched the first human clinical trial in 1992 for troublesome “nonunion” fractures that had not healed in over 9 months. According to Stryker president Jamie Kemler, the trial's results show that implants of BMP-7 on a collagen matrix generate new bone as well as, or better than, autografts of healthy bone transplanted from another part of the patient's body. The company is currently awaiting FDA approval to begin selling its matrices. Genetics Institute, too, is nearing the end of similar clinical trials with BMP-2.
But every great promise has its fine print, and this method of bone building may have limitations, too. Some researchers point out that when BMPs are released naturally by cells, mere nanogram quantities of the proteins per gram of bone matrix are enough to trigger the bone repair cascade. Yet microgram quantities of BMP per gram of matrix material—over six orders of magnitude higher—seem to be needed to produce the same effect with an artificial matrix. Although there are no known health problems associated with such high BMP concentrations, the cost may be high, potentially thousands of dollars per treatment.
## Gene therapy
In an effort to get signaling molecules to the cells they trigger, researchers have turned to a field that has had its problems lately: gene therapy. Gene therapists have had a struggle delivering on the field's early promise in part because cells carrying therapeutic genes express them only for a short time. But short-term expression may be enough for remaking bone, Michigan's Goldstein notes. In a flurry of papers last year, researchers from labs in the United States and Germany reported promising early results. In the July 1999 issue of the Journal of Bone and Joint Surgery, for example, orthopedic surgeon Jay Lieberman and his colleagues at UCLA reported using an adenovirus carrying a gene that produces BMP-2 to transfect bone marrow cells. They then seeded and grew the transfected cells on a demineralized bone matrix, which they implanted into surgically produced gaps in the leg bones of rats. The treated bones healed normally, while those that received control preparations—either with a non-BMP-producing gene or just the matrix alone —did not heal.
Using a simpler approach, Goldstein and his Michigan colleagues have produced similar results in dogs. Instead of using cells infected with a transgenic virus, Goldstein's team uses circular fragments of DNA called plasmids containing a gene that codes for a protein called human parathyroid hormone, which, like BMPs, helps stimulate the natural bone repair cascade. They trap the plasmids in a polymer matrix, which they implant into a surgically made gap in the leg bones of dogs. In the July 1999 issue of Nature Medicine, Goldstein's team reported that surrounding cells picked up the plasmid DNA and expressed it for up to 6 weeks. The treated bones were fully repaired. Again, no effect was seen in control animals. Bone tissue engineering, says Goldstein, “looks to be an area where gene therapy can have one of its earliest, greatest successes.”
Based on this and earlier successes with their plasmid gene therapy approach, the Michigan group formed a San Diego-based start-up called Selective Genetics to move the technique into the clinic. The company says that after showing widespread success in animals, they are gearing up to launch a phase I safety trial of the approach in humans.
## New cell sources
Some researchers worry that these promising techniques may ultimately hit a roadblock: a shortage of stem cells. Although transplanted signaling molecules attract stem cells to the repair site and cause them to differentiate, the supply may not be sufficient to repair major damage. So several groups are trying to supplement natural stem cells with cells grown in culture.
Unlike embryonic stem cells, which can differentiate into any one of the more than 200 cell types in the body, bone marrow stem cells have a more limited repertoire. They are already committed to develop into cells that form a broad class of tissues, including bone, cartilage, and tendons, as well as heart, muscle, and neural tissues. And although they are produced throughout the life of animals, their numbers appear to decline with age, says Arnold Caplan, who directs the Skeletal Research Center at Case Western Reserve University in Cleveland. In newborns, bone marrow stem cells—also called mesenchymal stem cells (MSCs)—account for 1 out of every 10,000 bone marrow cells. That number drops to 1 in 100,000 in teens, 1 in 400,000 in 50-year-olds, and 1 in 1 million to 2 million in 80-year-olds.
That's bad news for anyone who has lost large sections of bone in an accident or through cancer. Animal studies show that BMP therapies and other cell- signaling approaches have trouble mending gaps larger than about 25 centimeters because they can't recruit enough stem cells to the area, says Annemarie Moseley, president and CEO of Osiris Therapeutics, a Baltimore, Maryland-based tissue engineering start-up. In these cases BMPs begin by recruiting stem cells to the ends of the healthy bone and regenerating new tissue toward the center of the gap, but “if you look at the center of the matrix you don't see any evidence of bone growth,” says Moseley. The same problem hampers a related approach of simply harvesting healthy bone marrow from a patient and transplanting it in the repair site. “You can put as much marrow in there as you want, but it won't help” if there aren't enough stem cells, says Caplan.
For that reason, Caplan, Osiris, and others have been working to implant stem cells directly into bone repair sites. Caplan's lab helped launch the field about 12 years ago when they first isolated MSCs and came up with a means to expand cell numbers in culture. Since then, Caplan, DePuy's Bruder, Moseley, and others have experimented with a variety of MSC-based implants. In 1989 and 1990, for example, Caplan's group published papers showing that MSCs seeded on a porous, calcium-based ceramic substrate could heal 8-millimeter gaps in the leg bones of rats. They have since reproduced these results for larger bone defects in larger animals. These and other successes prompted Caplan in 1992 to launch Osiris Therapeutics, which aims to carry the approach to humans.
Since the early 1990s, Osiris has shown that the MSC-based therapy works in rats, rabbits, and dogs. And today the company is preparing to launch a phase I safety trial with MSCs in humans. Pamela Robey, a cell biologist with the National Institute of Dental and Craniofacial Research (NIDCR) in Bethesda, Maryland, has made similar progress. Robey says her group has shown that stem cells seeded on a matrix—hydroxyapatite in this case—work to seal large bone gaps in mice, rats, rabbits, and dogs. She is also awaiting FDA approval to launch human clinical trials.
Still, MSCs have their own drawbacks. The biggest concern is time. The current procedure involves extracting stem cells from a patient, growing them in culture, and transplanting them back into that same person, a process that takes weeks. Not only does this rule out emergency repairs, but it also makes the procedure expensive, says Bruder. To get around this problem, Osiris has been experimenting with implanting MSCs from one animal into another, hoping to come up with cell-based implants that surgeons can simply remove from the freezer and implant in a patient's body. The approach has potential, says Moseley, because MSCs don't express the cell surface markers that T cells recognize in rejecting implanted tissue. Thus far, studies on about 40 dogs and “untold numbers” of rats have shown that the transferred cells not only do not spark an immune reaction, but go on to form normal bone, she says.
## Putting the pieces together
As researchers push different approaches to tissue engineering and companies stake out their claims on technologies, commercial competition is heating up. And that worries some researchers, who fear that it may make it hard to determine which strategies work best. “I don't think it's clear to me or the field in general which of these techniques is useful for different applications,” says Michigan's Mooney. Adds Bruder: “Companies are worried that combination therapies will be superior to their single bullet” and are therefore reluctant to test their products along with those of their competitors.
So strong is this concern, says Robey, that it has kept her from working with BMPs. “One of the reasons I turned to stem cells was because I couldn't get BMPs to do my work,” she says. And the result is that progress on determining the most effective combinations is slow. Last year, for example, researchers at Osiris and Novartis collaborated to transfect MSCs with the gene for BMP-7, seed them on matrices, and implant them in rats. The results were excellent, says Moseley, but she says the research has since been dropped because Stryker Biotech owns key rights to BMP-7.
Stryker's Kemler says his company is not trying to quash competition but is pursuing its own “proprietary” combination therapies, which he declines to specify. Nevertheless, Robey and others say the balkanized landscape of intellectual property in tissue engineering prevents them from testing novel therapies. “I do consider that to be a real logjam, and I am not sure how that will be broken,” says Robey. Moseley says she believes the logjam will eventually give way as the field matures over the next few years. Says Caplan: “Tissue engineering is just getting off the ground.” |
# RD Sharma Solutions for Class 8 Maths Chapter 1 - Rational Numbers Exercise 1.7
In Exercise 1.7, we shall discuss problems based on the division of rational numbers and their properties. This set of solutions is prepared by our expert tutor team to help students understand the fundamentals easily. Solutions for RD Sharma Class 8 Maths Exercise 1.7, Chapter 1, Rational Numbers are provided here. Students can download them from the links given below.
## Download PDF of RD Sharma Solutions for Class 8 Maths Exercise 1.7 Chapter 1 Rational Numbers
### Access Answers to RD Sharma Solutions for Class 8 Maths Exercise 1.7 Chapter 1 Rational Numbers
1. Divide:
(i) 1 by 1/2
Solution:
1/1/2 = 1 × 2/1 = 2
(ii) 5 by -5/7
Solution:
5/-5/7 = 5 × 7/-5 = -7
(iii) -3/4 by 9/-16
Solution:
(-3/4) / (9/-16)
(-3/4) × -16/9 = 4/3
(iv) -7/8 by -21/16
Solution:
(-7/8) / (-21/16)
(-7/8) × 16/-21 = 2/3
(v) 7/-4 by 63/64
Solution:
(7/-4) / (63/64)
(7/-4) × 64/63 = -16/9
(vi) 0 by -7/5
Solution:
0 / (-7/5) = 0
(vii) -3/4 by -6
Solution:
(-3/4) / -6
(-3/4) × 1/-6 = 1/8
(viii) 2/3 by -7/12
Solution:
(2/3) / (-7/12)
(2/3) × 12/-7 = -8/7
(ix) -4 by -3/5
Solution:
-4 / (-3/5)
-4 × 5/-3 = 20/3
(x) -3/13 by -4/65
Solution:
(-3/13) / (-4/65)
(-3/13) × (65/-4) = 15/4
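The divisions above can be checked quickly with exact rational arithmetic. The snippet below is only an illustrative check and is not part of the textbook solutions; Python's `fractions.Fraction` reduces each quotient to standard form automatically.

```python
from fractions import Fraction as F

# Spot-check a few of the divisions from Question 1
print(F(1) / F(1, 2))         # 2
print(F(5) / F(-5, 7))        # -7
print(F(-3, 4) / F(9, -16))   # 4/3
print(F(-3, 13) / F(-4, 65))  # 15/4
```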
2. Find the value and express as a rational number in standard form:
(i) 2/5 ÷ 26/15
Solution:
(2/5) / (26/15)
(2/5) × (15/26)
(2/1) × (3/26) = (2×3)/ (1×26) = 6/26 = 3/13
(ii) 10/3 ÷ -35/12
Solution:
(10/3) / (-35/12)
(10/3) × (12/-35)
(10/1) × (4/-35) = (10×4)/ (1×-35) = -40/35 = -8/7
(iii) -6 ÷ -8/17
Solution:
-6 / (-8/17)
-6 × (17/-8)
-3 × (17/-4) = (-3×17)/ (1×-4) = 51/4
(iv) -40/99 ÷ -20
Solution:
(-40/99) / -20
(-40/99) × (1/-20)
(-2/99) × (1/-1) = (-2×1)/ (99×-1) = 2/99
(v) -22/27 ÷ -110/18
Solution:
(-22/27) / (-110/18)
(-22/27) × (18/-110)
(-1/9) × (6/-5)
(-1/3) × (2/-5) = (-1×2) / (3×-5) = 2/15
(vi) -36/125 ÷ -3/75
Solution:
(-36/125) / (-3/75)
(-36/125) × (75/-3)
(-12/25) × (15/-1)
(-12/5) × (3/-1) = (-12×3) / (5×-1) = 36/5
3. The product of two rational numbers is 15. If one of the numbers is -10, find the other.
Solution:
We know that the product of two rational numbers = 15
One of the number = -10
∴ other number can be obtained by dividing the product by the given number.
Other number = 15/-10
= -3/2
4. The product of two rational numbers is -8/9. If one of the numbers is -4/15, find the other.
Solution:
We know that the product of two rational numbers = -8/9
One of the number = -4/15
∴ other number is obtained by dividing the product by the given number.
Other number = (-8/9)/(-4/15)
= (-8/9) × (15/-4)
= (-2/3) × (5/-1)
= (-2×5) /(3×-1)
= -10/-3
= 10/3
5. By what number should we multiply -1/6 so that the product may be -23/9?
Solution:
Let us consider a number = x
So, x × -1/6 = -23/9
x = (-23/9)/(-1/6)
x = (-23/9) × (6/-1)
= (-23/3) × (2/-1)
= (-23×2)/(3×-1)
= 46/3
6. By what number should we multiply -15/28 so that the product may be -5/7?
Solution:
Let us consider a number = x
So, x × -15/28 = -5/7
x = (-5/7)/(-15/28)
x = (-5/7) × (28/-15)
= (-1/1) × (4/-3)
= 4/3
7. By what number should we multiply -8/13 so that the product may be 24?
Solution:
Let us consider a number = x
So, x × -8/13 = 24
x = (24)/(-8/13)
x = (24) × (13/-8)
= (3) × (13/-1)
= -39
8. By what number should -3/4 be multiplied in order to produce 2/3?
Solution:
Let us consider a number = x
So, x × -3/4 = 2/3
x = (2/3)/(-3/4)
x = (2/3) × (4/-3)
= -8/9
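In each of problems 5-8 the required multiplier is simply the desired product divided by the given number. A short, purely illustrative check (not part of the textbook solutions):

```python
from fractions import Fraction as F

def multiplier(given, product):
    """Return x such that given * x == product."""
    return product / given

print(multiplier(F(-1, 6), F(-23, 9)))   # 46/3
print(multiplier(F(-15, 28), F(-5, 7)))  # 4/3
print(multiplier(F(-8, 13), F(24)))      # -39
print(multiplier(F(-3, 4), F(2, 3)))     # -8/9
```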
9. Find (x+y) ÷ (x-y), if
(i) x= 2/3, y= 3/2
Solution:
(x+y) ÷ (x-y)
(2/3 + 3/2) / (2/3 – 3/2)
((2×2 + 3×3)/6) / ((2×2 – 3×3)/6)
((4+9)/6) / ((4-9)/6)
(13/6) / (-5/6)
(13/6) × (6/-5)
-13/5
(ii) x= 2/5, y= 1/2
Solution:
(x+y) ÷ (x-y)
(2/5 + 1/2) / (2/5 – 1/2)
((2×2 + 1×5)/10) / ((2×2 – 1×5)/10)
((4+5)/10) / ((4-5)/10)
(9/10) / (-1/10)
(9/10) × (10/-1)
-9
(iii) x= 5/4, y= -1/3
Solution:
(x+y) ÷ (x-y)
(5/4 – 1/3) / (5/4 + 1/3)
((5×3 – 1×4)/12) / ((5×3 + 1×4)/12)
((15-4)/12) / ((15+4)/12)
(11/12) / (19/12)
(11/12) × (12/19)
11/19
(iv) x= 2/7, y= 4/3
Solution:
(x+y) ÷ (x-y)
(2/7 + 4/3) / (2/7 – 4/3)
((2×3 + 4×7)/21) / ((2×3 – 4×7)/21)
((6+28)/21) / ((6-28)/21)
(34/21) / (-22/21)
(34/21) × (21/-22)
-34/22
-17/11
(v) x= 1/4, y= 3/2
Solution:
(x+y) ÷ (x-y)
(1/4 + 3/2) / (1/4 – 3/2)
((1×1 + 3×2)/4) / ((1×1 – 3×2)/4)
((1+6)/4) / ((1-6)/4)
(7/4) / (-5/4)
(7/4) × (4/-5) = -7/5
10. The cost of $7\frac{2}{3}$ meters of rope is Rs 12 ¾. Find the cost per meter.
Solution:
We know that 23/3 meters of rope = Rs 51/4
Let us consider a number = x
So, x × 23/3 = 51/4
x = (51/4)/(23/3)
x = (51/4) × (3/23)
= (51×3) / (4×23)
= 153/92
= $1\frac{61}{92}$
∴ cost per meter is Rs $1\frac{61}{92}$
11. The cost of $2\frac{1}{3}$ meters of cloth is Rs 75 ¼. Find the cost of cloth per meter.
Solution:
We know that 7/3 meters of cloth = Rs 301/4
Let us consider a number = x
So, x × 7/3 = 301/4
x = (301/4)/(7/3)
x = (301/4) × (3/7)
= (301×3) / (4×7)
= (43×3) / (4×1)
= 129/4
= 32.25
∴ cost of cloth per meter is Rs 32.25
12. By what number should -33/16 be divided to get -11/4?
Solution:
Let us consider a number = x
So, (-33/16)/x = -11/4
-33/16 = x × -11/4
x = (-33/16) / (-11/4)
= (-33/16) × (4/-11)
= (-33×4)/(16×-11)
= (-3×1)/(4×-1)
= ¾
13. Divide the sum of -13/5 and 12/7 by the product of -31/7 and -1/2.
Solution:
sum of -13/5 and 12/7
-13/5 + 12/7
((-13×7) + (12×5))/35
(-91+60)/35
-31/35
Product of -31/7 and -1/2
-31/7 × -1/2
(-31×-1)/(7×2)
31/14
∴ by dividing the sum and the product we get,
(-31/35) / (31/14)
(-31/35) × (14/31)
(-31×14)/(35×31)
-14/35
-2/5
14. Divide the sum of 65/12 and 12/7 by their difference.
Solution:
The sum is 65/12 + 12/7
The difference is 65/12 – 12/7
When we divide, (65/12 + 12/7) / (65/12 – 12/7)
((65×7 + 12×12)/84) / ((65×7 – 12×12)/84)
((455+144)/84) / ((455 – 144)/84)
(599/84) / (311/84)
599/84 × 84/311
599/311
15. If 24 trousers of equal size can be prepared in 54 meters of cloth, what length of cloth is required for each trouser?
Solution:
We know that the total number of trousers = 24
Total length of the cloth = 54 meters
Length of the cloth required for each trouser = total length of the cloth/number of trousers
= 54/24
= 9/4
∴ 9/4 meters is required for each trouser.
Class 8 Maths Chapter 1 Rational Numbers Exercise 1.7 is based on the division of rational numbers. To facilitate easy learning and understanding of concepts download free RD Sharma Solutions of Chapter 1 in PDF format, which provides answers to all the questions. Practising as many times as possible helps students in building time management skills and also boosts the confidence level to achieve high marks. |
# complex arithmetic-geometric mean
It is also possible to define the arithmetic-geometric mean for complex numbers. To do this, we first must make the geometric mean unambiguous by choosing a branch of the square root. We may do this as follows: Let $a$ and $b$ be two non-zero complex numbers such that $a\neq sb$ for any real number $s<0$. Then we will say that $c$ is the geometric mean of $a$ and $b$ if $c^{2}=ab$ and $c$ is a convex combination of $a$ and $b$ (i.e. $c=sa+tb$ for positive real numbers $s$ and $t$).
Geometrically, this may be understood as follows: The condition $a\neq sb$ means that the angle between $0a$ and $0b$ differs from $\pi$. The square root of $ab$ will lie on a line bisecting this angle, at a distance $\sqrt{|ab|}$ from $0$. Our condition states that we should choose $c$ such that $0c$ bisects the angle smaller than $\pi$, as in the figure below:
(Figure: the rays from $0$ through $a$ and $b$, with the geometric mean $c$ on the bisector of the angle smaller than $\pi$, and $-c$ on the opposite ray.)
Analytically, if we pick a polar representation $a=|a|e^{i\alpha}$, $b=|b|e^{i\beta}$ with $|\alpha-\beta|<\pi$, then $c=\sqrt{|ab|}e^{i{\alpha+\beta\over 2}}$. Having clarified this preliminary item, we now proceed to the main definition.
As in the real case, we will define sequences of geometric and arithmetic means recursively and show that they converge to the same limit. With our convention, these are defined as follows:
$\displaystyle g_{0}$ $\displaystyle=a$ $\displaystyle a_{0}$ $\displaystyle=b$ $\displaystyle g_{n+1}$ $\displaystyle=\sqrt{a_{n}g_{n}}$ $\displaystyle a_{n+1}$ $\displaystyle={a_{n}+g_{n}\over 2}$
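A minimal numerical sketch of this iteration, assuming (as in the convention above) that the principal arguments of $a$ and $b$ already satisfy $|\alpha-\beta|<\pi$; the function name, tolerance, and iteration cap below are illustrative choices, not part of the original entry.

```python
import cmath

def complex_agm(a, b, tol=1e-14, max_iter=100):
    # g_0 = a, a_0 = b, as in the recursion above
    g, am = a, b
    for _ in range(max_iter):
        theta, phi = cmath.phase(am), cmath.phase(g)
        # geometric mean whose phase bisects the angle between am and g
        g_next = cmath.sqrt(abs(am * g)) * cmath.exp(1j * (theta + phi) / 2)
        a_next = (am + g) / 2
        am, g = a_next, g_next
        if abs(am - g) <= tol * abs(am):
            break
    return (am + g) / 2

print(complex_agm(1, 2))            # ~1.456791031046907 (real case)
print(complex_agm(1 + 1j, 4 - 2j))  # a genuinely complex example
```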
We shall first show that the phases of these sequences converge. As above, let us define $\alpha$ and $\beta$ by the conditions $a=|a|e^{i\alpha}$, $b=|b|e^{i\beta}$, and $|\alpha-\beta|<\pi$. Suppose that $z$ and $w$ are any two complex numbers such that $z=|z|e^{i\theta}$ and $w=|w|e^{i\phi}$ with $|\phi-\theta|<\pi$. Then we have the following:
• The phase of the geometric mean of $z$ and $w$ can be chosen to lie between $\theta$ and $\phi$. This is because, as described earlier, this phase can be chosen as $(\theta+\phi)/2$.
• The phase of the arithmetic mean of $z$ and $w$ can be chosen to lie between $\theta$ and $\phi$.
By a simple induction argument, these two facts imply that we can introduce polar representations $a_{n}=|a_{n}|e^{i\theta_{n}}$ and $g_{n}=|g_{n}|e^{i\phi_{n}}$ where, for every $n$, we find that $\theta_{n}$ lies between $\alpha$ and $\beta$ and likewise $\phi_{n}$ lies between $\alpha$ and $\beta$. Furthermore, since $\phi_{n+1}=(\phi_{n}+\theta_{n})/2$ and $\theta_{n+1}$ lies between $\phi_{n}$ and $\theta_{n}$, it follows that
$|\phi_{n+1}-\theta_{n+1}|\leq{1\over 2}|\phi_{n}-\theta_{n}|.$
Hence, we conclude that $|\phi_{n}-\theta_{n}|\to 0$ as $n\to\infty$. By the principle of nested intervals, we further conclude that the sequences $\{\theta_{n}\}_{n=0}^{\infty}$ and $\{\phi_{n}\}_{n=0}^{\infty}$ are both convergent and converge to the same limit.
Having shown that the phases converge, we now turn our attention to the moduli. Define $m_{n}=\max(|a_{n}|,|g_{n}|)$. Given any two complex numbers $z,w$, we have
$|\sqrt{zw}|\leq\max(|z|,|w|)$
and
$\left|{z+w\over 2}\right|\leq\max(|z|,|w|),$
so this sequence $\{m_{n}\}_{n=0}^{\infty}$ is decreasing. Since it is bounded from below by $0$, it converges.
Finally, we consider the ratios of the moduli of the arithmetic and geometric means. Define $x_{n}=|a_{n}|/|g_{n}|$. As in the real case, we shall derive a recursion relation for this quantity:
$\displaystyle x_{n+1}$ $\displaystyle={|a_{n+1}|\over|g_{n+1}|}$ $\displaystyle={|a_{n}+g_{n}|\over 2\sqrt{|a_{n}g_{n}|}}$ $\displaystyle={\sqrt{|a_{n}|^{2}+2|a_{n}||g_{n}|\cos(\theta_{n}-\phi_{n})+|g_{n}|^{2}}\over 2\sqrt{|a_{n}g_{n}|}}$ $\displaystyle={1\over 2}\sqrt{{|a_{n}|\over|g_{n}|}+2\cos(\theta_{n}-\phi_{n})+{|g_{n}|\over|a_{n}|}}$ $\displaystyle={1\over 2}\sqrt{x_{n}+2\cos(\theta_{n}-\phi_{n})+{1\over x_{n}}}$
For any real number $x\geq 1$, we have the following:
$\displaystyle x-1$ $\displaystyle\geq 0$ $\displaystyle(x-1)^{2}$ $\displaystyle\geq 0$ $\displaystyle x^{2}-2x+1$ $\displaystyle\geq 0$ $\displaystyle x^{2}+1$ $\displaystyle\geq 2x$ $\displaystyle x+{1\over x}$ $\displaystyle\geq 2$
If $0<x<1$, then $1/x>1$, so we can switch the roles of $x$ and $1/x$ and conclude that, for all real $x>0$, we have
$x+{1\over x}\geq 2.$
Applying this to the recursion we just derived and making use of the half-angle identity for the cosine, we see that
$x_{n+1}\geq{1\over 2}\sqrt{2+2\cos(\theta_{n}-\phi_{n})}=\cos\left({\theta_{n}-\phi_{n}\over 2}\right).$
# Matlab pole zero to transfer function pdf
Zeros, poles and static gain of an lti model are computed with the commands zero. Matlab code to plot ber of qpsk under awgn channel method. Correlate pulse response in serdes designer to ibisami simulation in the serdes designer app, plot the ctle transfer function and pulse response from the add plots button. The zero pole block models a system that you define with the zeros, poles, and gain of a laplacedomain transfer function. Model system by zeropolegain transfer function matlab. Assume pole locations are 2, 1, zero at 1 and gain is 7.
Additionally, it should be noted here, that a direct manual propering of the improper. Drag a pole or a zero of a discrete system transfer function to a different location and observe the effect on the system. If sys is a transfer function or statespace model, it is first converted to zero pole gain form using zpk for siso zero pole gain models, the syntax. You can convert from transfer function to zeropole representation and vica versa using the following commands. The polezero splace plot can be zoomed in and out using a slider. Figure 1 is an example of a polezero plot for a thirdorder system with a single real zero, a real pole and a complex conjugate pole pair, that is. For mimo systems, pzmap plots the system poles and transmission zeros. Blue and red transfer functions are cleared when moving poles zeroes in the plane. Write matlab code to obtain transfer function of a system from its pole, zero, gain values.
Mcnames portland state university ece 222 transfer functions ver. Based on the transfer function, the poles and zeros can be defined as, a 1 2. Also, the influence of the transfer function zero with the time constant of 0. To study the poles and zeros of the noise component of an inputoutput model or a time series model, use noise2meas to first extract the noise model as an independent inputoutput model, whose inputs are the noise channels of the original model. Matlab can compute the poles and transmission zeros the. The rational function returns poles and residues, but you need to convert these into zeros, poles and gains for a ctle block. If b is a matrix, then each row of b corresponds to an output of the system. Control system toolbox software supports transfer functions that are continuoustime or discretetime, and siso or mimo. Represent transfer functions in terms of numerator and denominator coefficients or zeros, poles, and gain. It has two examples and the second example also shows how to find out the gain of a given transfer function.
You can create pole zero plots of linear identified models. Dynamic systems that you can use include continuoustime or discretetime numeric lti models such as tf, zpk, or ss models if sys is a generalized statespace model genss or an uncertain statespace model uss, zero returns the zeros of the current or nominal value of sys. Matlab provides transfer function and zeropolegain. Examples functions and other reference release notes pdf documentation.
Because the transfer function completely represents a system di. Since the poles are not strictly in the left half plane, the open loop system will be unstable as seen in the step response below. Z and p are cell arrays of vectors with as many rows as outputs and as many columns as inputs, and k is a matrix with as many rows as outputs and as many columns as inputs. For the design of a control system, it is important to understand how the system of interest behaves and how it responds to. How to find and plot zeros and poles of a transfer. Understanding poles and zeros 1 system poles and zeros.
Pole zero plot of transfer fucntion hz matlab answers. This block can model singleinput singleoutput siso and singleinput multipleoutput simo systems. With the transfer function now known, the numerator and denominator. Finding poles and zeros and other polynomial operations. Once the zeroespoles are movedaddeddeleted, the original calculation will not hold true any more. Rational function computing with poles and residues.
The ball and beam system is a type ii system which has two poles at the origin, as seen in the pole zero map below. Roots of transfer function numerator called the system zeros. Transfer function numerator coefficients, specified as a vector or matrix. Transfer function numerator coefficients, returned as a row vector or a matrix. A siso continuoustime transfer function is expressed as the ratio. Here, there poles and zeros of cl1 are blue, and those of cl2 are green the plot shows that all poles of cl1 are in the left halfplane, and therefore cl1 is stable. Understanding poles and zeros in transfer functions. In matlab project 2, you saw how the matlab residue function can help. Find zeros, poles, and gains for ctle from transfer function.
Only the first green transfer function is configurable. However the impulse response of the system is correct, but its only shifted to the right side by one. The models can have different numbers of inputs and outputs and can be a mix of continuous and discrete systems. Transfer function analysis by manipulation of poles and zeros. Tranferfunction from zeros and polesmatlab youtube. This representation can be obtained in both the ways from equations to pole zero plot and from pole zero plot to the equation. Your h here is not the same as the transfer function in your original post. A video that teaches you how to obtain a transfer function by taking zeros, poles and gain as input from the user. Create transfer function model using zeros, poles, and gain.
Observe the change in the magnitude and phase bode plots. This function has three poles, two of which are negative integers and one of which is zero. You can represent linear systems as transfer functions in polynomial or factorized zeropolegain form. Convert zeropolegain filter parameters to transfer. The characteristic equation, poles and zeros are then defined and calculated in closed form. May 26, 2019 weve explored the basic theoretical and practical aspects of transferfunction poles and zeros, and weve seen that we can create a direct relationship between a filters pole and zero frequencies and its magnitude and phase response. The zeropole block models a system that you define with the zeros, poles, and gain of a laplacedomain transfer function. This matlab function finds the matrix of zeros z, the vector of poles p, and the associated vector of gains k from the transfer function parameters b and a. There are no poles of ls in the right half plane so p 0. Polezero plot of dynamic system matlab pzmap mathworks. Click the pole zero plot toolbar button, select analysis pole zero plot from the menu, or type the following code to see the plot. Convert to zeros, poles, gains from poles and residues.
We use matlab to find the laplace transform of any symbolic function ft was and. If b is a matrix, then it has a number of rows equal to the number of columns of z. If sys is a transfer function or statespace model, it is first converted to zeropole gain form using zpk. If we rewrite this in a standard form such that the highest order term of the numerator and denominator are unity the reason for this is explained below. Fateman computer science division, eecs university of california, berkeley december 24, 2010 abstract computer algebra systems cas usually support computation with exact or approximate rational functions stored as ratios of polynomials in \expanded form with explicit coe cients. Use designfilt to generate d based on frequencyresponse specifications. This example shows how to examine the pole and zero locations of dynamic systems both graphically using pzplot and numerically using pole and zero examining the pole and zero locations can be useful for tasks such as stability analysis or identifying nearcanceling pole zero pairs for model simplification. But based on the matlab command to plot pole and zeros, zplanea,b i. If some io pairs have a common denominator, the roots of such io pair denominator are counted only once. In this tutorial we look at using the minreal function in matlab to perform pole zero cancellation from transfer functions. Specifying 1e7 as the second input causes minreal to eliminate pole zero pairs within 1 07 rads of each other. When an openloop system has righthalfplane poles in which case the system is unstable, one idea to alleviate the problem is to add zeros at the same locations as the unstable poles, to in effect cancel the unstable poles. Transfer function mostly used in control systems and signals and systems.
Access zeropolegain data matlab zpkdata mathworks italia. Let n order of as and m order of bs the order of a polynomial is the highest power of s that appears in it. In laplace space, the system is represented by the system has unit gain, a double zero at, and two complexconjugate poles. Zeros are defined as the roots of the polynomial of the numerator of a transfer function and poles are defined as the roots of the denominator of a transfer function. Tutorial to perform polynomial operations in matlab, including finding poles and zeros of a transfer function. In general, the poles and zeros of a transfer function may be. When you provide multiple models, pzplot plots the poles and zeros of each model in a different color.
Mar 23, 2014 a video that teaches you how to obtain a transfer function by taking zeros,poles and gain as input from the user. Matlab code is used to plot the polezero locations for the nine. All i can find are pole zero plots and that basically the poles define the system stability and time response. The pole zero and transfer function representations of a system are tightly linked. Dynamic system, specified as a siso dynamic system model, or an array of siso dynamic system models. Poles of dynamic system matlab pole mathworks deutschland. You can also have time delays in your transfer function representation. This means that the characteristic equation of the closed loop transfer function has no zeros in the right half plane the closed loop transfer function has no poles there. In the next article, well examine the transfer function of a firstorder highpass filter. Write matlab code to obtain transfer function of a system from its pole,zero, gain values. When the poles are visualized on the complex splane, then they must all lie in the lefthalf plane lhp to ensure stability.
Specifying 1e7 as the second input causes minreal to eliminate pole zero pairs within 1 0 7 rads of each other the reduced model tred includes all the dynamics of the original closedloop model t, except for the nearcanceling zero pole pair. By default, minreal reduces transfer function order by canceling exact pole zero pairs or near pole zero pairs within sqrteps. Zeropole plot for discretetime systems matlab zplane. Transfer functions transfer function representations. W e w ould still lik them to resp ectiv ely ha v t h i n terpretations of generated and absorb ed frequencies, in some sense, but that still lea v es us with man y c hoices. For siso transfer functions or zeropolegain models, the poles are the. Convert zeropolegain filter parameters to transfer function. The output k is a matrix with as many rows as outputs and as many columns as inputs such that ki,j is the gain of the transfer function from input j to output i. I know that the zeros are the frequencies where the numerator of a transfer function becomes zero. We can get the poles, zeros and gain for any transfer function and plot the. Transfer functions in matlab top 3 methods examples.
This matlab function returns the poles of the siso or mimo dynamic system. This demonstration shows how the locations of poles and zeros of the system transfer function affect the system properties. Then copy the value of the gpz variable and paste it to the gain pole zero matrix parameter. Run the command by entering it in the matlab command window. Compute the transfer function of a damped massspring system that obeys the differential equation the measurable quantity is the acceleration, and is the driving force. The ctle can be configured to use specification parameter gpz matrix where the units for gains, poles and zeros. This video explains how to obtain the zeros and poles of a given transfer function. Polezero cancellation in matlab matlab programming. Rational function computing with poles and residues richard j. Matlab solution and plot of poles and zeros of ztransform. Polezero cancellation control tutorials for matlab.
The transfer function of the preloaded highpass and lowpass filters is scaled to achieve 0 db attenuation at 0 infinity, respectively. Plot the poles and zeros of the continuoustime system represented by the following transfer function. Convert transfer function filter parameters to zeropolegain. This matlab function creates a pole zero plot of the continuous or discretetime dynamic system model sys. Model system by zeropolegain transfer function simulink. Convert transfer function filter parameters to zeropole. Calculate poles and zeros from a given transfer function.
Inverse Function Thm
1. Nov 3, 2006
ak416
Im not sure whether this is a "Homework Question", but it is a question regarding the proof of the Inverse Function Theorem. It starts like this:
Let k be the linear transformation Df(a). Then k is non-singular, since det(f'(a)) != 0. Now D(k^-1(f))(a) = D(k^-1)(f(a)) (Df(a)) = k^-1 (Df(a)), which is the identity linear transformation.
Heres what i dont understand:
If the theorem is true for k^-1 (f) then it is clearly true for f. Therefore we may assume at the outset that k is the identity.
Can anyone explain this?
2. Nov 3, 2006
mathwonk
you are trying to prove a certain function is a local homeomorphism. if it is, then composing it with an invertible linear map will not change this, and also if it is not, composing with an invertible linear map will not change that.
so we may compose it with an invertible linear map before starting the proof.
i.e. if we want to prove f is invertible, and if T is known to be invertible, then if we prove fT is invertible, we may conclude that also fT T^(-1) = f is invertible.
the purpose of this reduction is to be able to simplify the derivative.
3. Nov 5, 2006
ak416
Ok so f:Rn->Rn. and by the fact that k: Rn->Rn is a homeomorphism on
an open set:
If k^-1 (f) is a homeomorphism on an open set then f is a
homeomorphism on an open set. Thus it suffices to prove that k^-1 (f)
is a homeomorphism on an open set. (An open set containing the point a
where f is continuously differentiable).
But why can you assume that k is the identity map?
4. Nov 5, 2006
matt grime
Because you've just shown that the result is true for arbitrary k (satisfying the hypotheses) if and only if it is true for the identity.
This is perfectly normal. Any result in linear algebra about a vector v can often be translated to showing it for the zero vector only.
The analytic version is to simply rescale so that Df, which just a matrix of derivatives, is the identity.
5. Nov 5, 2006
ak416
ok so if the theorem is true for k = I then it is true for arbitrary k and from what I said before we can conclude that it is true for f?
6. Nov 5, 2006
matt grime
It's just a change of basis argument - draw a picture in 2-d for the y=f(x) case to see what it's saying: if the slope is non-zero at a point we may assume that it is 1. I.e. if f'(0)=2, say, then f(x)/2 is a function whose derivative is 1 at x=0. (the general case is more complicated, it is not just dividing by a number, but the principle is the same).
You can also assume that a=(0,0,..,0) as well, by similar arguments.
Last edited: Nov 5, 2006
7. Nov 5, 2006
ak416
Well actually if its true for k = I then k^-1 f reduces to f and therefore its true for f. It seems a little too simplified of an assumption. Is my logic correct?
8. Nov 5, 2006
ak416
ok the one variable case makes sense. If its true for f(x)/2 then it is true for f(x).
9. Nov 5, 2006
ak416
ok then i think my post number 7 is flawed because when were assuming that k = I were changing the nature of the function (like from f(x) to f(x)/2) so its actually what I said in post num 5 thats true right?
matt grime
No, you're missing the point. The assumption is not that 'because k=I, we then have that k^-1f=f' at all. I mean, it's true, but not relevant.
The assumption is that we may assume f satisfies Df(a)=I, because if it didn't, the function k^-1f would satisfy D(k^-1f)(a)=I, and if k^-1f is invertible, so is f.
We may always assume in these cases that a=(0,0,..,0), and Df(a)=I if it helps, and other things too just by a change of coordinates.
ak416
Ok im still not sure. I understand that its true for arbitrary k if and only if it is true for k=I. But we are not sure that k = I. And by assuming k = I arent you changing the function?
Im thinking we are supposed to somehow use the fact that D(k^-1 f)(a) = k^-1 Df(a) is the identity linear transformation.
12. Nov 5, 2006
ak416
Ok so if Df(a) is not I, then you move on to k^-1 f which satisfies D(k^-1 f) = I. And if you can prove it for D(k^-1 f) then you proved it for f. But what makes you assume k = I?
13. Nov 5, 2006
mathwonk
i think i understand your question. you are puzzled because they are changing notation. i.e. if the derivative k of f is not I then consider the derivative of k^-1f which is I. then call that new derivative k again, to save letters. now the derivative of k^-1f, which is I, is still being called "k", although that is confusing.
get it?
i.e. instead of saying "we can assume k = I" they should more accurately have said "thus we only have to prove the result for functions whose derivative is I; so if k is the derivative, we may assume k = I".
14. Nov 5, 2006
matt grime
I am really baffled by these questions. We are allowed to assume that k=I since we have shown that we may replace f by a function g(x)=Df(a)^-1f(x) that has Dg(a)=I, and that the inverse function theorem will be true for f if and only if it is true for g. Thus replacing f with g we can assume Dg=I from the beginning. What part of that don't you understand?
15. Nov 5, 2006
matt grime
Yes, we are changing the function. But it doesn't matter: the result is true for the original function if and only if it is true for the one we replace it by.
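To spell the reduction out in symbols (a summary of the discussion above, writing k = Df(a) as in the book):

```latex
% Sketch of the reduction (assuming k = Df(a) is invertible, as in the thread).
\[
  g \;:=\; k^{-1}\circ f
  \qquad\Longrightarrow\qquad
  Dg(a) \;=\; k^{-1}\,Df(a) \;=\; I ,
\]
\[
  f \;=\; k\circ g ,
  \qquad\text{so } g \text{ locally invertible}
  \;\Longrightarrow\;
  f \text{ locally invertible, with } f^{-1}=g^{-1}\circ k^{-1}.
\]
```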
16. Nov 5, 2006
ya so we now prove it for the function g = Df(a)^-1 f which has Dg(a) = I. Ok that makes sense. But in the next part of the proof it says:
Whenever f(a+h) = f(a) we have
|f(a+h)-f(a)-k(h)|/|h| = |h|/|h| = 1
So is he still talking about the original f and the original k, or is he talking about the g and k = Dg(a)? This is what confused me, but from what you guys are saying I'm assuming that he's talking about the g. |
SortPermutation - Maple Help
StringTools
SortPermutation
return a permutation that sorts a list of strings
Calling Sequence SortPermutation( los )
Parameters
los - list(string); a list of strings
Description
• The SortPermutation( los ) command returns a permutation $p$ that sorts the list los, that is, for which $\left[\mathrm{seq}\right]\left({\mathrm{los}}_{{p}_{i}},i=1..\mathrm{nops}\left(\mathrm{los}\right)\right)$ is equal to $\mathrm{sort}\left(\mathrm{los}\right)$. The sorting order is lexicographic.
• The permutation returned by SortPermutation is represented as a list of the positive integers from $1$ to $\mathrm{nops}\left(\mathrm{los}\right)$.
• Note that an empty list, which is vacuously a permutation, is returned if the input list los is empty.
• All of the StringTools package commands treat strings as (null-terminated) sequences of $8$-bit (ASCII) characters. Thus, there is no support for multibyte character encodings, such as unicode encodings.
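For readers who want to experiment outside Maple, the same idea can be sketched in a few lines of Python; this is an illustrative analogue added here, not part of the StringTools package:

```python
def sort_permutation(los):
    """Return 1-based indices p such that [los[i - 1] for i in p] == sorted(los)."""
    # sort the positions by the string they point to (lexicographic order)
    return [i + 1 for i in sorted(range(len(los)), key=lambda i: los[i])]

print(sort_permutation(["b", "c", "a"]))  # [3, 1, 2]
print(sort_permutation([]))               # [] for an empty input list
```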
Examples
> $\mathrm{with}\left(\mathrm{StringTools}\right):$
> $L≔\left["b","c","a"\right]:$
> $p≔\mathrm{SortPermutation}\left(L\right)$
${p}{≔}\left[{3}{,}{1}{,}{2}\right]$ (1)
> $\mathrm{type}\left(p,'\mathrm{permlist}'\right)$
${\mathrm{true}}$ (2)
> $\left[\mathrm{seq}\right]\left(L\left[p\left[i\right]\right],i=1..3\right)$
$\left[{"a"}{,}{"b"}{,}{"c"}\right]$ (3)
> $\mathrm{sort}\left(L\right)$
$\left[{"a"}{,}{"b"}{,}{"c"}\right]$ (4)
> $\mathrm{SortPermutation}\left(\left[\right]\right)$
$\left[\right]$ (5) |
Principle of Inclusion-Exclusion – IMT DeCal
# Principle of Inclusion-Exclusion
by Suraj Rampure
Here, we will re-visit the Principle of Inclusion and Exclusion.
Note: You may find it easier to understand the Principle of Inclusion-Exclusion by watching a video. Two from this class are linked below. The former also has a walkthrough of the derivation for three sets.
PIE for two sets: Suppose $A$ and $B$ are two sets, and we want to count the number of elements in $A \cup B$, i.e. $|A \cup B|$, assuming that we know $|A|$, $|B|$, and $|A \cap B|$, the cardinality of the intersection of $A$ and $B$.
First, we count every item in $A$ and $B$ individually, yielding $|A| + |B|$. We then see that the intersection $A \cap B$ has been counted twice – once in $|A|$, and once in $|B|$. By subtracting $|A \cap B|$ we yield $|A \cup B| = |A| + |B| - |A \cap B|$ as required.
PIE for three sets: Let’s now derive an expression for $|A \cup B \cup C|$ in terms of the individual cardinalities and all possible intersections.
Again, we start by counting each set individually, giving us $|A| + |B| + |C|$. We now notice that each pairwise overlap has been counted twice – $|A \cap B|$ was counted in both $|A|$ and $|B|$, $|A \cap C|$ was counted in both $|A|$ and $|C|$, and $|B \cap C|$ was counted in both $|B|$ and $|C|$; additionally, the triple intersection $|A \cap B \cap C|$ is counted three times.
By subtracting $|A \cap B|$, $|A \cap C|$ and $|B \cap C|$, we have subtracted the triple overlap $|A \cap B \cap C|$ three times (as it is part of each pairwise intersection). Since it was originally counted three times, we need to add it back once. Thus, our final relation yields $|A \cup B \cup C| = |A| + |B| + |C| - |A \cap B| - |A \cap C| - |B \cap C| + |A \cap B \cap C|$.
To summarize:
$\boxed{|A \cup B| = |A| + |B| - |A \cap B|}$
$\boxed{|A \cup B \cup C| = |A| + |B| + |C| - |A \cap B| - |A \cap C| - |B \cap C| + |A \cap B \cap C|}$
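As a quick empirical check (an addition to this note), both boxed identities can be verified on randomly generated sets in Python:

```python
import random

def check_pie(trials=1000):
    for _ in range(trials):
        universe = range(20)
        A = set(random.sample(universe, random.randint(0, 20)))
        B = set(random.sample(universe, random.randint(0, 20)))
        C = set(random.sample(universe, random.randint(0, 20)))
        # two-set identity
        assert len(A | B) == len(A) + len(B) - len(A & B)
        # three-set identity
        assert len(A | B | C) == (len(A) + len(B) + len(C)
                                  - len(A & B) - len(A & C) - len(B & C)
                                  + len(A & B & C))
    print("PIE verified on", trials, "random triples of sets")

check_pie()
```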
### Example
Suppose there are 150 high school seniors at Billy High, and suppose each senior is required to take at least one of Calculus or Statistics. If 100 students are enrolled in Statistics and 70 are enrolled in Calculus, how many are enrolled in both?
Let $C$ be the set of students taking Calculus, and $S$ be the set of students taking Statistics. We are given $| C \cup S| = 150$, $| C | = 70$ and $| S | = 100$, and we are asked to find $| C \cap S |$. PIE states $|C \cup S| = |C| + |S| - |C \cap S|$. Substituting our known quantities yields $150 = 70 + 100 - | C \cap S|$, implying that there are $| C \cap S | = 20$ students taking both Calculus and Statistics.
Now, suppose students aren’t necessarily required to take either Calculus or Statistics; they can elect to take neither. If 25 students are taking both, 100 students are taking Calculus and 25 students are taking neither, how many students are taking Statistics?
We have two unknowns – $|C \cup S|$ and $|S|$. We need two equations in terms of these unknown quantities to solve for them. We can use the Principle of Inclusion-Exclusion to find $| C \cup S | = 100 + | S | - 25$. To continue, we must realize that we’re actually given the size of the universe, $\big| U \big| = 150$. Either a student is taking one of the courses, or they are not. The sum of the number of students in each of these disjoint groups must be 150. We are given that 25 students aren’t taking either course, meaning $150 = | C \cup S | + 25$, i.e. $| C \cup S | = 125$, allowing us to solve $|S| = 50$.
In the note titled Key Examples in Counting, we will use PIE in some rather interesting examples. |
# Proof for Cosine of 18 degrees in Geometric method
You have learned how to derive the value of cos 18 degrees by a trigonometric method. It is time to learn how to derive the value of the cosine of eighteen degrees experimentally, by a geometric method. This is practically possible by constructing a right triangle (or right-angled triangle) with an angle of eighteen degrees.
1. Use a ruler and draw a horizontal line of any length. For example, a $10 \, cm$ line is drawn here, and it is called the line $\overline{DE}$.
2. Use a protractor and draw a perpendicular line to the line segment $\overline{DE}$ at point $E$.
3. Now, coincide the middle point of the protractor with the point $D$, then mark the plane at the $18$ degrees indication line of the protractor, in the anticlockwise direction. Finally, draw a line from point $D$ through the $18$ degrees mark; it intersects the perpendicular line at point $F$.
These three steps construct a right triangle, known as $\Delta FDE$, in which the angle at vertex $D$ is $18$ degrees. So, let us evaluate the cosine of the angle $\dfrac{\pi}{10}$ radian (i.e. $18^\circ$).
$\cos{(18^\circ)} \,=\, \dfrac{DE}{DF}$
The length of the adjacent side ($\overline{DE}$) is $10 \, cm$, but the length of the hypotenuse ($\overline{DF}$) is unknown. However, it can be measured with a ruler, and it comes out to about $10.5 \, cm$.
$\implies$ $\cos{(18^\circ)} \,=\, \dfrac{10}{10.5}$
$\implies$ $\cos{(18^\circ)} \,=\, 0.9523809523\ldots$
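For comparison (an addition to the original article), the exact value is $\cos{(18^\circ)} = \dfrac{\sqrt{10+2\sqrt{5}}}{4} \approx 0.9511$, so the measured ratio $\dfrac{10}{10.5} \approx 0.9524$ differs only because ruler measurements are coarse. A quick check in Python:

```python
import math

measured = 10 / 10.5                            # adjacent / hypotenuse from the construction
exact = math.sqrt(10 + 2 * math.sqrt(5)) / 4    # closed form of cos(18 degrees)
print(measured, exact, math.cos(math.radians(18)))
# 0.9523809..., 0.9510565..., 0.9510565...
```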
|
# Thread: Find a set of vectors whose span is the kernel of the following matrix.
1. ## Find a set of vectors whose span is the kernel of the following matrix.
Find a set of vectors whose span is the kernel of the following matrix:
1 1 2 0
2 1 0 -1
0 1 4 1
When I calculate the kernel I get:
2s+t
-4s-t
s
t
To find the set of vectors that span this kernel do I factor out the s and t?
s * [2, -4, 1, 0]T + t * [1, -1, 0, 1]T
Am I close on this one?
2. ## Re: Find a set of vectors whose span is the kernel of the following matrix.
correct
The two vectors $\displaystyle \{\{1,-1,0,1\},\{2,-4,1,0\}\}$ form a basis for the kernel of the matrix
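(Added aside: a quick NumPy check confirms that the matrix maps both vectors to zero, so they do lie in the kernel.)

```python
import numpy as np

A = np.array([[1, 1, 2, 0],
              [2, 1, 0, -1],
              [0, 1, 4, 1]])
v1 = np.array([2, -4, 1, 0])
v2 = np.array([1, -1, 0, 1])

print(A @ v1)  # [0 0 0]
print(A @ v2)  # [0 0 0]
```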
3. ## Re: Find a set of vectors whose span is the kernel of the following matrix.
So I am assuming that I would leave out the constant multipliers s and t?
4. ## Re: Find a set of vectors whose span is the kernel of the following matrix.
Saying that "{{1, -1, 0, 1}, {2, -4, 1 , 0}} is a basis for the kernel" means that any vector in the kernel can be written in the form s{1, -1, 0, 1}+ t{2, -4, 1, 0} for numbers s and t. Those are just two different ways of saying the same thing. |
# Find the Period of the rotation, the math gets tricky
## Homework Statement
[Broken image: http://img573.imageshack.us/img573/6932/72101948.png — diagram of the block on the rotating cone]
In the diagram above, a block sits on a rotating cone at a height h and at a distance r from the rotation axis. The block is subject to gravity, but is held in place by static friction against the cone (static friction because it is not sliding up or down). Find an expression for the period of this rotation.
## The Attempt at a Solution
[Broken image: http://img204.imageshack.us/img204/9218/28918584.png — free-body diagram from the attempt]
Breaking the forces into components, I get
(1) $$ncos(\alpha) - fsin(\alpha) = \frac{mv^2}{r}$$
(2) $$nsin(\alpha) +fcos(\alpha)=mg$$
(3) $$v^2 = \frac{4\pi^2 r^2}{T^2}$$
Now, here is the problem(s)
Did I set it up right? Is there another equation missing?
Also, is it mathematically correct to divide (1) by (2)?
According to this thread https://www.physicsforums.com/showthread.php?t=465924 I can, but is that only because the functions were linear? If I multiply (1) by $$sin(\alpha)$$ and (2) by $$cos(\alpha)$$, can I add (1) to (2)?
Can some mod edit my tex in cos\alpha...? I forgot to add [ tex] and [/tex]
Last edited:
Omg I solved it
$$f = \mu n$$
And so
$$(1)ncos\alpha - \mu nsin\alpha = \frac{mv^2}{r}$$
$$(2)nsin\alpha + \mu ncos\alpha = mg$$
From Euclid's elements, I can divide (1) by (2) and I get
$$\frac{cos\alpha - \mu sin\alpha}{sin\alpha + \mu cos\alpha} = \frac{v^2}{rg}$$
Then substituting $$\frac{4\pi^2 r^2}{T^2} = v^2$$ and simplifying I get
$$T = \pm 2\pi\sqrt{\frac{r}{g}\left (\frac{\sin\alpha + \mu \cos\alpha}{\cos\alpha - \mu \sin\alpha}\right )}$$
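A quick numerical sanity check of this expression (added here; the sample values below are arbitrary assumptions, not part of the problem):

```python
import math

# assumed sample values, only for checking the algebra
m, g, r, alpha, mu = 1.0, 9.81, 0.5, math.radians(30), 0.2

T = 2 * math.pi * math.sqrt(r / g * (math.sin(alpha) + mu * math.cos(alpha))
                            / (math.cos(alpha) - mu * math.sin(alpha)))
v2 = 4 * math.pi**2 * r**2 / T**2                       # v^2 from the period
n = m * g / (math.sin(alpha) + mu * math.cos(alpha))    # normal force from equation (2)

# equation (1) should then balance the centripetal term
print(n * math.cos(alpha) - mu * n * math.sin(alpha), m * v2 / r)  # the two numbers agree
```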
And of course we reject negative period |
Math Help - amplitude for cos graph
1. amplitude for cos graph
From the diagram, I can say that the amplitude a is 3, but the answer given is 1.5.
The y-intercept b is 0, but the answer given is 1.5.
And how do I determine n? The answer given for n is 4.
I need the above answers before continuing to question (b). Thank you.
2. I had a big thing typed out but lost it.
Long story short - their answers are correct. This graph has been flipped and shifted horizontally. Go back to what the graph of cosine looks like; if you still need help let me know.
3. Do you mean the attached graph has already been flipped and shifted?
I need your help on this.
4. Well, yes the attached graph has already been flipped and shifted. . .but you already know that from the equation they give you (assuming they aren't being jerks):
-acos(nx)+b
If you work from scratch and start applying the transformations:
-cos(x): Graph of cosine is flipped (so 0,1 becomes 0,-1)
-acos(x): Graph of cosine is stretched/compressed by a factor of a.
-acos(nx): Graph of cosine's period is adjusted from $2\pi$ to $\frac{2\pi}{n}$
-acos(nx)+b: Graph of cosine is shifted vertically by b units.
You then end up with the graph you are looking at right now. It is up to you, however, using the values of points on this graph as well as an understanding of how to calculate amplitude, period and shifting, to figure out what a, b and n are.
5. Thank you for the above information; I will try to work it out.
Thanks.
6. How are you going with the problem? Thought I'd help.
The best way, I think, is to first draw in the equilibrium line (sometimes called the mean value line); basically it's the horizontal line through the middle of the graph. That's at y = 1.5. That's what b is.
a is the amplitude, which is the maximum vertical distance between the equilibrium line and the curve, so a = 1.5. (The negative sign on a simply means the graph has been flipped.)
The period of the function is 2pi/n. On your graph the period is clearly 0.5pi. Therefore solve 2pi/n = 0.5pi to give n = 4.
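As an added check (assuming the readings discussed in this thread: y-intercept 0, maximum 3, period 0.5pi), evaluating y = -1.5cos(4x) + 1.5 at a few points reproduces them:

```python
import math

a, b, n = 1.5, 1.5, 4
y = lambda x: -a * math.cos(n * x) + b   # the flipped, shifted cosine

print(y(0))             # 0.0 -> matches the y-intercept read off the graph
print(y(math.pi / 4))   # 3.0 -> the maximum, so max - min = 3 (what looked like "amplitude 3")
print(y(math.pi / 2))   # 0.0 -> one full period later, period = 2*pi/4 = pi/2
```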
Hope this helps. |
## Engineering Mechanics: Statics & Dynamics (14th Edition)
Published by Pearson
# Chapter 13 - Kinetics of a Particle: Force and Acceleration - Section 13.4 - Equations of Motion: Rectangular Coordinates - Problems - Page 130: 12
#### Answer
$F=\frac{m(g+a_B)\sqrt{4y^2+d^2}}{4y}$
#### Work Step by Step
We can determine the magnitude of the force as follows:
$\Sigma F_y=ma_y$
$\implies 2F\cos\theta-mg=ma_B$
From the given figure, $\cos\theta=\frac{y}{\sqrt{y^2+(d/2)^2}}$, so the equation becomes
$2F\left(\frac{y}{\sqrt{y^2+(d/2)^2}}\right)-mg=ma_B$
$\implies F=\frac{mg+ma_B}{2\left(\frac{y}{\sqrt{y^2+(d/2)^2}}\right)}$
$\implies F=\frac{m(g+a_B)}{4\left(\frac{y}{\sqrt{4y^2+d^2}}\right)}$
This simplifies to: $F=\frac{m(g+a_B)\sqrt{4y^2+d^2}}{4y}$
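As a quick numerical check of the final expression (an addition; the sample values below are arbitrary assumptions, not from the textbook figure):

```python
import math

# arbitrary sample values for a sanity check
m, g, a_B, d, y = 5.0, 9.81, 2.0, 0.4, 0.3

F = m * (g + a_B) * math.sqrt(4 * y**2 + d**2) / (4 * y)

# plug back into the vertical equation 2*F*cos(theta) - m*g = m*a_B
cos_theta = y / math.sqrt(y**2 + (d / 2)**2)
print(2 * F * cos_theta - m * g, m * a_B)   # the two numbers should agree
```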
|
## Monday, July 13, 2015
### Solve for all 6 complex roots of the equation $x^6+10x^5+70x^4+288x^3+880x^2+1600x+1792=0$ (Heuristic Solution)
Solve for all 6 complex roots of the equation $x^6+10x^5+70x^4+288x^3+880x^2+1600x+1792=0$.
My solution:
Let $f(x)=x^6+10x^5+70x^4+288x^3+880x^2+1600x+1792=(x^2+ax+b)(x^2+px+q)(x^2+mx+n)$, we notice then that
1. each of the discriminants of the three quadratic factors is less than zero (since we're told $f(x)$ has all 6 complex roots), and
2. $a,\,b,\,p,\,q,\,m,\,n \in \mathbb{N}$, since the coefficient on the leading term is $1$.
When $x=-1$, we get:
$845=(b-a+1)(q-p+1)(n-m+1)$

$5\cdot 13 \cdot 13 =(b-a+1)(q-p+1)(n-m+1)$

When $x=1$, we have:

$4641=(b+a+1)(q+p+1)(n+m+1)$

$3\cdot 7 \cdot 13 \cdot 17 =(b+a+1)(q+p+1)(n+m+1)$

If we let $b-a+1=5$, $q-p+1=13$ and $n-m+1=13$, we obtain:

$\begin{align*}3\cdot 7 \cdot 13 \cdot 17&=(b+a+1)(q+p+1)(n+m+1)\\&=\left((b-a+1)+2a\right)\left((q-p+1)+2p\right)\left((n-m+1)+2m\right)\\&=(5+2a)(13+2p)(13+2m)\end{align*}$

Now, if we consider one more case, namely $x=-2$, that gives:

$672=(4-2a+b)(4-2p+q)(4-2m+n)$

$672=\left((3-a)+(b-a+1)\right)\left((3-p)+(q-p+1)\right)\left((3-m)+(n-m+1)\right)$

$672=(8-a)(16-p)(16-m)$

$2^5\cdot 3 \cdot 7=(8-a)(16-p)(16-m)$

Now, focusing solely on the conditions $3\cdot 7 \cdot 13 \cdot 17 =(5+2a)(13+2p)(13+2m)$ and $2^5\cdot 3 \cdot 7=(8-a)(16-p)(16-m)$, it is easy to check that $a=4,\,p=2,\,m=4$ satisfy both, and that yields $b=8,\,q=14,\,n=16$. Hence

$x^2+ax+b=x^2+4x+8=0$ gives the complex roots $-2 \pm 2i$,

$x^2+px+q=x^2+2x+14=0$ gives the complex roots $-1 \pm \sqrt{13}i$,

$x^2+mx+n=x^2+4x+16=0$ gives the complex roots $-2\pm2\sqrt{3}i$. |
# Question about this linear equation with fractions
hackedagainanda
Homework Statement:
8/(x - 2) - (13/2) = 3/(2x - 4)
Relevant Equations:
None.
I use 2x -4 as the LCD and turn 8/(x - 2) - (13/2) = 3 into 16 - 13x - 4 = 3, I then get 12 - 13x = 3 which leads me to 13x = -9 so x = -9/13 which is the wrong answer.
Where did I make a mistake?
## Answers and Replies
hackedagainanda
I found my mistake, (-13/2) * 2x -4 is - 13(x - 2) = 13x + 26 so then its 16 + 26 = 42 + 13x = 3 and then 42 - 3 = 39 then 13x = 39 so x is 3.
Staff Emeritus
I found my mistake, (-13/2) * 2x -4 is - 13(x - 2) = 13x + 26 so then its 16 + 26 = 42 + 13x = 3 and then 42 - 3 = 39 then 13x = 39 so x is 3.
The solution may be x = 3, but you have several errors.
−13(x − 2) = −13x +26 .
That gives the Left Hand Side as being:
(2x − 4)(8/(x − 2) − (13/2)) which is 16 + 26 − 13x and finally 42 − 13x
The Right Hand Side becomes 3.
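(Added aside: the solution is easy to confirm with SymPy.)

```python
from sympy import symbols, Eq, Rational, solve

x = symbols('x')
lhs = 8 / (x - 2) - Rational(13, 2)
rhs = 3 / (2 * x - 4)
print(solve(Eq(lhs, rhs), x))   # [3]
```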
hackedagainanda
Thanks for the tip, I was sloppy when writing down the equation. I need to slow down and make sure all the signs are correct.
berkeman |
## Using Matrices to Solve Systems of Equations
This tutorial: Part A: The Matrix of a System and Row Operations
In the preceding tutorial we talked about systems of linear equations in two unknowns. Here we generalize to any number of unknowns and talk about a different way of solving all such systems.
Linear Equations
A linear equation in the n unknowns x_1, x_2, ..., x_n has the form
a_1x_1 + ... + a_nx_n = b (a_1, a_2, ..., a_n constants)
The numbers a_1, a_2, ..., a_n are the coefficients and b is the constant term, or right-hand side.
Note We often call the unknowns x, y, z, ... instead of x_1, x_2, ..., x_n when convenient.
Examples:
Two unknowns: 4x - 5y = 0; here a_1 = 4, a_2 = -5, b = 0.
Three unknowns: -4x + y + 2z = -3; here a_1 = -4, a_2 = 1, a_3 = 2, b = -3.
Four unknowns: 3x_1 + x_2 - x_3 + 11x_4 = 5; here a_1 = 3, a_2 = 1, a_3 = -1, a_4 = 11, b = 5.
Matrix Form of a Linear Equation
The matrix form of the equation a_1x_1 + a_2x_2 + ... + a_nx_n = b is the row matrix [a_1 a_2 ... a_n b].
Examples:
4x - 5y = 0 (unknowns: x, y); matrix form: [4 -5 0]
4x = -3, i.e. 4x + 0y = -3 (unknowns: x, y); matrix form: [4 0 -3]
2x - z = 0, i.e. 2x + 0y - z = 0 (unknowns: x, y, z); matrix form: [2 0 -1 0]
Matrix Form of a System of Linear Equations; Augmented Matrix
If we have a system of two or more linear equations in the same unknowns, then the augmented matrix of the system is just the matrix whose rows are the matrix forms of the individual equations. (It is called "augmented" because it includes the right-hand-sides of the equations.)
Examples:
System of Equations:
x - 2y = 5
3x = 9
Augmented Matrix:
[1 -2 5]
[3 0 9]
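If you would like to experiment on a computer (an optional aside; NumPy is just one convenient choice), the augmented matrix is simply a two-dimensional array whose rows are the matrix forms of the individual equations:

```python
import numpy as np

# x - 2y = 5  ->  [1, -2, 5]
# 3x     = 9  ->  [3,  0, 9]
augmented = np.array([[1, -2, 5],
                      [3,  0, 9]])
print(augmented)
```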
### Row Operations
Here are three things you can do to a system of equations without affecting the solution:
1. Switch any two equations
2. Multiply both sides of any equation by a non-zero number
3. Replace any equation by its sum with another equation. More generally, you can replace an equation by, say, 4 times itself plus 5 times another equation.
Corresponding to these changes are the following row operations on an augmented matrix.
1. Switch two rows
We write R_i ↔ R_j to indicate "Switch Row i and Row j."
Example: applying R1 ↔ R2 to
[1 -2 5]
[3 0 9]
gives
[3 0 9]
[1 -2 5]
2. Multiply a row by a non-zero number a
We write a·R_i next to the ith row to indicate "Multiply Row i by a."
Example: to multiply Row 2 by 5, we write the instruction 5R2 next to Row 2:
[1 -2 5]
[3 0 9]   5R2
which gives
[1 -2 5]
[15 0 45]
3. Replace a row by a combination with another row
We write a·R_i ± b·R_j next to the ith row to mean "Replace Row i by a times Row i plus or minus b times Row j."
Example: write the instruction 2R1-3R2 next to Row 1 to mean "Replace Row 1 by two times Row 1 minus three times Row 2" (in words: "twice the top minus three times the bottom"):
[1 -2 5]   2R1-3R2
[3 0 9]
which gives
[-7 -4 -17]
[3 0 9]
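The same row operations are easy to reproduce programmatically. The short NumPy sketch below is an added aside (not part of the original tutorial) and reproduces the 2R1-3R2 example:

```python
import numpy as np

A = np.array([[1, -2, 5],
              [3,  0, 9]], dtype=float)

# 1. Switch Row 1 and Row 2
switched = A[[1, 0], :]

# 2. Multiply Row 2 by 5
scaled = A.copy()
scaled[1] = 5 * scaled[1]

# 3. Replace Row 1 by 2*Row1 - 3*Row2
combined = A.copy()
combined[0] = 2 * combined[0] - 3 * combined[1]

print(combined)   # [[-7. -4. -17.], [ 3.  0.  9.]]
```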
|
Open-Web-Math-Pro is refined from open-web-math using the ProX refining framework. It contains about 5B high-quality, math-related tokens, ready for pre-training.
Open-Web-Math-Pro is based on open-web-math, which is made available under an ODC-By 1.0 license; users should also abide by the CommonCrawl ToU: https://commoncrawl.org/terms-of-use/. We do not alter the license of any of the underlying data.
@article{zhou2024programming,
title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
journal={arXiv preprint arXiv:2409.17115},
year={2024}
}