Unnamed: 0 (int64, 0 to 16k) | text_prompt (string, lengths 110 to 62.1k) | code_prompt (string, lengths 37 to 152k)
---|---|---|
0 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 1
Step1: If you've set up your environment properly, this cell should run without problems
Step2: Now, run this cell to log into OkPy.
This is the submission system for the class; you will use this
website to confirm that you've submitted your assignment.
Step5: 2. Python
Python is the main programming language we'll use in this course. We assume you have some experience with Python or can learn it yourself, but here is a brief review.
Below are some simple Python code fragments.
You should feel confident explaining what each fragment is doing. If not,
please brush up on your Python. There are a number of tutorials online (search
for "Python tutorial"). https
Step6: Question 1
Question 1a
Write a function nums_reversed that takes in an integer n and returns a string
containing the numbers 1 through n including n in reverse order, separated
by spaces. For example
Step7: Question 1b
Write a function string_splosion that takes in a non-empty string like
"Code" and returns a long string containing every prefix of the input.
For example
Step8: Question 1c
Write a function double100 that takes in a list of integers
and returns True only if the list has two 100s next to each other.
>>> double100([100, 2, 3, 100])
False
>>> double100([2, 3, 100, 100, 5])
True
Step9: Question 1d
Write a function median that takes in a list of numbers
and returns the median element of the list. If the list has even
length, it returns the mean of the two elements in the middle.
>>> median([5, 4, 3, 2, 1])
3
>>> median([ 40, 30, 10, 20 ])
25
Step10: 3. NumPy
The NumPy library lets us do fast, simple computing with numbers in Python.
3.1. Arrays
The basic NumPy data type is the array, a homogeneously-typed sequential collection (a list of things that all have the same type). Arrays will most often contain strings, numbers, or other arrays.
Let's create some arrays
Step11: Math operations on arrays happen element-wise. Here's what we mean
Step12: This is not only very convenient (fewer for loops!) but also fast. NumPy is designed to run operations on arrays much faster than equivalent Python code on lists. Data science sometimes involves working with large datasets where speed is important - even the constant factors!
Jupyter pro-tip
Step13: Another Jupyter pro-tip
Step14: Question 2
Using the np.linspace function, create an array called xs that contains
100 evenly spaced points between 0 and 2 * np.pi. Then, create an array called ys that
contains the value of $ \sin{x} $ at each of those 100 points.
Hint
Step15: The plt.plot function from another library called matplotlib lets us make plots. It takes in
an array of x-values and a corresponding array of y-values. It makes a scatter plot of the (x, y) pairs and connects points with line segments. If you give it enough points, it will appear to create a smooth curve.
Let's plot the points you calculated in the previous question
Step16: This is a useful recipe for plotting any function
Step17: Calculating derivatives is an important operation in data science, but it can be difficult. We can have computers do it for us using a simple idea called numerical differentiation.
Consider the ith point (xs[i], ys[i]). The slope of sin at xs[i] is roughly the slope of the line connecting (xs[i], ys[i]) to the nearby point (xs[i+1], ys[i+1]). That slope is
Step18: Question 4
Plot the slopes you computed. Then plot cos on top of your plot, calling plt.plot again in the same cell. Did numerical differentiation work?
Note
Step19: In the plot above, it's probably not clear which curve is which. Examine the cell below to see how to plot your results with a legend.
Step20: 3.2. Multidimensional Arrays
A multidimensional array is a primitive version of a table, containing only one kind of data and having no column labels. A 2-dimensional array is useful for working with matrices of numbers.
Step21: Arrays allow you to assign to multiple places at once. The special character
Step22: In fact, you can use arrays of indices to assign to multiple places. Study the next example and make sure you understand how it works.
Step23: Question 5
Create a 50x50 array called twice_identity that contains all zeros except on the
diagonal, where it contains the value 2.
Start by making a 50x50 array of all zeros, then set the values. Use indexing, not a for loop! (Don't use np.eye either, though you might find that function useful later.)
Step24: 4. A Picture Puzzle
Your boss has given you some strange text files. He says they're images,
some of which depict a summer scene and the rest a winter scene.
He demands that you figure out how to determine whether a given
text file represents a summer scene or a winter scene.
You receive 10 files, 1.txt through 10.txt. Peek at the files in a text
editor of your choice.
Question 6
How do you think the contents of the file are structured? Take your best guess.
Write your answer here, replacing this text.
Question 7
Create a function called read_file_lines that takes in a filename as its argument.
This function should return a Python list containing the lines of the
file as strings. That is, if 1.txt contains
Step25: Each file begins with a line containing two numbers. After checking the length of
a file, you could notice that the product of these two numbers equals the number of
lines in each file (other than the first one).
This suggests the rows represent elements in a 2-dimensional grid. In fact, each
dataset represents an image!
On the first line, the first of the two numbers is
the height of the image (in pixels) and the second is the width (again in pixels).
Each line in the rest of the file contains the pixels of the image.
Each pixel is a triplet of numbers denoting how much red, green, and blue
the pixel contains, respectively.
In image processing, each column in one of these image files is called a channel
(disregarding line 1). So there are 3 channels
Step27: Question 9
Images in numpy are simply arrays, but we can also display them as
actual images in this notebook.
Use the provided show_images function to display image1. You may call it
like show_images(image1). If you later have multiple images to display, you
can call show_images([image1, image2]) to display them all at once.
The resulting image should look almost completely black. Why do you suppose
that is?
Step28: Question 10
If you look at the data, you'll notice all the numbers lie between 0 and 10.
In NumPy, a color intensity is an integer ranging from 0 to 255, where 0 is
no color (black). That's why the image is almost black. To see the image,
we'll need to rescale the numbers in the data to have a larger range.
Define a function expand_image_range that takes in an image. It returns a
new copy of the image with the following transformation
Step29: Question 11
Eureka! You've managed to reveal the image that the text file represents.
Now, define a function called reveal_file that takes in a filename
and returns an expanded image. This should be relatively easy since you've
defined functions for each step in the process.
Then, set expanded_images to a list of all the revealed images. There are
10 images to reveal (including the one you just revealed).
Finally, use show_images to display the expanded_images.
Step30: Notice that 5 of the above images are of summer scenes; the other 5
are of winter.
Think about how you'd distinguish between pictures of summer and winter. What
qualities of the image seem to signal to your brain that the image is one of
summer? Of winter?
One trait that seems specific to summer pictures is that the colors are warmer.
Let's see if the proportion of pixels of each color in the image can let us
distinguish between summer and winter pictures.
Question 12
To simplify things, we can categorize each pixel according to its most intense
(highest-value) channel. (Remember, red, green, and blue are the 3 channels.)
For example, we could just call a [2 4 0] pixel "green." If a pixel has a
tie between several channels, let's count it as none of them.
Write a function proportion_by_channel. It takes in an image. It assigns
each pixel to its greatest-intensity channel
Step31: Let's plot the proportions you computed above on a bar chart
Step32: Question 13
What do you notice about the colors present in the summer images compared to
the winter ones?
Use this info to write a function summer_or_winter. It takes in an image and
returns True if the image is a summer image and False if the image is a
winter image.
Do not hard-code the function to the 10 images you currently have (e.g.
if image1, return False). We will run your function on other images
that we've reserved for testing.
You must classify all of the 10 provided images correctly to pass the test
for this function.
Step33: Congrats! You've created your very first classifier for this class.
Question 14
How do you think your classification function will perform
in general?
Why do you think it will perform that way?
What do you think would most likely give you false positives?
False negatives?
Write your answer here, replacing this text.
Final note
Step34: 5. Submitting this assignment
First, run this cell to run all the autograder tests at once so you can double-
check your work.
Step35: Now, run this code in your terminal to make a
git commit
that saves a snapshot of your changes in git. The last line of the cell
runs git push, which will send your work to your personal Github repo.
```
Tell git to commit all the changes so far
git add -A
Tell git to make the commit
git commit -m "hw1 finished"
Send your updates to your personal private repo
git push origin master
```
Finally, we'll submit the assignment to OkPy so that the staff will know to
grade it. You can submit as many times as you want and you can choose which
submission you want us to grade by going to https | Python Code:
!pip install -U okpy
Explanation: Homework 1: Setup and (Re-)Introduction to Python
Course Policies
Here are some important course policies. These are also located at
http://www.ds100.org/sp17/.
Tentative Grading
There will be 7 challenging homework assignments. Homeworks must be completed
individually and will mix programming and short answer questions. At the end of
each week of instruction we will have an online multiple choice quiz ("vitamin") that will
help you stay up-to-date with lecture materials. Lab assignments will be
graded for completion and are intended to help with the homework assignments.
40% Homeworks
13% Vitamins
7% Labs
15% Midterm
25% Final
Collaboration Policy
Data science is a collaborative activity. While you may talk with others about
the homework, we ask that you write your solutions individually. If you do
discuss the assignments with others please include their names at the top
of your solution. Keep in mind that content from the homework and vitamins will
likely be covered on both the midterm and final.
This assignment
In this assignment, you'll learn (or review):
How to set up Jupyter on your own computer.
How to check out and submit assignments for this class.
Python basics, like defining functions.
How to use the numpy library to compute with arrays of numbers.
1. Setup
If you haven't already, read through the instructions at
http://www.ds100.org/spring-2017/setup.
The instructions for submission are at the end of this notebook.
First, let's make sure you have the latest version of okpy.
End of explanation
import math
import numpy as np
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
from datascience import *
from client.api.notebook import Notebook
ok = Notebook('hw1.ok')
Explanation: If you've set up your environment properly, this cell should run without problems:
End of explanation
ok.auth(inline=True)
Explanation: Now, run this cell to log into OkPy.
This is the submission system for the class; you will use this
website to confirm that you've submitted your assignment.
End of explanation
2 + 2
# This is a comment.
# In Python, the ** operator performs exponentiation.
math.e**(-2)
print("Hello" + ",", "world!")
"Hello, cell output!"
def add2(x):
    """This docstring explains what this function does: it adds 2 to a number."""
return x + 2
def makeAdder(amount):
    """Make a function that adds the given amount to a number."""
def addAmount(x):
return x + amount
return addAmount
add3 = makeAdder(3)
add3(4)
# add4 is very similar to add2, but it's been created using a lambda expression.
add4 = lambda x: x + 4
add4(5)
sameAsMakeAdder = lambda amount: lambda x: x + amount
add5 = sameAsMakeAdder(5)
add5(6)
def fib(n):
if n <= 1:
return 1
# Functions can call themselves recursively.
return fib(n-1) + fib(n-2)
fib(4)
# A for loop repeats a block of code once for each
# element in a given collection.
for i in range(5):
if i % 2 == 0:
print(2**i)
else:
print("Odd power of 2")
# A list comprehension is a convenient way to apply a function
# to each element in a given collection.
# The String method join appends together all its arguments
# separated by the given string. So we append each element produced
# by the list comprehension, each separated by a newline ("\n").
print("\n".join([str(2**i) if i % 2 == 0 else "Odd power of 2" for i in range(5)]))
Explanation: 2. Python
Python is the main programming language we'll use in this course. We assume you have some experience with Python or can learn it yourself, but here is a brief review.
Below are some simple Python code fragments.
You should feel confident explaining what each fragment is doing. If not,
please brush up on your Python. There are a number of tutorials online (search
for "Python tutorial"). https://docs.python.org/3/tutorial/ is a good place to
start.
End of explanation
def nums_reversed(n):
...
_ = ok.grade('q01a')
_ = ok.backup()
Explanation: Question 1
Question 1a
Write a function nums_reversed that takes in an integer n and returns a string
containing the numbers 1 through n including n in reverse order, separated
by spaces. For example:
>>> nums_reversed(5)
'5 4 3 2 1'
Note: The ellipsis (...) indicates something you should fill in. It doesn't necessarily imply you should replace it with only one line of code.
End of explanation
def string_splosion(string):
...
_ = ok.grade('q01b')
_ = ok.backup()
Explanation: Question 1b
Write a function string_splosion that takes in a non-empty string like
"Code" and returns a long string containing every prefix of the input.
For example:
>>> string_splosion('Code')
'CCoCodCode'
>>> string_splosion('data!')
'ddadatdatadata!'
>>> string_splosion('hi')
'hhi'
End of explanation
def double100(nums):
...
_ = ok.grade('q01c')
_ = ok.backup()
Explanation: Question 1c
Write a function double100 that takes in a list of integers
and returns True only if the list has two 100s next to each other.
>>> double100([100, 2, 3, 100])
False
>>> double100([2, 3, 100, 100, 5])
True
End of explanation
def median(number_list):
...
_ = ok.grade('q01d')
_ = ok.backup()
Explanation: Question 1d
Write a function median that takes in a list of numbers
and returns the median element of the list. If the list has even
length, it returns the mean of the two elements in the middle.
>>> median([5, 4, 3, 2, 1])
3
>>> median([ 40, 30, 10, 20 ])
25
End of explanation
array1 = np.array([2, 3, 4, 5])
array2 = np.arange(4)
array1, array2
Explanation: 3. NumPy
The NumPy library lets us do fast, simple computing with numbers in Python.
3.1. Arrays
The basic NumPy data type is the array, a homogeneously-typed sequential collection (a list of things that all have the same type). Arrays will most often contain strings, numbers, or other arrays.
Let's create some arrays:
End of explanation
array1 * 2
array1 * array2
array1 ** array2
Explanation: Math operations on arrays happen element-wise. Here's what we mean:
End of explanation
np.arange?
Explanation: This is not only very convenient (fewer for loops!) but also fast. NumPy is designed to run operations on arrays much faster than equivalent Python code on lists. Data science sometimes involves working with large datasets where speed is important - even the constant factors!
Jupyter pro-tip: Pull up the docs for any function in Jupyter by running a cell with
the function name and a ? at the end:
End of explanation
np.linspace
Explanation: Another Jupyter pro-tip: Pull up the docs for any function in Jupyter by typing the function
name, then <Shift>-<Tab> on your keyboard. Super convenient when you forget the order
of the arguments to a function. You can press <Tab> multiple times to expand the docs.
Try it on the function below:
End of explanation
xs = ...
ys = ...
_ = ok.grade('q02')
_ = ok.backup()
Explanation: Question 2
Using the np.linspace function, create an array called xs that contains
100 evenly spaced points between 0 and 2 * np.pi. Then, create an array called ys that
contains the value of $ \sin{x} $ at each of those 100 points.
Hint: Use the np.sin function. You should be able to define each variable with one line of code.
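If you haven't used np.linspace before, here is a tiny illustration with toy values (not the values this question asks for), assuming numpy has been imported as np as in the setup cell:
```
np.linspace(0, 1, 5)   # 5 evenly spaced points from 0 to 1, endpoints included
# array([0., 0.25, 0.5, 0.75, 1.])
```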
End of explanation
plt.plot(xs, ys)
Explanation: The plt.plot function from another library called matplotlib lets us make plots. It takes in
an array of x-values and a corresponding array of y-values. It makes a scatter plot of the (x, y) pairs and connects points with line segments. If you give it enough points, it will appear to create a smooth curve.
Let's plot the points you calculated in the previous question:
End of explanation
# Try plotting cos here.
Explanation: This is a useful recipe for plotting any function:
1. Use linspace or arange to make a range of x-values.
2. Apply the function to each point to produce y-values.
3. Plot the points.
You might remember from calculus that the derivative of the sin function is the cos function. That means that the slope of the curve you plotted above at any point xs[i] is given by cos(xs[i]). You can try verifying this by plotting cos in the next cell.
End of explanation
def derivative(xvals, yvals):
...
slopes = ...
slopes[:5]
_ = ok.grade('q03')
_ = ok.backup()
Explanation: Calculating derivatives is an important operation in data science, but it can be difficult. We can have computers do it for us using a simple idea called numerical differentiation.
Consider the ith point (xs[i], ys[i]). The slope of sin at xs[i] is roughly the slope of the line connecting (xs[i], ys[i]) to the nearby point (xs[i+1], ys[i+1]). That slope is:
(ys[i+1] - ys[i]) / (xs[i+1] - xs[i])
If the difference between xs[i+1] and xs[i] were infinitesimal, we'd have exactly the derivative. In numerical differentiation we take advantage of the fact that it's often good enough to use "really small" differences instead.
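As a concrete illustration of that formula, here is a minimal sketch of forward differences computed with NumPy slicing on toy arrays (xs_demo and ys_demo are made-up names for this example only):
```
xs_demo = np.array([0., 1., 2., 3.])
ys_demo = xs_demo ** 2
# Slope from each point to the next; the result has one fewer element than the inputs.
(ys_demo[1:] - ys_demo[:-1]) / (xs_demo[1:] - xs_demo[:-1])   # array([1., 3., 5.])
```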
Question 3
Define a function called derivative that takes in an array of x-values and their
corresponding y-values and computes the slope of the line connecting each point to the next point.
>>> derivative(np.array([0, 1, 2]), np.array([2, 4, 6]))
np.array([2., 2.])
>>> derivative(np.arange(5), np.arange(5) ** 2)
np.array([1., 3., 5., 7.])
Notice that the output array has one less element than the inputs since we can't
find the slope for the last point.
It's possible to do this in one short line using slicing, but feel free to use whatever method you know.
Then, use your derivative function to compute the slopes for each point in xs, ys.
Store the slopes in an array called slopes.
End of explanation
...
...
Explanation: Question 4
Plot the slopes you computed. Then plot cos on top of your plot, calling plt.plot again in the same cell. Did numerical differentiation work?
Note: Since we have only 99 slopes, you'll need to take off the last x-value before plotting to avoid an error.
End of explanation
plt.plot(xs[:-1], slopes, label="Numerical derivative")
plt.plot(xs[:-1], np.cos(xs[:-1]), label="True derivative")
# You can just call plt.legend(), but the legend will cover up
# some of the graph. Use bbox_to_anchor=(x,y) to set the x-
# and y-coordinates of the center-left point of the legend,
# where, for example, (0, 0) is the bottom-left of the graph
# and (1, .5) is all the way to the right and halfway up.
plt.legend(bbox_to_anchor=(1, .5), loc="center left");
Explanation: In the plot above, it's probably not clear which curve is which. Examine the cell below to see how to plot your results with a legend.
End of explanation
# The zeros function creates an array with the given shape.
# For a 2-dimensional array like this one, the first
# coordinate says how far the array goes *down*, and the
# second says how far it goes *right*.
array3 = np.zeros((4, 5))
array3
# The shape attribute returns the dimensions of the array.
array3.shape
# You can think of array3 as an array containing 4 arrays, each
# containing 5 zeros. Accordingly, we can set or get the third
# element of the second array in array 3 using standard Python
# array indexing syntax twice:
array3[1][2] = 7
array3
# This comes up so often that there is special syntax provided
# for it. The comma syntax is equivalent to using multiple
# brackets:
array3[1, 2] = 8
array3
Explanation: 3.2. Multidimensional Arrays
A multidimensional array is a primitive version of a table, containing only one kind of data and having no column labels. A 2-dimensional array is useful for working with matrices of numbers.
End of explanation
array4 = np.zeros((3, 5))
array4[:, 2] = 5
array4
Explanation: Arrays allow you to assign to multiple places at once. The special character : means "everything."
End of explanation
array5 = np.zeros((3, 5))
rows = np.array([1, 0, 2])
cols = np.array([3, 1, 4])
# Indices (1,3), (0,1), and (2,4) will be set.
array5[rows, cols] = 3
array5
Explanation: In fact, you can use arrays of indices to assign to multiple places. Study the next example and make sure you understand how it works.
End of explanation
twice_identity = ...
...
twice_identity
_ = ok.grade('q05')
_ = ok.backup()
Explanation: Question 5
Create a 50x50 array called twice_identity that contains all zeros except on the
diagonal, where it contains the value 2.
Start by making a 50x50 array of all zeros, then set the values. Use indexing, not a for loop! (Don't use np.eye either, though you might find that function useful later.)
End of explanation
def read_file_lines(filename):
...
...
file1 = ...
file1[:5]
_ = ok.grade('q07')
_ = ok.backup()
Explanation: 4. A Picture Puzzle
Your boss has given you some strange text files. He says they're images,
some of which depict a summer scene and the rest a winter scene.
He demands that you figure out how to determine whether a given
text file represents a summer scene or a winter scene.
You receive 10 files, 1.txt through 10.txt. Peek at the files in a text
editor of your choice.
Question 6
How do you think the contents of the file are structured? Take your best guess.
Write your answer here, replacing this text.
Question 7
Create a function called read_file_lines that takes in a filename as its argument.
This function should return a Python list containing the lines of the
file as strings. That is, if 1.txt contains:
1 2 3
3 4 5
7 8 9
the return value should be: ['1 2 3\n', '3 4 5\n', '7 8 9\n'].
Then, use the read_file_lines function on the file 1.txt, reading the contents
into a variable called file1.
Hint: Check out this Stack Overflow page on reading lines of files.
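One common pattern for this (a sketch only, with a placeholder filename, assuming the file comfortably fits in memory):
```
with open('some_file.txt') as f:   # 'some_file.txt' is a placeholder name
    lines = f.readlines()          # a list of lines, each still ending with '\n'
```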
End of explanation
def lines_to_image(file_lines):
...
image_array = ...
# Make sure to call astype like this on the 3-dimensional array
# you produce, before returning it.
return image_array.astype(np.uint8)
image1 = ...
image1.shape
_ = ok.grade('q08')
_ = ok.backup()
Explanation: Each file begins with a line containing two numbers. After checking the length of
a file, you could notice that the product of these two numbers equals the number of
lines in each file (other than the first one).
This suggests the rows represent elements in a 2-dimensional grid. In fact, each
dataset represents an image!
On the first line, the first of the two numbers is
the height of the image (in pixels) and the second is the width (again in pixels).
Each line in the rest of the file contains the pixels of the image.
Each pixel is a triplet of numbers denoting how much red, green, and blue
the pixel contains, respectively.
In image processing, each column in one of these image files is called a channel
(disregarding line 1). So there are 3 channels: red, green, and blue.
Question 8
Define a function called lines_to_image that takes in the contents of a
file as a list (such as file1). It should return an array containing integers of
shape (n_rows, n_cols, 3). That is, it contains the pixel triplets organized in the
correct number of rows and columns.
For example, if the file originally contained:
4 2
0 0 0
10 10 10
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
The resulting array should be a 3-dimensional array that looks like this:
array([
[ [0,0,0], [10,10,10] ],
[ [2,2,2], [3,3,3] ],
[ [4,4,4], [5,5,5] ],
[ [6,6,6], [7,7,7] ]
])
The string method split and the function np.reshape might be useful.
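For instance, here is roughly how split and np.reshape behave on toy data (values made up purely for illustration, not taken from the actual files):
```
np.array('1 2 3 4 5 6'.split(), dtype=int).reshape((2, 3))
# array([[1, 2, 3],
#        [4, 5, 6]])
```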
Important note: You must call .astype(np.uint8) on the final array before
returning so that numpy will recognize the array represents an image.
Once you've defined the function, set image1 to the result of calling
lines_to_image on file1.
End of explanation
def show_images(images, ncols=2, figsize=(10, 7), **kwargs):
    """Shows one or more color images.

    images: Image or list of images. Each image is a 3-dimensional
        array, where dimension 1 indexes height and dimension 2
        the width. Dimension 3 indexes the 3 color values red,
        green, and blue (so it always has length 3).
    """
def show_image(image, axis=plt):
plt.imshow(image, **kwargs)
if not (isinstance(images, list) or isinstance(images, tuple)):
images = [images]
images = [image.astype(np.uint8) for image in images]
nrows = math.ceil(len(images) / ncols)
ncols = min(len(images), ncols)
plt.figure(figsize=figsize)
for i, image in enumerate(images):
axis = plt.subplot2grid(
(nrows, ncols),
(i // ncols, i % ncols),
)
axis.tick_params(bottom='off', left='off', top='off', right='off',
labelleft='off', labelbottom='off')
axis.grid(False)
show_image(image, axis)
# Show image1 here:
...
Explanation: Question 9
Images in numpy are simply arrays, but we can also display them as
actual images in this notebook.
Use the provided show_images function to display image1. You may call it
like show_images(image1). If you later have multiple images to display, you
can call show_images([image1, image2]) to display them all at once.
The resulting image should look almost completely black. Why do you suppose
that is?
End of explanation
# This array is provided for your convenience.
transformed = np.array([12, 37, 65, 89, 114, 137, 162, 187, 214, 240, 250])
def expand_image_range(image):
...
expanded1 = ...
show_images(expanded1)
_ = ok.grade('q10')
_ = ok.backup()
Explanation: Question 10
If you look at the data, you'll notice all the numbers lie between 0 and 10.
In NumPy, a color intensity is an integer ranging from 0 to 255, where 0 is
no color (black). That's why the image is almost black. To see the image,
we'll need to rescale the numbers in the data to have a larger range.
Define a function expand_image_range that takes in an image. It returns a
new copy of the image with the following transformation:
old value | new value
========= | =========
0 | 12
1 | 37
2 | 65
3 | 89
4 | 114
5 | 137
6 | 162
7 | 187
8 | 214
9 | 240
10 | 250
This expands the color range of the image. For example, a pixel that previously
had the value [5 5 5] (almost-black) will now have the value [137 137 137]
(gray).
Set expanded1 to the expanded image1, then display it with show_images.
This page
from the numpy docs has some useful information that will allow you
to use indexing instead of for loops.
However, the slickest implementation uses one very short line of code.
Hint: If you index an array with another array or list as in question 5, your
array (or list) of indices can contain repeats, as in array1[[0, 1, 0]].
Investigate what happens in that case.
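For example, with a toy array unrelated to the images:
```
lookup = np.array([10, 20, 30])
lookup[[0, 1, 0, 2, 2]]   # array([10, 20, 10, 30, 30]); repeated indices are allowed
```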
End of explanation
def reveal_file(filename):
...
filenames = ['1.txt', '2.txt', '3.txt', '4.txt', '5.txt',
'6.txt', '7.txt', '8.txt', '9.txt', '10.txt']
expanded_images = ...
show_images(expanded_images, ncols=5)
Explanation: Question 11
Eureka! You've managed to reveal the image that the text file represents.
Now, define a function called reveal_file that takes in a filename
and returns an expanded image. This should be relatively easy since you've
defined functions for each step in the process.
Then, set expanded_images to a list of all the revealed images. There are
10 images to reveal (including the one you just revealed).
Finally, use show_images to display the expanded_images.
End of explanation
def proportion_by_channel(image):
...
image_proportions = ...
image_proportions
_ = ok.grade('q12')
_ = ok.backup()
Explanation: Notice that 5 of the above images are of summer scenes; the other 5
are of winter.
Think about how you'd distinguish between pictures of summer and winter. What
qualities of the image seem to signal to your brain that the image is one of
summer? Of winter?
One trait that seems specific to summer pictures is that the colors are warmer.
Let's see if the proportion of pixels of each color in the image can let us
distinguish between summer and winter pictures.
Question 12
To simplify things, we can categorize each pixel according to its most intense
(highest-value) channel. (Remember, red, green, and blue are the 3 channels.)
For example, we could just call a [2 4 0] pixel "green." If a pixel has a
tie between several channels, let's count it as none of them.
Write a function proportion_by_channel. It takes in an image. It assigns
each pixel to its greatest-intensity channel: red, green, or blue. Then
the function returns an array of length three containing the proportion of
pixels categorized as red, the proportion categorized as green, and the
proportion categorized as blue (respectively). (Again, don't count pixels
that are tied between 2 or 3 colors as any category, but do count them
in the denominator when you're computing proportions.)
For example:
```
test_im = np.array([
[ [5, 2, 2], [2, 5, 10] ]
])
proportion_by_channel(test_im)
array([ 0.5, 0, 0.5 ])
If tied, count neither as the highest
test_im = np.array([
[ [5, 2, 5], [2, 50, 50] ]
])
proportion_by_channel(test_im)
array([ 0, 0, 0 ])
```
Then, set image_proportions to the result of proportion_by_channel called
on each image in expanded_images as a 2d array.
Hint: It's fine to use a for loop, but for a difficult challenge, try
avoiding it. (As a side benefit, your code will be much faster.) Our solution
uses the NumPy functions np.reshape, np.sort, np.argmax, and np.bincount.
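If those functions are new to you, here is roughly how two of them behave on toy inputs (values made up purely for illustration):
```
np.argmax(np.array([[5, 2, 2], [2, 5, 10]]), axis=1)   # array([0, 2]), index of the max in each row
np.bincount(np.array([2, 0, 0, 2]), minlength=3)       # array([2, 0, 2]), counts of 0, 1, 2
```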
End of explanation
# You'll learn about Pandas and DataFrames soon.
import pandas as pd
pd.DataFrame({
'red': image_proportions[:, 0],
'green': image_proportions[:, 1],
'blue': image_proportions[:, 2]
}, index=pd.Series(['Image {}'.format(n) for n in range(1, 11)], name='image'))\
.iloc[::-1]\
.plot.barh();
Explanation: Let's plot the proportions you computed above on a bar chart:
End of explanation
def summer_or_winter(image):
...
_ = ok.grade('q13')
_ = ok.backup()
Explanation: Question 13
What do you notice about the colors present in the summer images compared to
the winter ones?
Use this info to write a function summer_or_winter. It takes in an image and
returns True if the image is a summer image and False if the image is a
winter image.
Do not hard-code the function to the 10 images you currently have (e.g.
if image1, return False). We will run your function on other images
that we've reserved for testing.
You must classify all of the 10 provided images correctly to pass the test
for this function.
End of explanation
import skimage as sk
import skimage.io as skio
def read_image(filename):
'''Reads in an image from a filename'''
return skio.imread(filename)
def compress_image(im):
'''Takes an image as an array and compresses it to look black.'''
res = im / 25
return res.astype(np.uint8)
def to_text_file(im, filename):
'''
Takes in an image array and a filename for the resulting text file.
Creates the encoded text file for later decoding.
'''
h, w, c = im.shape
to_rgb = ' '.join
to_row = '\n'.join
to_lines = '\n'.join
rgb = [[to_rgb(triplet) for triplet in row] for row in im.astype(str)]
lines = to_lines([to_row(row) for row in rgb])
with open(filename, 'w') as f:
f.write('{} {}\n'.format(h, w))
f.write(lines)
f.write('\n')
summers = skio.imread_collection('orig/summer/*.jpg')
winters = skio.imread_collection('orig/winter/*.jpg')
len(summers)
sum_nums = np.array([ 5, 6, 9, 3, 2, 11, 12])
win_nums = np.array([ 10, 7, 8, 1, 4, 13, 14])
for im, n in zip(summers, sum_nums):
to_text_file(compress_image(im), '{}.txt'.format(n))
for im, n in zip(winters, win_nums):
to_text_file(compress_image(im), '{}.txt'.format(n))
Explanation: Congrats! You've created your very first classifier for this class.
Question 14
How do you think your classification function will perform
in general?
Why do you think it will perform that way?
What do you think would most likely give you false positives?
False negatives?
Write your answer here, replacing this text.
Final note: While our approach here is simplistic, skin color segmentation
-- figuring out which parts of the image belong to a human body -- is a
key step in many algorithms such as face detection.
Optional: Our code to encode images
Here are the functions we used to generate the text files for this assignment.
Feel free to send not-so-secret messages to your friends if you'd like.
End of explanation
_ = ok.grade_all()
Explanation: 5. Submitting this assignment
First, run this cell to run all the autograder tests at once so you can double-
check your work.
End of explanation
# Now, we'll submit to okpy
_ = ok.submit()
Explanation: Now, run this code in your terminal to make a
git commit
that saves a snapshot of your changes in git. The last line of the cell
runs git push, which will send your work to your personal Github repo.
```
Tell git to commit all the changes so far
git add -A
Tell git to make the commit
git commit -m "hw1 finished"
Send your updates to your personal private repo
git push origin master
```
Finally, we'll submit the assignment to OkPy so that the staff will know to
grade it. You can submit as many times as you want and you can choose which
submission you want us to grade by going to https://okpy.org/cal/data100/sp17/.
End of explanation |
1 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Accessing C Struct Data
This notebook illustrates the use of @cfunc to connect to data defined in C.
Via CFFI
Numba can map simple C structure types (i.e. with scalar members only) into NumPy structured dtypes.
Let's start with the following C declarations
Step2: We can create my_struct data by doing
Step3: Using numba.cffi_support.map_type we can convert the cffi type into a Numba Record type.
Step4: The function type can be mapped in a signature
Step5: and @cfunc can take that signature directly
Step6: Testing the cfunc via the .ctypes callable
Step7: Manually creating a Numba Record type
Sometimes it is useful to create a numba.types.Record type directly. The easiest way is to use the Record.make_c_struct() method. Using this method, the field offsets are calculated from the natural size and alignment of prior fields.
In the example below, we will manually create the my_struct structure from above.
Step8: Here's another example to demonstrate the offset calculation
Step9: Notice how the byte at pad0 and pad1 moves the offset of f2 and d3.
A function signature can also be created manually | Python Code:
from cffi import FFI
src = """
/* Define the C struct */
typedef struct my_struct {
    int i1;
    float f2;
    double d3;
    float af4[7];
} my_struct;

/* Define a callback function */
typedef double (*my_func)(my_struct*, size_t);
"""
ffi = FFI()
ffi.cdef(src)
Explanation: Accessing C Struct Data
This notebook illustrates the use of @cfunc to connect to data defined in C.
Via CFFI
Numba can map simple C structure types (i.e. with scalar members only) into NumPy structured dtypes.
Let's start with the following C declarations:
End of explanation
# Make an array of 3 my_struct
mydata = ffi.new('my_struct[3]')
ptr = ffi.cast('my_struct*', mydata)
for i in range(3):
ptr[i].i1 = 123 + i
ptr[i].f2 = 231 + i
ptr[i].d3 = 321 + i
for j in range(7):
ptr[i].af4[j] = i * 10 + j
Explanation: We can create my_struct data by doing:
End of explanation
from numba import cffi_support
cffi_support.map_type(ffi.typeof('my_struct'), use_record_dtype=True)
Explanation: Using numba.cffi_support.map_type we can convert the cffi type into a Numba Record type.
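For comparison, the same field layout written as a plain NumPy structured dtype looks roughly like this (a sketch only; the Record type Numba actually derives, including any padding or alignment, may differ):
```
import numpy as np

my_struct_like = np.dtype([
    ('i1', np.int32),
    ('f2', np.float32),
    ('d3', np.float64),
    ('af4', np.float32, (7,)),   # fixed-length sub-array field
])
```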
End of explanation
sig = cffi_support.map_type(ffi.typeof('my_func'), use_record_dtype=True)
sig
Explanation: The function type can be mapped in a signature:
End of explanation
from numba import cfunc, carray
@cfunc(sig)
def foo(ptr, n):
base = carray(ptr, n) # view pointer as an array of my_struct
tmp = 0
for i in range(n):
tmp += base[i].i1 * base[i].f2 / base[i].d3 + base[i].af4.sum()
return tmp
Explanation: and @cfunc can take that signature directly:
End of explanation
addr = int(ffi.cast('size_t', ptr))
print("address of data:", hex(addr))
result = foo.ctypes(addr, 3)
result
Explanation: Testing the cfunc via the .ctypes callable:
End of explanation
from numba import types
my_struct = types.Record.make_c_struct([
# Provides a sequence of 2-tuples i.e. (name:str, type:Type)
('i1', types.int32),
('f2', types.float32),
('d3', types.float64),
('af4', types.NestedArray(dtype=types.float32, shape=(7,)))
])
my_struct
Explanation: Manually creating a Numba Record type
Sometimes it is useful to create a numba.types.Record type directly. The easiest way is to use the Record.make_c_struct() method. Using this method, the field offsets are calculated from the natural size and alignment of prior fields.
In the example below, we will manually create the my_struct structure from above.
End of explanation
padded = types.Record.make_c_struct([
('i1', types.int32),
('pad0', types.int8), # padding bytes to move the offsets
('f2', types.float32),
('pad1', types.int8), # padding bytes to move the offsets
('d3', types.float64),
])
padded
Explanation: Here's another example to demonstrate the offset calculation:
End of explanation
new_sig = types.float64(types.CPointer(my_struct), types.uintp)
print('signature:', new_sig)
# Our new signature matches the previous auto-generated one.
print('signature matches:', new_sig == sig)
Explanation: Notice how the byte at pad0 and pad1 moves the offset of f2 and d3.
A function signature can also be created manually:
End of explanation |
2 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas and Friends
Austin Godber
Mail
Step1: Background - NumPy - Arrays
Step2: Background - NumPy - Arrays
Arrays have NumPy specific types, dtypes, and can be operated on.
Step3: Now, on to Pandas
Pandas
Tabular, Timeseries, Matrix Data - labeled or not
Sensible handling of missing data and data alignment
Data selection, slicing and reshaping features
Robust data import utilities.
Advanced time series capabilities
Data Structures
Series - 1D labeled array
DataFrame - 2D labeled array
Panel - 3D labeled array (More D)
Assumed Imports
In my code samples, assume I import the following
Step4: Series
one-dimensional labeled array
holds any data type
axis labels known as index
implicit integert indexes
dict-like
Create a Simple Series
Step5: Series Operations
Step6: Series Operations - Cont.
Step7: Series Index
Step8: Date Convenience Functions
A quick aside ...
Step9: Datestamps as Index
Step10: Selecting By Index
Note that the integer index is retained along with the new date index.
Step11: Selecting by value
Step12: Selecting by Label (Date)
Step13: Series Wrapup
Things not covered but you should look into
Step14: DataFrame - Index/Column Names
Step15: DataFrame - Operations
Step16: See? You never need Excel again!
DataFrame - Column Access
Deleting a column.
Step17: DataFrame
Remember this, data2, for the next examples.
Step18: DataFrame - Column Access
As a dict
Step19: DataFrame - Column Access
As an attribute
Step20: DataFrame - Row Access
By row label
Step21: DataFrame - Row Access
By integer location
Step22: DataFrame - Cell Access
Access column, then row or use iloc and row/column indexes.
Step23: DataFrame - Taking a Peek
Look at the beginning of the DataFrame
Step24: DataFrame - Taking a Peek
Look at the end of the DataFrame.
Step25: DataFrame Wrap Up
Just remember,
A DataFrame is just a bunch of Series grouped together.
Any one dimensional slice returns a Series
Any two dimensional slice returns another DataFrame.
Elements are typically NumPy types or Objects.
Panel
Like DataFrame but 3 or more dimensions.
IO Tools
Robust IO tools to read in data from a variety of sources
CSV - pd.read_csv()
Clipboard - pd.read_clipboard()
SQL - pd.read_sql_table()
Excel - pd.read_excel()
Plotting
Matplotlib - s.plot() - Standard Python Plotting Library
Trellis - rplot() - An 'R' inspired Matplotlib based plotting tool
Bringing it Together - Data
The csv file (phx-temps.csv) contains Phoenix weather data from
GSOD
Step26: Bringing it Together - Code
Advanced read_csv(), parsing the dates and using them as the index, and naming the columns.
Step27: Bringing it Together - Plot
Step28: Boo, Pandas and Friends would cry if they saw such a plot.
Bringing it Together - Plot
Lets see a smaller slice of time
Step29: Bringing it Together - Plot
Let's operate on the DataFrame ... let's take the difference between the highs and lows.
import numpy as np
# np.zeros, np.ones
data0 = np.zeros((2, 4))
data0
# Make an array with 20 entries 0..19
data1 = np.arange(20)
# print the first 8
data1[0:8]
Explanation: Pandas and Friends
Austin Godber
Mail: [email protected]
Twitter: @godber
Presented at DesertPy, Jan 2015.
What does it do?
Pandas is a Python data analysis tool built on top of NumPy that provides a
suite of data structures and data manipulation functions to work on those data
structures. It is particularly well suited for working with time series data.
Getting Started - Installation
Installing with pip or apt-get::
```
pip install pandas
or
sudo apt-get install python-pandas
```
Mac - Homebrew or MacPorts to get the dependencies, then pip
Windows - Python(x,y)?
Commercial Pythons: Anaconda, Canopy
Getting Started - Dependencies
Dependencies, required, recommended and optional
```
Required
numpy, python-dateutil, pytx
Recommended
numexpr, bottleneck
Optional
cython, scipy, pytables, matplotlib, statsmodels, openpyxl
```
Pandas' Friends!
Pandas works along side and is built on top of several other Python projects.
IPython
Numpy
Matplotlib
Pandas gets along with EVERYONE!
<img src='panda-on-a-unicorn.jpg'>
Background - IPython
IPython is a fancy python console. Try running ipython or ipython --pylab on your command line. Some IPython tips
```python
Special commands, 'magic functions', begin with %
%quickref, %who, %run, %reset
Shell Commands
ls, cd, pwd, mkdir
Need Help?
help(), help(obj), obj?, function?
Tab completion of variables, attributes and methods
```
Background - IPython Notebook
There is a web interface to IPython, known as the IPython notebook, start it
like this
```
ipython notebook
or to get all of the pylab components
ipython notebook --pylab
```
IPython - Follow Along
Follow along by connecting to TMPNB.ORG!
http://tmpnb.org
Background - NumPy
NumPy is the foundation for Pandas
Numerical data structures (mostly Arrays)
Operations on those.
Less structure than Pandas provides.
Background - NumPy - Arrays
End of explanation
# make it a 4,5 array
data = np.arange(20).reshape(4, 5)
data
Explanation: Background - NumPy - Arrays
End of explanation
print("dtype: ", data.dtype)
result = data * 20.5
print(result)
Explanation: Background - NumPy - Arrays
Arrays have NumPy specific types, dtypes, and can be operated on.
End of explanation
import pandas as pd
import numpy as np
Explanation: Now, on to Pandas
Pandas
Tabular, Timeseries, Matrix Data - labeled or not
Sensible handling of missing data and data alignment
Data selection, slicing and reshaping features
Robust data import utilities.
Advanced time series capabilities
Data Structures
Series - 1D labeled array
DataFrame - 2D labeled array
Panel - 3D labeled array (More D)
Assumed Imports
In my code samples, assume I import the following
End of explanation
s1 = pd.Series([1, 2, 3, 4, 5])
s1
Explanation: Series
one-dimensional labeled array
holds any data type
axis labels known as index
implicit integert indexes
dict-like
Create a Simple Series
End of explanation
# integer multiplication
print(s1 * 5)
Explanation: Series Operations
End of explanation
# float multiplication
print(s1 * 5.0)
Explanation: Series Operations - Cont.
End of explanation
s2 = pd.Series([1, 2, 3, 4, 5],
index=['a', 'b', 'c', 'd', 'e'])
s2
Explanation: Series Index
End of explanation
dates = pd.date_range('20130626', periods=5)
print(dates)
print()
print(dates[0])
Explanation: Date Convenience Functions
A quick aside ...
End of explanation
s3 = pd.Series([1, 2, 3, 4, 5], index=dates)
print(s3)
Explanation: Datestamps as Index
End of explanation
print(s3[0])
print(type(s3[0]))
print()
print(s3[1:3])
print(type(s3[1:3]))
Explanation: Selecting By Index
Note that the integer index is retained along with the new date index.
End of explanation
s3[s3 < 3]
Explanation: Selecting by value
End of explanation
s3['20130626':'20130628']
Explanation: Selecting by Label (Date)
End of explanation
data1 = pd.DataFrame(np.random.rand(4, 4))
data1
Explanation: Series Wrapup
Things not covered but you should look into:
Other instantiation options: dict (see the short sketch after this list)
How operators handle missing data (NaN)
Reforming Data and Indexes
Boolean Indexing
Other Series Attributes:
index - index.name
name - Series name
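A short sketch of the dict instantiation and NaN handling mentioned above (made-up values; labels present in only one Series come out as NaN):
```
s_a = pd.Series({'a': 1, 'b': 2, 'c': 3})    # instantiate from a dict
s_b = pd.Series({'b': 10, 'c': 20, 'd': 30})
s_a + s_b   # aligns on labels: 'a' and 'd' become NaN, 'b' and 'c' add up
```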
DataFrame
2-dimensional labeled data structure
Like a SQL Table, Spreadsheet or dict of Series objects.
Columns of potentially different types
Operations, slicing and other behavior just like Series
DataFrame - Simple
End of explanation
dates = pd.date_range('20130626', periods=4)
data2 = pd.DataFrame(
np.random.rand(4, 4),
index=dates, columns=list('ABCD'))
data2
Explanation: DataFrame - Index/Column Names
End of explanation
data2['E'] = data2['B'] + 5 * data2['C']
data2
Explanation: DataFrame - Operations
End of explanation
# Deleting a Column
del data2['E']
data2
Explanation: See? You never need Excel again!
DataFrame - Column Access
Deleting a column.
End of explanation
data2
Explanation: DataFrame
Remember this, data2, for the next examples.
End of explanation
data2['B']
Explanation: DataFrame - Column Access
As a dict
End of explanation
data2.B
Explanation: DataFrame - Column Access
As an attribute
End of explanation
data2.loc['20130627']
Explanation: DataFrame - Row Access
By row label
End of explanation
data2.iloc[1]
Explanation: DataFrame - Row Access
By integer location
End of explanation
print(data2.B[0])
print(data2['B'][0])
print(data2.iloc[0,1]) # [row,column]
Explanation: DataFrame - Cell Access
Access column, then row or use iloc and row/column indexes.
End of explanation
data3 = pd.DataFrame(np.random.rand(100, 4))
data3.head()
Explanation: DataFrame - Taking a Peek
Look at the beginning of the DataFrame
End of explanation
data3.tail()
Explanation: DataFrame - Taking a Peek
Look at the end of the DataFrame.
End of explanation
# simple readcsv
phxtemps1 = pd.read_csv('phx-temps.csv')
phxtemps1.head()
Explanation: DataFrame Wrap Up
Just remember,
A DataFrame is just a bunch of Series grouped together.
Any one dimensional slice returns a Series
Any two dimensional slice returns another DataFrame.
Elements are typically NumPy types or Objects.
Panel
Like DataFrame but 3 or more dimensions.
IO Tools
Robust IO tools to read in data from a variety of sources
CSV - pd.read_csv()
Clipboard - pd.read_clipboard()
SQL - pd.read_sql_table()
Excel - pd.read_excel()
Plotting
Matplotlib - s.plot() - Standard Python Plotting Library
Trellis - rplot() - An 'R' inspired Matplotlib based plotting tool
Bringing it Together - Data
The csv file (phx-temps.csv) contains Phoenix weather data from
GSOD::
1973-01-01 00:00:00,53.1,37.9
1973-01-02 00:00:00,57.9,37.0
...
2012-12-30 00:00:00,64.9,39.0
2012-12-31 00:00:00,55.9,41.0
Bringing it Together - Code
Simple read_csv()
End of explanation
# define index, parse dates, name columns
phxtemps2 = pd.read_csv(
'phx-temps.csv', index_col=0,
names=['highs', 'lows'], parse_dates=True)
phxtemps2.head()
Explanation: Bringing it Together - Code
Advanced read_csv(), parsing the dates and using them as the index, and naming the columns.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
phxtemps2.plot() # pandas convenience method
Explanation: Bringing it Together - Plot
End of explanation
phxtemps2['20120101':'20121231'].plot()
Explanation: Boo, Pandas and Friends would cry if they saw such a plot.
Bringing it Together - Plot
Let's see a smaller slice of time:
End of explanation
phxtemps2['diff'] = phxtemps2.highs - phxtemps2.lows
phxtemps2['20120101':'20121231'].plot()
Explanation: Bringing it Together - Plot
Let's operate on the DataFrame ... let's take the difference between the highs and lows.
End of explanation |
3 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation. | Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
X_train.shape
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
Explanation: k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages:
During training, the classifier takes the training data and simply remembers it
During testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples
The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
End of explanation
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
Explanation: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
First we must compute the distances between all test examples and all train examples.
Given these distances, for each test example we find the k nearest examples and have them vote for the label
Let's begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
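Concretely, each entry of the matrix is the Euclidean (L2) distance between one test image and one training image; as a definition only (i and j here are placeholder loop indices, not the assignment's required implementation):
```
dists[i, j] = np.sqrt(np.sum((X_test[i] - X_train[j]) ** 2))
```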
End of explanation
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
What in the data is the cause behind the distinctly bright rows?
What causes the columns?
Your Answer: fill this in.
End of explanation
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5:
End of explanation
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
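# For reference, the fully vectorized computation can be sketched using the expansion
# ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y (a sketch that uses the notebook's X_test and
# X_train arrays rather than the classifier's internal self.X_train):
test_sq = np.sum(X_test ** 2, axis=1).reshape(-1, 1)
train_sq = np.sum(X_train ** 2, axis=1)
dists_sketch = np.sqrt(np.maximum(test_sq + train_sq - 2 * X_test.dot(X_train.T), 0))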
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Let's compare how fast the implementations are
def time_function(f, *args):
    '''Call a function f with args and return the time (in seconds) that it took to execute.'''
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
Explanation: You should expect to see a slightly better performance than with k = 1.
End of explanation
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for kc in k_choices:
k_to_accuracies[kc] = []
for val_idx in range(num_folds):
XX = np.concatenate(X_train_folds[0:val_idx] + X_train_folds[val_idx+1: num_folds])
yy = np.concatenate(y_train_folds[0:val_idx] + y_train_folds[val_idx+1: num_folds])
classifier = KNearestNeighbor()
classifier.train(XX, yy)
        # Predict labels for the held-out validation fold using the current value of k
        y_test_pred = classifier.predict(X_train_folds[val_idx], k=kc)
        # Compute the fraction of correctly predicted examples for this fold
num_correct = np.sum(y_test_pred == y_train_folds[val_idx])
accuracy = float(num_correct) / y_train_folds[val_idx].shape[0]
k_to_accuracies[kc].append(accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
End of explanation |
4 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing forward propagation and backpropagation
The backpropagation algorithm
Previously, when computing the predictions of a neural network, we used forward propagation: starting from the first layer we computed each layer in turn until we obtained $h_{\theta}\left(x\right)$ at the final layer.
Now, in order to compute the partial derivatives of the cost function, $\frac{\partial}{\partial\Theta^{(l)}_{ij}}J\left(\Theta\right)$, we need a backpropagation algorithm: we first compute the error of the final layer, then propagate it backwards layer by layer until we reach the second layer.
Visualising the data
Using last week's data, we first run the network's forward-propagation step to compute its outputs, which provides the predictions needed for backpropagation.
Step1: Presenting the model
By default we design a network with one input layer, one hidden layer and one output layer.
Forward propagation and the cost function
In logistic regression we have only one output variable, a scalar, and only one dependent variable $y$. In a neural network, however, we can have many output variables: $h_\theta(x)$ is a vector of dimension $K$, and the dependent variable in the training set is a vector of the same dimension, so the cost function is somewhat more complex than for logistic regression: $\newcommand{\subk}[1]{ #1_k }$ $$h_\theta\left(x\right)\in \mathbb{R}^{K}$$ $${\left({h_\theta}\left(x\right)\right)}_{i}={i}^{th} \text{output}$$
$J(\Theta) = -\frac{1}{m} \left[ \sum\limits_{i=1}^{m} \sum\limits_{k=1}^{k} {y_k}^{(i)} \log \subk{(h_\Theta(x^{(i)}))} + \left( 1 - y_k^{(i)} \right) \log \left( 1- \subk{\left( h_\Theta \left( x^{(i)} \right) \right)} \right) \right] + \frac{\lambda}{2m} \sum\limits_{l=1}^{L-1} \sum\limits_{i=1}^{s_l} \sum\limits_{j=1}^{s_{l+1}} \left( \Theta_{ji}^{(l)} \right)^2$
Step2: Backpropagation
In this part you implement the backpropagation algorithm to compute the gradient of the neural network's cost function. Once we have the gradient, we can use an optimisation library to find the minimum of the cost function.
Step3: Initialising the parameters
So far we have always initialised all parameters to 0. That works for logistic regression, but not for a neural network: if all initial parameters are 0, every activation unit in the second layer will take the same value. Likewise, initialising all parameters to the same non-zero number gives the same result.
We therefore usually initialise the parameters to random values between -ε and +ε. For example, to randomly initialise a 10×11 parameter matrix:
Theta1 = rand(10, 11) * (2 * eps) - eps
Step4: Backpropagation
The backpropagation procedure is: given the training set, first run forward propagation, then for each node in each layer compute an error term that measures how much that node "contributed" to the error of the final output. For each output node we can compute the difference between the output value and the target value directly, and define it as δ. For each hidden node, the error is computed from the existing weights and the errors of layer (l+1).
The steps are:
Randomly initialise the weights theta
Implement the forward pass so that h(xi) can be obtained for any xi
Implement the cost function Jθ
Step5: Gradient checking
The gradient is estimated by taking two points very close together along the tangent direction of the cost function and averaging them. That is, for a particular $\theta$ we evaluate the cost at $\theta-\varepsilon$ and at $\theta+\varepsilon$ (where $\varepsilon$ is a very small value, typically 0.001) and average the two costs to estimate the gradient at $\theta$. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
from scipy.io import loadmat
from sklearn.preprocessing import OneHotEncoder
data = loadmat('../data/andrew_ml_ex33507/ex3data1.mat')
data
X = data['X']
y = data['y']
X.shape, y.shape # check the dimensions
# The inputs are image pixel values: a 20*20 pixel image gives 400 input-layer units, not counting the extra bias term that we add. The materials already provide the parameters of a trained network, with 25 hidden units and 10 output units (10 outputs).
weight = loadmat("../data/andrew_ml_ex33507/ex3weights.mat")
theta1, theta2 = weight['Theta1'], weight['Theta2']
theta1.shape, theta2.shape
sample_idx = np.random.choice(np.arange(data['X'].shape[0]), 100)
sample_images = data['X'][sample_idx, :]
# display the sample digits as binary images
fig, ax_array = plt.subplots(nrows=5, ncols=5, sharey=True, sharex=True, figsize=(8, 8))
for r in range(5):
for c in range(5):
ax_array[r, c].matshow(np.array(sample_images[5 * r + c].reshape((20, 20))).T,cmap=matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
Explanation: Implementing forward propagation and backpropagation
The backpropagation algorithm
Previously, when computing the predictions of a neural network, we used forward propagation: starting from the first layer we computed each layer in turn until we obtained $h_{\theta}\left(x\right)$ at the final layer.
Now, in order to compute the partial derivatives of the cost function, $\frac{\partial}{\partial\Theta^{(l)}_{ij}}J\left(\Theta\right)$, we need a backpropagation algorithm: we first compute the error of the final layer, then propagate it backwards layer by layer until we reach the second layer.
Visualising the data
Using last week's data, we first run the network's forward-propagation step to compute its outputs, which provides the predictions needed for backpropagation.
End of explanation
def sigmoid(z):
return 1 / (1 + np.exp(-z))
# 2nd: following the propagation rule above, define the first layer, compute the values of the second (hidden) layer, and add the bias term
def forward_propagate(X,theta1,theta2):
m= X.shape[0]
a1 = np.insert(X,0, values=np.ones(m), axis=1)
Z2 = a1*theta1.T
a2= np.insert(sigmoid(Z2),0, values=np.ones(m), axis=1)
Z3= a2*theta2.T
h= sigmoid(Z3)
return a1,Z2,a2,Z3,h
# Cost function (without the regularisation / weight-decay term). Y ∈ R(5000*10); we use matrix operations directly instead of accumulating in a loop
def cost(X,Y,theta1,theta2):
    m = X.shape[0]
    X = np.matrix(X)
    Y = np.matrix(Y)
    _,_,_,_,h = forward_propagate(X,theta1,theta2)
    # np.multiply multiplies same-sized matrices element-wise
first = np.multiply(Y,np.log(h))
second = np.multiply((1-Y),np.log((1-h)))
J= np.sum(first+second)
J = (-1/m)*J
return J
# Encode the y labels. Initially y is a 5000*1 vector, but we need to encode it as a matrix: for example, if the original y0=2 the corresponding row of the transformed Y is [0,1,0...0], and if the original label is 10 the corresponding row is [0,0...0,1]
# Scikit-learn has a built-in encoder we can use for this.
encoder = OneHotEncoder(sparse=False)
y_onehot = encoder.fit_transform(y)
y_onehot.shape
y[0], y_onehot[0,:] # y0是数字0
# Initial settings
input_size = 400
num_labels = 10
cost(X, y_onehot,theta1, theta2)
# Add the regularisation term
def cost_reg(X,Y,theta1,theta2,learning_rate):
m = X.shape[0]
X = np.matrix(X)
Y = np.matrix(Y)
_,_,_,_,h=forward_propagate(X,theta1,theta2)
first = np.multiply(Y,np.log(h))
second = np.multiply((1-Y),np.log((1-h)))
J= np.sum(first+second)
    # when computing the regularisation term, the first column (the bias terms) is excluded
J = (-1/m)*J + (float(learning_rate) / (2 * m))*(np.sum(np.power(theta1[:,1:],2))+np.sum(np.power(theta2[:,1:],2)))
return J
# theta1.shape,theta2.shape
cost_reg(X, y_onehot,theta1, theta2,1)
Explanation: Presenting the model
By default we design a network with one input layer, one hidden layer and one output layer.
Forward propagation and the cost function
In logistic regression we have only one output variable, a scalar, and only one dependent variable $y$. In a neural network, however, we can have many output variables: $h_\theta(x)$ is a vector of dimension $K$, and the dependent variable in the training set is a vector of the same dimension, so the cost function is somewhat more complex than for logistic regression: $\newcommand{\subk}[1]{ #1_k }$ $$h_\theta\left(x\right)\in \mathbb{R}^{K}$$ $${\left({h_\theta}\left(x\right)\right)}_{i}={i}^{th} \text{output}$$
$J(\Theta) = -\frac{1}{m} \left[ \sum\limits_{i=1}^{m} \sum\limits_{k=1}^{k} {y_k}^{(i)} \log \subk{(h_\Theta(x^{(i)}))} + \left( 1 - y_k^{(i)} \right) \log \left( 1- \subk{\left( h_\Theta \left( x^{(i)} \right) \right)} \right) \right] + \frac{\lambda}{2m} \sum\limits_{l=1}^{L-1} \sum\limits_{i=1}^{s_l} \sum\limits_{j=1}^{s_{l+1}} \left( \Theta_{ji}^{(l)} \right)^2$
End of explanation
# Compute the derivative of the sigmoid function
def sigmoid_gradient(z):
    return np.multiply(sigmoid(z) ,(1-sigmoid(z)))
# sanity check
sigmoid_gradient(0)
Explanation: Backpropagation
In this part you implement the backpropagation algorithm to compute the gradient of the neural network's cost function. Once we have the gradient, we can use an optimisation library to find the minimum of the cost function.
End of explanation
# Initial settings
input_size = 400 # number of input units
hidden_size = 25 # number of hidden units
num_labels = 10 # number of output units
epsilon = 0.001
theta01=np.random.rand(hidden_size,input_size+1) * 2*epsilon - epsilon # the +1 adds the bias unit
theta02 =np.random.rand(num_labels,hidden_size+1)* 2*epsilon - epsilon
theta01.shape,theta02.shape
Explanation: Initialising the parameters
So far we have always initialised all parameters to 0. That works for logistic regression, but not for a neural network: if all initial parameters are 0, every activation unit in the second layer will take the same value. Likewise, initialising all parameters to the same non-zero number gives the same result.
We therefore usually initialise the parameters to random values between -ε and +ε. For example, to randomly initialise a 10×11 parameter matrix:
Theta1 = rand(10, 11) * (2 * eps) - eps
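The NumPy equivalent might look like the following sketch (eps_init = 0.12 is only an illustrative bound, not a value prescribed above):
eps_init = 0.12
Theta1_example = np.random.rand(10, 11) * 2 * eps_init - eps_init
Theta1_example.shape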
End of explanation
# Compute the z and a values of each layer in turn
def forward_propagateNEW(X,thetalist):
m= X.shape[0]
a = np.insert(X,0, values=np.ones(m), axis=1)
alist=[a]
zlist=[]
for i in range(len(thetalist)):
theta= thetalist[i]
z = a * theta
# a= np.insert(sigmoid(z),0, values=np.ones(m), axis=1)
a=sigmoid(z)
if(i<len(thetalist)-1):
a= np.insert(a,0, values=np.ones(m), axis=1)
zlist.append(z)
alist.append(a)
return zlist,alist
# Δ is represented by delta1 and delta2 (here accumulated in the list Delta)
def backpropRegSelf(input_size, hidden_size, num_labels, X, y, learning_rate,L=3): # uses random initialisation; L=3 layers here
m = X.shape[0]
X = np.matrix(X)
y = np.matrix(y)
    # initialise the parameters
    theta1 = (np.random.random((input_size+1,hidden_size))- 0.5)* 0.24
    theta2 = (np.random.random((hidden_size+1,num_labels))- 0.5)* 0.24
    encoder = OneHotEncoder(sparse=False)
    y_onehot = encoder.fit_transform(y) # one-hot encode y
    # forward pass: compute the values of every layer
    theta = [theta1, theta2]
    zlist,alist = forward_propagateNEW(X, theta) # returns a1, z2, a2, ...
    # initialise Delta
Delta=[]
for th in theta:
Delta.append(np.zeros(th.shape))
for i in range(m):
        # a and z have already been computed above
        for l in range(L,1,-1): # l = 3, 2 (layer indices); the final layer's delta is handled separately
            # the final (output) layer
            if l==L:
                delta=alist[-1][i,:]-y_onehot[i,:] # δ for the final layer
                Delta[l-2] = Delta[l-2] + alist[l-2][i,:].T * delta
            else:
                zl = zlist[l-2][i,:]
                zl = np.insert(zl, 0, values=np.ones(1)) # (1, 26) add the bias term
                # d2t = np.multiply((theta2.T * d3t.T).T, sigmoid_gradient(z2t)) # (1, 26)
                # delta1 = delta1 + (d2t[:,1:]).T * a1t
                delta = np.multiply(delta*theta[l-1].T, sigmoid_gradient(zl)) #
                # arrays are zero-indexed: Delta is indexed from layer 1 while delta starts at layer 2 # (25, 401) # (10, 26)
                Delta[l-2] = Delta[l-2] + alist[l-2][i,:].T * delta[:,1:]
# add the gradient regularization term
gradAll = None
for j in range(len(Delta)):
Delta[j][:,1:] = Delta[j][:,1:]/m + (theta[j][:,1:] * learning_rate) / m
if gradAll is None:
gradAll = np.ravel(Delta[j])
else:
tmp=np.ravel(Delta[j])
gradAll = np.concatenate([gradAll,tmp])
# Delta[:,:,1:] = Delta[:,:,1:] + (theta[:,:,1:] * learning_rate) / m
return gradAll
grad2= backpropRegSelf(input_size, hidden_size, num_labels, X, y, 1)
print(grad2.shape)
def backpropReg(params, input_size, hidden_size, num_labels, X, y, learning_rate):
m = X.shape[0]
X = np.matrix(X)
y = np.matrix(y)
# reshape the parameter array into parameter matrices for each layer
theta1 = np.matrix(np.reshape(params[:hidden_size * (input_size + 1)], (hidden_size, (input_size + 1))))
theta2 = np.matrix(np.reshape(params[hidden_size * (input_size + 1):], (num_labels, (hidden_size + 1))))
# run the feed-forward pass
a1, z2, a2, z3, h = forward_propagate(X, theta1, theta2)
# initializations
J = 0
delta1 = np.zeros(theta1.shape) # (25, 401)
delta2 = np.zeros(theta2.shape) # (10, 26)
# compute the cost
for i in range(m):
first_term = np.multiply(-y[i,:], np.log(h[i,:]))
second_term = np.multiply((1 - y[i,:]), np.log(1 - h[i,:]))
J += np.sum(first_term - second_term)
J = J / m
# add the cost regularization term
J += (float(learning_rate) / (2 * m)) * (np.sum(np.power(theta1[:,1:], 2)) + np.sum(np.power(theta2[:,1:], 2)))
# perform backpropagation
for t in range(m):
a1t = a1[t,:] # (1, 401)
z2t = z2[t,:] # (1, 25)
a2t = a2[t,:] # (1, 26)
ht = h[t,:] # (1, 10)
yt = y[t,:] # (1, 10)
d3t = ht - yt # (1, 10)
z2t = np.insert(z2t, 0, values=np.ones(1)) # (1, 26)
d2t = np.multiply((theta2.T * d3t.T).T, sigmoid_gradient(z2t)) # (1, 26)
delta1 = delta1 + (d2t[:,1:]).T * a1t
delta2 = delta2 + d3t.T * a2t
delta1 = delta1 / m
delta2 = delta2 / m
# add the gradient regularization term
delta1[:,1:] = delta1[:,1:] + (theta1[:,1:] * learning_rate) / m
delta2[:,1:] = delta2[:,1:] + (theta2[:,1:] * learning_rate) / m
# unravel the gradient matrices into a single array
grad = np.concatenate((np.ravel(delta1), np.ravel(delta2)))
return J, grad
# np.random.random(size) returns an array of `size` random floats in [0, 1)
params = (np.random.random(size=hidden_size * (input_size + 1) + num_labels * (hidden_size + 1)) - 0.5) * 0.24
j,grad = backpropReg(params, input_size, hidden_size, num_labels, X, y, 1)
print(j,grad.shape)
# j2,grad2= backpropRegSelf(input_size, hidden_size, num_labels, X, y, 1)
# print(j2,grad2[0:10])
Explanation: Backpropagation
The backpropagation procedure is: given the training set, first run forward propagation, then for each node in each layer compute an error term that measures how much that node "contributed" to the error of the final output. For each output node we can compute the difference between the output value and the target value directly, and define it as δ. For each hidden node, the error is computed from the existing weights and the errors of layer (l+1).
The steps are:
Randomly initialise the weights theta
Implement the forward pass so that h(xi) can be obtained for any xi
Implement the cost function Jθ
End of explanation
# J(θ), used for gradient checking
# input_size = 400 # number of input units
# hidden_size = 25 # number of hidden units
# num_labels = 10 # number of output units
def jcost(X, y,input_size, hidden_size, output_size,theta):
m = X.shape[0]
X = np.matrix(X)
y = np.matrix(y)
theta1 = np.reshape(theta[0:hidden_size*(input_size+1)],(hidden_size,input_size+1))#(25,401)
theta2 = np.reshape(theta[hidden_size*(input_size+1):],(output_size,hidden_size+1))#(10.26)
_,_,_,_,h=forward_propagate(X,theta1,theta2)
    # np.multiply multiplies same-sized matrices element-wise
first = np.multiply(y,np.log(h))
second = np.multiply((1-y),np.log((1-h)))
J= np.sum(first+second)
J = (-1/m)*J
return J
def check(X,y,theta1,theta2,eps):
theta = np.concatenate((np.ravel(theta1), np.ravel(theta2)))
gradapprox=np.zeros(len(theta))
for i in range(len(theta)):
        # work on copies so the original parameter vector is not modified in place
        thetaplus = theta.copy()
        thetaplus[i] = thetaplus[i] + eps
        thetaminus = theta.copy()
        thetaminus[i] = thetaminus[i] - eps
        gradapprox[i] = (jcost(X,y,input_size,hidden_size,num_labels,thetaplus) - jcost(X,y,input_size,hidden_size,num_labels,thetaminus)) / (2 * eps)
return gradapprox
# theta01.shape , theta02.shape
# this computation is very slow
gradapprox = check(X,y_onehot,theta1,theta2,0.001)
numerator = np.linalg.norm(grad2-gradapprox, ord=2) # Step 1'
denominator = np.linalg.norm(grad2, ord=2) + np.linalg.norm(gradapprox, ord=2) # Step 2'
difference = numerator / denominator
print(difference)
# Use an optimisation library to compute the best-fitting parameters
from scipy.optimize import minimize
# opt.fmin_tnc(func=cost, x0=theta, fprime=gradient, args=(X, y))
learning_rate = 1
fmin = minimize(fun=backpropReg, x0=(params), args=(input_size, hidden_size, num_labels, X, y_onehot, learning_rate),
method='TNC', jac=True, options={'maxiter': 250})
fmin
X = np.matrix(X)
thetafinal1 = np.matrix(np.reshape(fmin.x[:hidden_size * (input_size + 1)], (hidden_size, (input_size + 1))))
thetafinal2 = np.matrix(np.reshape(fmin.x[hidden_size * (input_size + 1):], (num_labels, (hidden_size + 1))))
print(thetafinal1[0,1],grad2[1])
# Compute the predictions obtained with the optimised θ
a1, z2, a2, z3, h = forward_propagate(X, thetafinal1, thetafinal2 )
y_pred = np.array(np.argmax(h, axis=1) + 1)
y_pred
# Finally, we can compute the accuracy to see how well the trained network performs.
# Compare the predicted values with the actual values
from sklearn.metrics import classification_report # this module produces an evaluation report
print(classification_report(y, y_pred))
hidden_layer = thetafinal1[:, 1:]
hidden_layer.shape
fig, ax_array = plt.subplots(nrows=5, ncols=5, sharey=True, sharex=True, figsize=(12, 12))
for r in range(5):
for c in range(5):
ax_array[r, c].matshow(np.array(hidden_layer[5 * r + c].reshape((20, 20))),cmap=matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
Explanation: Gradient checking
The gradient is estimated by taking two points very close together along the tangent direction of the cost function and averaging them. That is, for a particular $\theta$ we evaluate the cost at $\theta-\varepsilon$ and at $\theta+\varepsilon$ (where $\varepsilon$ is a very small value, typically 0.001) and average the two costs to estimate the gradient at $\theta$.
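As a minimal, self-contained illustration of the idea (a sketch only, independent of the check() function above; J stands for any scalar cost function of a single parameter):
def numerical_gradient_example(J, theta, eps=1e-3):
    return (J(theta + eps) - J(theta - eps)) / (2 * eps)
# e.g. the derivative of x**2 at x = 3 is approximately 6
numerical_gradient_example(lambda x: x ** 2, 3.0)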
End of explanation |
5 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like translations.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with language and words, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. This weight matrix is usually called the embedding matrix or embedding look-up table. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
Step8: Embedding
The embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the one-hot vector with the embedding matrix, you end up selecting only one row out of the entire matrix
Step9: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. | Python Code:
import time
import random  # needed for random.sample in the validation cell below
import numpy as np
import tensorflow as tf
import utils
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like translations.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with language and words, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
Explanation: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
## Your code here
train_words = # The final subsampled word list
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
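One possible implementation is sketched below; it is only a sketch and may differ from the solution referenced above, and the threshold t = 1e-5 follows the Mikolov et al. paper but is a tunable assumption:
from collections import Counter

t = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count / total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(t / freqs[word]) for word in word_counts}
train_words = [word for word in int_words if np.random.random() > p_drop[word]]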
End of explanation
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
return
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
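One way this could look is sketched below; the name get_target_example is used so it does not collide with your own implementation, and the uniform choice of R follows the Mikolov et al. quote above:
def get_target_example(words, idx, window_size=5):
    R = np.random.randint(1, window_size + 1)
    start = max(idx - R, 0)
    stop = idx + R
    target_words = set(words[start:idx] + words[idx + 1:stop + 1])
    return list(target_words)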
End of explanation
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
End of explanation
train_graph = tf.Graph()
with train_graph.as_default():
    inputs = tf.placeholder(tf.int32, [None])
labels = tf.placeholder(tf.int32, [None, 1])
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. This weight matrix is usually called the embedding matrix or embedding look-up table. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
    embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) # create embedding weight matrix here
embed = tf.nn.embedding_lookup(embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the one-hot vector with the embedding matrix, you end up selecting only one row out of the entire matrix:
You don't actually need to do the matrix multiplication, you just need to select the row in the embedding matrix that corresponds to the input word. Then, the embedding matrix becomes a lookup table, you're looking up a vector the size of the hidden layer that represents the input word.
<img src="assets/word2vec_weight_matrix_lookup_table.png" width=500>
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform. This TensorFlow tutorial will help if you get stuck.
End of explanation
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
    softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) # create softmax weight matrix here
    softmax_b = tf.Variable(tf.zeros(n_vocab)) # create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab, name='sampled_softmax_loss')
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation |
6 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The goal of this post is to investigate if it is possible to query the NGDC CSW Catalog to extract records matching an IOOS RA acronym, like SECOORA for example.
In the cell above we do the usual
Step1: We need a list of all the Regional Associations we know.
Step2: To streamline the query we can create a function that instantiates the fes filter and returns the records.
Step3: I would not trust those numbers completely.
Surely some of the RA listed above have more than 0/1 record.
Note that we have more information in the csw.records.
Let's inspect one of SECOORA's stations for example.
Step4: We can verify the station type, title, and last date of modification.
Step5: The subjects field contains the variables and some useful keywords.
Step6: And we can access the full XML description for the station.
Step7: This query is very simple, but also very powerful.
We can quickly assess the data available for a certain Regional Association with just a few lines of code.
You can see the original notebook here. | Python Code:
from owslib.csw import CatalogueServiceWeb
endpoint = 'http://www.ngdc.noaa.gov/geoportal/csw'
csw = CatalogueServiceWeb(endpoint, timeout=30)
Explanation: The goal of this post is to investigate if it is possible to query the NGDC CSW Catalog to extract records matching an IOOS RA acronym, like SECOORA for example.
In the cell above we do the usual: instantiate a Catalogue Service Web (csw) using the NGDC catalog endpoint.
End of explanation
ioos_ras = ['AOOS', # Alaska
'CaRA', # Caribbean
'CeNCOOS', # Central and Northern California
'GCOOS', # Gulf of Mexico
'GLOS', # Great Lakes
'MARACOOS', # Mid-Atlantic
'NANOOS', # Pacific Northwest
'NERACOOS', # Northeast Atlantic
'PacIOOS', # Pacific Islands
'SCCOOS', # Southern California
'SECOORA'] # Southeast Atlantic
Explanation: We need a list of all the Regional Associations we know.
End of explanation
from owslib.fes import PropertyIsEqualTo
def query_ra(csw, ra='SECOORA'):
q = PropertyIsEqualTo(propertyname='apiso:Keywords', literal=ra)
csw.getrecords2(constraints=[q], maxrecords=100, esn='full')
return csw
for ra in ioos_ras:
csw = query_ra(csw, ra)
ret = csw.results['returned']
word = 'records' if ret > 1 else 'record'
print("{0:>8} has {1:>3} {2}".format(ra, ret, word))
csw.records.clear()
Explanation: To streamline the query we can create a function that instantiates the fes filter and returns the records.
End of explanation
csw = query_ra(csw, 'SECOORA')
key = list(csw.records.keys())[0]
print(key)
Explanation: I would not trust those numbers completely.
Surely some of the RA listed above have more than 0/1 record.
Note that we have more information in the csw.records.
Let's inspect one of SECOORA's stations for example.
End of explanation
station = csw.records[key]
station.type, station.title, station.modified
Explanation: We can verify the station type, title, and last date of modification.
End of explanation
station.subjects
Explanation: The subjects field contains the variables and some useful keywords.
End of explanation
print(station.xml)
Explanation: And we can access the full XML description for the station.
End of explanation
Explanation: This query is very simple, but also very powerful.
We can quickly assess the data available for a certain Regional Association with just a few lines of code.
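For example, the same machinery composes with other fes filters; a rough sketch (the bounding box values are arbitrary placeholders, and fes.And/fes.BBox are assumed to be available in this version of owslib):
from owslib import fes
bbox = fes.BBox([-87.40, 24.25, -74.70, 36.70])
kw = fes.PropertyIsEqualTo(propertyname='apiso:Keywords', literal='SECOORA')
csw.getrecords2(constraints=[fes.And([kw, bbox])], maxrecords=100, esn='full')
print(csw.results['returned'])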
You can see the original notebook here.
End of explanation |
7 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
One Dimensional Visualisation
Data from https
Step1: Assign column headers to the dataframe
Step2: Refine the Data
Step3: Clean Rows & Columns
Let's start by dropping redundant columns - in the airports data frame, we don't need type or source
Step4: Let's start by dropping redundant rows - in the airlines data frame, we don't need id = -1
Step5: Check for Consistency
All routes have an airline_id which is in the airline dataset
All routes have a source_id and dest_id which are in the airport dataset
Step6: Remove missing values
Lets remove routes where there is no airline_id provided to us | Python Code:
import pandas as pd
# Read in the airports data.
airports = pd.read_csv("../data/airports.dat.txt", header=None, na_values=['\\N'], dtype=str)
# Read in the airlines data.
airlines = pd.read_csv("../data/airlines.dat.txt", header=None, na_values=['\\N'], dtype=str)
# Read in the routes data.
routes = pd.read_csv("../data/routes.dat.txt", header=None, na_values=['\\N'], dtype=str)
Explanation: One Dimensional Visualisation
Data from https://openflights.org/data.html
Airports
Airlines
Routes
Airports
Airport ID: Unique OpenFlights identifier for this airport.
Name: Name of airport. May or may not contain the City name.
City: Main city served by airport. May be spelled differently from Name.
Country: Country or territory where airport is located. See countries.dat to cross-reference to ISO 3166-1 codes.
IATA: 3-letter IATA code. Null if not assigned/unknown.
ICAO: 4-letter ICAO code. Null if not assigned.
Latitude: Decimal degrees, usually to six significant digits. Negative is South, positive is North.
Longitude: Decimal degrees, usually to six significant digits. Negative is West, positive is East.
*Altitude: In feet.
Timezone: Hours offset from UTC. Fractional hours are expressed as decimals, eg. India is 5.5.
DST: Daylight savings time. One of E (Europe), A (US/Canada), S (South America), O (Australia), Z (New Zealand), N (None) or U (Unknown). See also: Help: Time
Tz: database time zone Timezone in "tz" (Olson) format, eg. "America/Los_Angeles".
Type: Type of the airport. Value "airport" for air terminals, "station" for train stations, "port" for ferry terminals and "unknown" if not known. In airports.csv, only type=airport is included.
Source: "OurAirports" for data sourced from OurAirports, "Legacy" for old data not matched to OurAirports (mostly DAFIF), "User" for unverified user contributions. In airports.csv, only source=OurAirports is included.
Airlines
Airline ID: Unique OpenFlights identifier for this airline.
Name: Name of the airline.
Alias: Alias of the airline. For example, All Nippon Airways is commonly known as "ANA".
IATA: 2-letter IATA code, if available.
ICAO: 3-letter ICAO code, if available.
Callsign: Airline callsign.
Country: Country or territory where airline is incorporated.
Active: "Y" if the airline is or has until recently been operational, "N" if it is defunct. This field is not reliable: in particular, major airlines that stopped flying long ago, but have not had their IATA code reassigned (eg. Ansett/AN), will incorrectly show as "Y"
Routes
Airline: 2-letter (IATA) or 3-letter (ICAO) code of the airline.
Airline ID: Unique OpenFlights identifier for airline (see Airline).
Source airport: 3-letter (IATA) or 4-letter (ICAO) code of the source airport.
Source airport ID: Unique OpenFlights identifier for source airport (see Airport)
Destination airport: 3-letter (IATA) or 4-letter (ICAO) code of the destination airport.
Destination airport ID: Unique OpenFlights identifier for destination airport (see Airport)
Codeshare "Y" if this flight is a codeshare (that is, not operated by Airline, but another carrier), empty otherwise.
Stops: Number of stops on this flight ("0" for direct)
Equipment: 3-letter codes for plane type(s) generally used on this flight, separated by spaces
Acquire the Data
End of explanation
airports.columns = ["id", "name", "city", "country", "code", "icao", "latitude",
"longitude", "altitude", "offset", "dst", "timezone", "type", "source"]
airlines.columns = ["id", "name", "alias", "iata", "icao", "callsign", "country", "active"]
routes.columns = ["airline", "airline_id", "source", "source_id", "dest",
"dest_id", "codeshare", "stops", "equipment"]
airports.head()
airlines.head()
routes.head()
Explanation: Assign column headers to the dataframe
End of explanation
airports.head()
Explanation: Refine the Data
End of explanation
airports.drop(['type', 'source'], axis=1, inplace=True)
airports.head()
airports.shape
Explanation: Clean Rows & Columns
Let's start by dropping redundant columns - in the airports data frame, we don't need type or source
End of explanation
airlines.drop(0, axis=0, inplace=True)
airlines.shape
airlines.head()
Explanation: Let's start by dropping redundant rows - in the airlines data frame, we don't need id = -1
End of explanation
def checkConsistency (s1, s2):
true_count = s1.isin(s2).sum()
total_count = s1.count()
consistency = true_count / total_count
return consistency
checkConsistency(routes.airline_id, airlines.id)
checkConsistency(routes.source_id, airports.id)
checkConsistency(routes.dest_id, airports.id)
Explanation: Check for Consistency
All routes have an airline_id which is in the airline dataset
All routes have a source_id and dest_id which are in the airport dataset
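If any of these checks came back below 1.0, the offending rows could be inspected with boolean indexing, for example:
inconsistent_routes = routes[~routes.airline_id.isin(airlines.id)]
inconsistent_routes.head()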
End of explanation
import missingno as msno
%matplotlib inline
msno.matrix(airlines)
msno.matrix(airports)
routes[routes["airline_id"] == "\\N"].count()
routes = routes[routes["airline_id"] != "\\N"]
routes.shape
Explanation: Remove missing values
Let's remove routes where there is no airline_id provided to us
End of explanation |
8 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="images/JHI_STRAP_Web.png" style="width
Step1: Microarray data <a id="microarray_data"></a>
<div class="alert alert-warning">
Raw array data was previously converted to plain text comma-separated variable format from two `Excel` files
Step2: Import array data <a id="import_data"></a>
Step3: <div class="alert alert-warning">
We reduce the full dataset to only the raw intensity values. We also rename the columns in each of the `control` and `treatment` dataframes.
</div>
In both control and treatment datasets, the mapping of experimental samples (input and output) across the three replicates is
Step4: Data QA <a id="data_qa"></a>
We expect that there is good agreement between input and output raw intensities for each replicate control or treatment experiment. We also expect that there should be good agreement across replicates within the controls, and within the treatment. We inspect this agreement visually with a matrix of scatterplots, below.
The plot_correlation() function can be found in the accompanying tools.py module.
Step5: There is good visual correlation between the intensities for the control arrays, and the Spearman's R values also indicate good correlation.
Step6: There is - mostly - good visual correlation between the intensities for the treatment arrays, and the Spearman's R values also indicate good correlation. There appear to be three problematic probes in replicate 3 that we may need to deal with in the data cleanup.
<div class="alert alert-success">
<b>Taken together, these plots indicate
Step7: Interpolating values for problem probes <a id="interpolation"></a>
We replace the three clear outlying values for the three problematic probes in input.3 of the treatment array with interpolated values. We assume that input.1 and input.2 are typical of the input intensities for these three probes, and take the average of their values to substitute for input.3 for each.
Step8: We can visualise the change in correlation for the treatment dataframe that results
Step9: Normalisation <a id="normalisation"></a>
We expect the array intensity distribution to vary according to whether the sample was from the input (strong) or output (weak) set, and whether the sample came from the control or treatment pools. We therefore divide the dataset into four independently-normalised components
Step10: We visualise the resulting distributions, in violin plots
Step11: <div class="alert-success">
These plots illustrate that there is relative reduction in measured array intensity between control and treatment arrays for both the input and output arrays.
</div>
Wide to long form <a id="wide_to_long"></a>
We have four dataframes containing normalised data
Step12: Long form data has some advantages for melting into new arrangements for visualisation, analysis, and incorporation of new data. For instance, we can visualise the distributions of input and output log intensities against each other, as below
Step13: <div class="alert-success">
This visualisation again shows that treatment intensities are generally lower than control intensities, but also suggests that the bulk of output intensities are lower than input intensities.
<br /><br />
There is a population of low-intensity values for each set of arrays, however. These appear to have a slight increase in intensity in the output, compared to input arrays.
</div>
Probe matches to Sakai and DH10B <a id="probe_matches"></a>
<div class="alert-warning">
Evidence for potential hybridisation of probes to DH10B or Sakai isolates was determined by default `BLASTN` query of each probe sequence against chromosome and plasmid feature nucleotide sequences from the NCBI records
Step14: We then add parent gene annotations to the unique probes
Step15: <div class="alert-danger">
We will certainly be interested in probes that hybridise unambiguously to Sakai or to DH10B. The [array was however designed to report on several *E. coli* isolates](http
Step16: <div class="alert-success">
This leaves us with a dataset comprising
Step17: Write data <a id="write"></a>
<div class="alert-warning">
<b>We write the censored, normalised, long-format data to the `datasets/` subdirectory.</b>
</div>
Step18: For modelling with Stan, we assign indexes for common probe ID, locus tag, and array (combination of replicate and treatment) to each probe, before writing out the complete dataset.
Step19: For testing, we want to create two data subsets, one containing a reduced number of probes, and one with a reduced number of genes/locus tags. | Python Code:
%pylab inline
import os
import random
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import scipy
import seaborn as sns
from Bio import SeqIO
import tools
Explanation: <img src="images/JHI_STRAP_Web.png" style="width: 150px; float: right;">
Supplementary Information: Holmes et al. 2020
1. Data cleaning, normalisation and quality assurance
This notebook describes raw data import, cleaning, and QA, then writing out of processed data to the data/ subdirectory, for use in model fitting.
Table of Contents
Microarray data
Import array data
Data QA
Problematic probes
Interpolation for problematic probes
Normalisation
Wide to long form
Probe matches to Sakai and DH10B
Write data
Python imports
End of explanation
# Input array data filepaths
controlarrayfile = os.path.join('..', 'data', 'control_unix_endings_flags.csv') # control experiment array data (preprocessed)
treatmentarrayfile = os.path.join('..', 'data', 'treatment_unix_endings.csv') # treatment experiment array data (preprocessed)
Explanation: Microarray data <a id="microarray_data"></a>
<div class="alert alert-warning">
Raw array data was previously converted to plain text comma-separated variable format from two `Excel` files:
<ul>
<li> The file `AH alldata 12082013.xlsx` was converted to `data/treatment_unix_endings.csv`
<li> The file `AH alldata expt1 flagged 05092013.xlsx` was converted to `data/control_unix_endings_flags.csv`
</ul>
</div>
These describe microarray results for samples that underwent two treatments:
in vitro growth only - i.e. control: data/control_unix_endings_flags.csv
in vitro growth and plant passage - i.e. treatment: data/treatment_unix_endings.csv
End of explanation
control = pd.read_csv(controlarrayfile, sep=',', skiprows=4, index_col=0)
treatment = pd.read_csv(treatmentarrayfile, sep=',', skiprows=4, index_col=0)
# Uncomment the lines below to inspect the first few rows of each dataframe
#control.head()
#treatment.head()
len(control)
Explanation: Import array data <a id="import_data"></a>
End of explanation
colnames_in = ['Raw', 'Raw.1', 'Raw.2', 'Raw.3', 'Raw.4', 'Raw.5'] # raw data columns
colnames_out = ['input.1', 'output.1', 'input.2', 'output.2', 'input.3', 'output.3'] # renamed raw data columns
# Reduce control and treatment arrays to raw data columns only
control = control[colnames_in]
control.columns = colnames_out
treatment = treatment[colnames_in]
treatment.columns = colnames_out
Explanation: <div class="alert alert-warning">
We reduce the full dataset to only the raw intensity values. We also rename the columns in each of the `control` and `treatment` dataframes.
</div>
In both control and treatment datasets, the mapping of experimental samples (input and output) across the three replicates is:
replicate 1 input: Raw $\rightarrow$ input.1
replicate 1 output: Raw.1 $\rightarrow$ output.1
replicate 2 input: Raw.2 $\rightarrow$ input.2
replicate 2 output: Raw.3 $\rightarrow$ output.2
replicate 3 input: Raw.4 $\rightarrow$ input.3
replicate 3 output: Raw.5 $\rightarrow$ output.3
End of explanation
# Plot correlations for control data
tools.plot_correlation(control);
Explanation: Data QA <a id="data_qa"></a>
We expect that there is good agreement between input and output raw intensities for each replicate control or treatment experiment. We also expect that there should be good agreement across replicates within the controls, and within the treatment. We inspect this agreement visually with a matrix of scatterplots, below.
The plot_correlation() function can be found in the accompanying tools.py module.
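The exact plotting code lives in the accompanying `tools.py` and is not reproduced in this notebook. As a rough illustration only (not the actual `plot_correlation`), a scatterplot matrix with Spearman's R annotated could be sketched as:
```
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats

def plot_correlation_sketch(df):
    """Scatterplot matrix with Spearman's R annotated (illustrative sketch only)."""
    def annotate_spearman(x, y, **kwargs):
        r, _ = stats.spearmanr(x, y)
        plt.gca().annotate("Spearman r = {:.2f}".format(r),
                           xy=(0.1, 0.5), xycoords="axes fraction")
    g = sns.PairGrid(df)
    g.map_lower(plt.scatter, s=2, alpha=0.2)  # pairwise scatterplots
    g.map_diag(sns.kdeplot)                   # per-column intensity distributions
    g.map_upper(annotate_spearman)            # Spearman's R in the upper triangle
    return g
```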
End of explanation
# Plot correlations for treatment data
tools.plot_correlation(treatment);
Explanation: There is good visual correlation between the intensities for the control arrays, and the Spearman's R values also indicate good correlation.
End of explanation
# Select outlying treatment input.3 values
treatment.loc[treatment['input.3'] > 4e4]
# Define problem probes:
problem_probes = list(treatment.loc[treatment['input.3'] > 4e4].index)
Explanation: There is - mostly - good visual correlation between the intensities for the treatment arrays, and the Spearman's R values also indicate good correlation. There appear to be three problematic probes in replicate 3 that we may need to deal with in the data cleanup.
<div class="alert alert-success">
<b>Taken together, these plots indicate:</b>
<ul>
<li> the intensities of the control arrays are systematically larger than intensities for the treatment arrays, suggesting that the effects of noise may be proportionally greater for the treatment arrays. This might be a concern for reliably inferring enrichment or depletion in the treatment.
<li> the control arrays are good candidates for quantile normalisation (QN; $r > 0.95$, with similar density distributions)
<li> the treatment array `input.3` dataset is potentially problematic, due to three treatment probe datapoints with intensities greater than 40,000 units having large leverage.
</ul>
</div>
Problematic probes <a id="problem_probes"></a>
<div class="alert-warning">
We can readily identify problematic probes in treatment replicate 3, as they are the only probes with intensity greater than 40,000.
The problematic probes are:
<ul>
<li> <code>A_07_P000070</code>
<li> <code>A_07_P061472</code>
<li> <code>A_07_P052489</code>
</ul>
</div>
End of explanation
# Interpolate values
treatment.set_value(index=problem_probes, col='input.3',
value=treatment.loc[problem_probes][['input.1', 'input.2']].mean(1))
treatment.loc[problem_probes]
Explanation: Interpolating values for problem probes <a id="interpolation"></a>
We replace the three clear outlying values for the three problematic probes in input.3 of the treatment array with interpolated values. We assume that input.1 and input.2 are typical of the input intensities for these three probes, and take the average of their values to substitute for input.3 for each.
End of explanation
# Plot correlations for treatment data
tools.plot_correlation(treatment);
Explanation: We can visualise the change in correlation for the treatment dataframe that results:
End of explanation
input_cols = ['input.1', 'input.2', 'input.3'] # input columns
output_cols = ['output.1', 'output.2', 'output.3'] # output columns
# Normalise inputs and outputs for control and treatment separately
control_input = tools.quantile_norm(control, columns=input_cols)
control_output = tools.quantile_norm(control, columns=output_cols)
treatment_input = tools.quantile_norm(treatment, columns=input_cols)
treatment_output = tools.quantile_norm(treatment, columns=output_cols)
Explanation: Normalisation <a id="normalisation"></a>
We expect the array intensity distribution to vary according to whether the sample was from the input (strong) or output (weak) set, and whether the sample came from the control or treatment pools. We therefore divide the dataset into four independently-normalised components:
control_input
control_output
treatment_input
treatment_output
<br /><div class="alert-success">
We have established that because the input and output arrays in both control and treatment conditions have strong correlation across all intensities, and have similar intensity distributions, we are justified in using quantile (mean) normalisation.
</div>
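The `tools.quantile_norm` helper itself is not shown in this notebook; as a sketch of the idea only (not the actual implementation), quantile (mean) normalisation of a set of columns can be written as:
```
import pandas as pd

def quantile_norm_sketch(df, columns):
    """Replace each value by the mean of the values that share its rank across the columns."""
    sub = df[columns]
    rank_means = sub.stack().groupby(sub.rank(method="first").stack().astype(int)).mean()
    return sub.rank(method="min").stack().astype(int).map(rank_means).unstack()
```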
End of explanation
# Make violinplots of normalised data
tools.plot_normalised(control_input, control_output,
treatment_input, treatment_output)
Explanation: We visualise the resulting distributions, in violin plots:
End of explanation
# Convert data from wide to long form
data = tools.wide_to_long(control_input, control_output,
treatment_input, treatment_output)
data.head()
Explanation: <div class="alert-success">
These plots illustrate that there is relative reduction in measured array intensity between control and treatment arrays for both the input and output arrays.
</div>
Wide to long form <a id="wide_to_long"></a>
We have four dataframes containing normalised data:
control_input
control_output
treatment_input
treatment_output
Each dataframe is indexed by the array probe systematic name, with three columns that correspond to replicates 1, 2, and 3 for either a control or a treatment run. For downstream analysis we want to organise this data as the following columns:
index: unique ID
probe: probe name (these apply across treatment/control and input/output)
input: normalised input intensity value (for a particular probe and replicate)
output: normalised output intensity value (for a particular probe and replicate)
treatment: 0/1 indicating whether the measurement was made for the control or treatment sample
replicate: 1, 2, 3 indicating which replicate the measurement was made from
<br /><div class="alert-warning">
We will add other columns with relevant data later, and to enable this, we convert the control and treatment data frames from wide (e.g. input.1, input.2, input.3 columns) to long (e.g. probe, input, output, replicate) form - once for the control data, and once for the treatment data. We match on a multi-index of probe and replicate.
</div>
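As a sketch of that conversion (illustrative only; the real logic is in `tools.wide_to_long`, and this version assumes a single wide dataframe carrying both the `input.N` and `output.N` columns), `pandas.wide_to_long` can do the reshaping:
```
import pandas as pd

def wide_to_long_sketch(wide_df, treatment_flag):
    """Melt input.1..3 / output.1..3 columns into probe, replicate, input, output rows."""
    wide = wide_df.rename_axis("probe").reset_index()
    long_df = pd.wide_to_long(wide, stubnames=["input", "output"],
                              i="probe", j="replicate", sep=".").reset_index()
    long_df["treatment"] = treatment_flag  # 0 = control, 1 = treatment
    return long_df
```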
End of explanation
# Visualise input v output distributions
tools.plot_input_output_violin(data)
Explanation: Long form data has some advantages for melting into new arrangements for visualisation, analysis, and incorporation of new data. For instance, we can visualise the distributions of input and output log intensities against each other, as below:
End of explanation
# BLASTN results files
sakai_blastfile = os.path.join('..', 'data', 'probes_blastn_sakai.tab')
dh10b_blastfile = os.path.join('..', 'data', 'probes_blastn_dh10b.tab')
# Obtain a dataframe of unique probes and their BLASTN matches
unique_probe_hits = tools.unique_probe_matches((sakai_blastfile, dh10b_blastfile))
Explanation: <div class="alert-success">
This visualisation again shows that treatment intensities are generally lower than control intensities, but also suggests that the bulk of output intensities are lower than input intensities.
<br /><br />
There is a population of low-intensity values for each set of arrays, however. These appear to have a slight increase in intensity in the output, compared to input arrays.
</div>
Probe matches to Sakai and DH10B <a id="probe_matches"></a>
<div class="alert-warning">
Evidence for potential hybridisation of probes to DH10B or Sakai isolates was determined by default `BLASTN` query of each probe sequence against chromosome and plasmid feature nucleotide sequences from the NCBI records:
<ul>
<li> `GCF_000019425.1_ASM1942v1_cds_from_genomic.fna`
<li> `GCF_000008865.1_ASM886v1_cds_from_genomic.fna`
</ul>
</div>
$ blastn -query Array/probe_seqlist.fas -subject Sakai/GCF_000008865.1_ASM886v1_cds_from_genomic.fna -outfmt 6 -out probes_blastn_sakai.tab -perc_identity 100
$ blastn -query Array/probe_seqlist.fas -subject DH10B/GCF_000019425.1_ASM1942v1_cds_from_genomic.fna -outfmt 6 -out probes_blastn_dh10b.tab -perc_identity 100
We first identify the probes that match uniquely at 100% identity to a single E. coli gene product from either Sakai or DH10B
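One way such a filter could look (a sketch only: it assumes the default 12-column `-outfmt 6` layout and the `probe`/`match` column names used later, and is not the actual `tools.unique_probe_matches`):
```
import pandas as pd

BLAST_FIELDS = ["probe", "match", "pident", "length", "mismatch", "gapopen",
                "qstart", "qend", "sstart", "send", "evalue", "bitscore"]

def unique_probe_matches_sketch(blastfiles):
    """Keep only probes with exactly one BLASTN hit across all result files."""
    hits = pd.concat([pd.read_csv(fname, sep="\t", names=BLAST_FIELDS)
                      for fname in blastfiles])
    hit_counts = hits.groupby("probe").size()
    unique_probes = hit_counts[hit_counts == 1].index
    return hits[hits["probe"].isin(unique_probes)].reset_index(drop=True)
```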
End of explanation
# Sequence data files
sakai_seqfile = os.path.join('..', 'data', 'Sakai', 'GCF_000008865.1_ASM886v1_cds_from_genomic.fna')
dh10b_seqfile = os.path.join('..', 'data', 'DH10B', 'GCF_000019425.1_ASM1942v1_cds_from_genomic.fna')
# Add locus tag information to each unique probe
unique_probe_hits = tools.annotate_seqdata(unique_probe_hits, (sakai_seqfile, dh10b_seqfile))
Explanation: We then add parent gene annotations to the unique probes:
End of explanation
censored_data = pd.merge(data, unique_probe_hits[['probe', 'match', 'locus_tag']],
how='inner', on='probe')
censored_data.head()
Explanation: <div class="alert-danger">
We will certainly be interested in probes that hybridise unambiguously to Sakai or to DH10B. The [array was however designed to report on several *E. coli* isolates](http://www.ebi.ac.uk/arrayexpress/arrays/A-GEOD-13359/?ref=E-GEOD-46455), and not all probes should be expected to hybridise, so we could consider the non-uniquely matching probes not to be of interest, and censor them.
<br /><br />
A strong reason to censor probes is that we will be estimating locus tag/gene-level treatment effects, on the basis of probe-level intensity measurements. Probes that may be reporting on multiple genes may mislead our model fit, and so are better excluded.
</div>
We exclude non-unique matching probes by performing an inner join between the data and unique_probe_hits dataframes.
End of explanation
# Visually inspect the effect of censoring on distribution
tools.plot_input_output_violin(censored_data)
Explanation: <div class="alert-success">
This leaves us with a dataset comprising:
<ul>
<li> 49872 datapoints (rows)
<li> 8312 unique probes
<li> 6084 unique locus tags
</ul>
</div>
As can be seen in the violin plot below, censoring the data in this way removes a large number of low-intensity probes from all datasets.
End of explanation
# Create output directory
outdir = 'datasets'
os.makedirs(outdir, exist_ok=True)
# Output files
full_dataset = os.path.join(outdir, "normalised_array_data.tab") # all censored data
reduced_probe_dataset = os.path.join(outdir, "reduced_probe_data.tab") # subset of data grouped by probe
reduced_locus_dataset = os.path.join(outdir, "reduced_locus_data.tab") # subset of data grouped by locus tag
Explanation: Write data <a id="write"></a>
<div class="alert-warning">
<b>We write the censored, normalised, long-format data to the `datasets/` subdirectory.</b>
</div>
End of explanation
# Index on probes
indexed_data = tools.index_column(censored_data, 'probe')
# Index on locus tags
indexed_data = tools.index_column(indexed_data, 'locus_tag')
# Index on array (replicate X treatment)
indexed_data = tools.index_column(indexed_data, 'repXtrt')
# Uncomment the line below to inspect the data
#indexed_data.head(20)
# Write the full dataset to file
indexed_data.to_csv(full_dataset, sep="\t", index=False)
Explanation: For modelling with Stan, we assign indexes for common probe ID, locus tag, and array (combination of replicate and treatment) to each probe, before writing out the complete dataset.
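`tools.index_column` is not reproduced here; a plausible sketch (an assumption about its behaviour, not the actual helper) that adds a 1-based integer index for a column, as Stan expects, is:
```
import pandas as pd

def index_column_sketch(df, col):
    """Add an integer column '<col>_index' mapping each distinct value of `col` to 1..K."""
    indexed = df.copy()
    codes, _ = pd.factorize(indexed[col])
    indexed[col + "_index"] = codes + 1  # Stan indexes from 1, not 0
    return indexed
```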
End of explanation
# Reduced probe set
reduced_probes = tools.reduce_dataset(indexed_data, 'probe')
reduced_probes.to_csv(reduced_probe_dataset, sep="\t", index=False)
# Reduced locus tag set
reduced_lts = tools.reduce_dataset(indexed_data, 'locus_tag')
reduced_lts.to_csv(reduced_locus_dataset, sep="\t", index=False)
Explanation: For testing, we want to create two data subsets, one containing a reduced number of probes, and one with a reduced number of genes/locus tags.
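The reduction itself happens in `tools.reduce_dataset`; one simple way to build such a subset (a sketch with an arbitrarily chosen group count, not the actual helper) is:
```
import random
import pandas as pd

def reduce_dataset_sketch(df, col, n_groups=50, seed=42):
    """Keep only rows belonging to a random subset of the distinct values in `col`."""
    random.seed(seed)
    values = sorted(df[col].unique().tolist())
    keep = set(random.sample(values, min(n_groups, len(values))))
    return df[df[col].isin(keep)].reset_index(drop=True)
```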
End of explanation |
9 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id="navigation"></a>
Hi-C data analysis
Welcome to the Jupyter notebook dedicated to Hi-C data analysis. Here we will be working in interactive Python environment with some mixture of bash command line tools.
Here is the outline of what we are going to do
Step1: There are also other types of cells, for example, "Markdown". Double click this cell to view raw Markdown markup content.
You can define functions, classes, run pipelines and visualisations, run thousands of code lines inside a Jupyter cell.
But usually, it is convenient to write simple and clean blocks of code.
Note that behind this interactive notebook you have a regular Python session running. Thus Python variables are accessible only through your history of actions in the notebook. To create a variable, you have to execute the corresponding block of code. All your variables will be lost when you restart the kernel of the notebook.
You can pause or stop the kernel, save notebook (.ipynb) file, copy and insert cells via buttons in the toolbar. Please, take a look at these useful buttons.
Also, try pressing 'Esc' and then 'h'. You will see shortcuts help.
Jupyter notebook allows you to create "magical" cells. We will use %%bash, %%capture, %matplotlib. For example, %%bash magic makes it easier to access bash commands
Step2: If you are not sure about the function, class or variable then use its name with '?' at the end to get available documentation. Here is an example for common module numpy
Step3: OK, it seems that now we are ready to start our Hi-C data analysis! I've placed Go top shortcut for you in each section so that you can navigate quickly throughout the notebook.
<a id="mapping"></a>
1. Reads mapping
Go top
1.1 Input raw data
Hi-C results in paired-end sequencing, where each pair represents one possible contact. The analysis starts with raw sequencing data (.fastq files).
I've downloaded raw files from Flyamer et al. 2017 (GEO ID GSE80006) and placed them in the DATA/FASTQ/ directory.
We can view these files easily with bash help. Forward and reverse reads, correspondingly
Step4: 1.2 Genome
Now we have to map these reads to the genome of interest (Homo sapiens hg19 downloaded from UCSC in this case).
We are going to use only chromosome 1 to minimise computational time.
The genome is also pre-downloaded
Step5: For Hi-C data mapping we will use hiclib. It utilizes bowtie 2 read mapping software. Bowtie 2 indexes the genome prior to reads mapping in order to reduce memory usage. Usually, you have to run genome indexing, but I've already done this time-consuming step. That's why code for this step is included but commented.
Step6: 1.3 Iterative mapping
First of all, we need to import useful Python packages
Step7: Then we need to set some parameters and prepare our environment
Step8: Let's take a look at .sam files that were created during iterative mapping
Step9: 1.4 Making sense of mapping output
For each read length and orientation, we have a file. Now we need to merge them into the single dataset (.hdf5 file)
Step10: Let's take a look at the created file
Step11: <a id="filtering"></a>
2. Data filtering
Go top
The raw Hi-C data is mapped and interpreted, the next step is to filter out possible methodological artefacts
Step12: Nice visualisation of the data
Step13: <a id="binning"></a>
3. Data binning
Go top
The previous analysis involved interactions of restriction fragments, now we would like to work with interactions of genomic bins.
Step14: <a id="visualisation"></a>
4. Hi-C data visualisation
Go top
Let's take a look at the resulting heat maps.
Step15: <a id="correction"></a>
5. Iterative correction
Go top
The next typical step is data correction for unequal amplification and accessibility of genomic regions.
We will use iterative correction.
Step16: <a id="meta"></a>
7. Compartments and TADs
Go top
7.1 Comparison with compartments
Compartments can usually be found in whole-genome datasets, but we have only chromosome 1. Still, we can try to find some visual signs of compartments.
Step17: Seems to be nothing special with compartments. What if we had much better coverage by reads? Let's take a look at the dataset from Rao et al. 2014, GEO GSE63525, HIC069
Step18: 7.2 Topologically associating domains (TADs)
For TADs calling we will use lavaburst package. The code below is based on this example. | Python Code:
# This is regular Python comment inside Jupyter "Code" cell.
# You can easily run "Hello world" in the "Code" cell (focus on the cell and press Shift+Enter):
print("Hello world!")
Explanation: <a id="navigation"></a>
Hi-C data analysis
Welcome to the Jupyter notebook dedicated to Hi-C data analysis. Here we will be working in interactive Python environment with some mixture of bash command line tools.
Here is the outline of what we are going to do:
Notebook basics
Reads mapping
Data filtering
Binning
Hi-C data visualisation
Iterative correction
Compartments and TADs
If you have any questions, please, contact Aleksandra Galitsyna ([email protected])
<a id="basics"></a>
0. Notebook basics
If you are new to Python and Jupyter notebook, please, take a quick look through this small list of tips.
First of all, Jupyter notebook is organised in cells, which may contain text, comments and code blocks of any size.
End of explanation
%%bash
echo "Current directory is: "; pwd
echo "List of files in the current directory is: "; ls
Explanation: There are also other types of cells, for example, "Markdown". Double click this cell to view raw Markdown markup content.
You can define functions, classes, run pipelines and visualisations, run thousands of code lines inside a Jupyter cell.
But usually, it is convenient to write simple and clean blocks of code.
Note that behind this interactive notebook you have a regular Python session running. Thus Python variables are accessible only through your history of actions in the notebook. To create a variable, you have to execute the corresponding block of code. All your variables will be lost when you restart the kernel of the notebook.
You can pause or stop the kernel, save notebook (.ipynb) file, copy and insert cells via buttons in the toolbar. Please, take a look at these useful buttons.
Also, try pressing 'Esc' and then 'h'. You will see shortcuts help.
Jupyter notebook allows you to create "magical" cells. We will use %%bash, %%capture, %matplotlib. For example, %%bash magic makes it easier to access bash commands:
End of explanation
# Module import under custom name
import numpy as np
# You've started asking questions about it
np?
Explanation: If you are not sure about the function, class or variable then use its name with '?' at the end to get available documentation. Here is an example for common module numpy:
End of explanation
%%bash
head -n 8 '../DATA/FASTQ/K562_B-bulk_R1.fastq'
%%bash
head -n 8 '../DATA/FASTQ/K562_B-bulk_R2.fastq'
Explanation: OK, it seems that now we are ready to start our Hi-C data analysis! I've placed Go top shortcut for you in each section so that you can navigate quickly throughout the notebook.
<a id="mapping"></a>
1. Reads mapping
Go top
1.1 Input raw data
Hi-C results in paired-end sequencing, where each pair represents one possible contact. The analysis starts with raw sequencing data (.fastq files).
I've downloaded raw files from Flyamer et al. 2017 (GEO ID GSE80006) and placed them in the DATA/FASTQ/ directory.
We can view these files easily with bash help. Forward and reverse reads, correspondingly:
End of explanation
%%bash
ls ../GENOMES/HG19_FASTA
Explanation: 1.2 Genome
Now we have to map these reads to the genome of interest (Homo sapiens hg19 downloaded from UCSC in this case).
We are going to use only chromosome 1 to minimise computational time.
The genome is also pre-downloaded:
End of explanation
#%%bash
#bowtie2-build /home/jovyan/GENOMES/HG19_FASTA/chr1.fa /home/jovyan/GENOMES/HG19_IND/hg19_chr1
#Time consuming step
%%bash
ls ../GENOMES/HG19_IND
Explanation: For Hi-C data mapping we will use hiclib. It utilizes bowtie 2 read mapping software. Bowtie 2 indexes the genome prior to reads mapping in order to reduce memory usage. Usually, you have to run genome indexing, but I've already done this time-consuming step. That's why code for this step is included but commented.
End of explanation
import os
from hiclib import mapping
from mirnylib import h5dict, genome
Explanation: 1.3 Iterative mapping
First of all, we need to import useful Python packages:
End of explanation
%%bash
which bowtie2
# Bowtie 2 path
%%bash
pwd
# Current working directory path
# Setting parameters and environmental variables
bowtie_path = '/opt/conda/bin/bowtie2'
enzyme = 'DpnII'
bowtie_index_path = '/home/jovyan/GENOMES/HG19_IND/hg19_chr1'
fasta_path = '/home/jovyan/GENOMES/HG19_FASTA/'
chrms = ['1']
# Reading the genome
genome_db = genome.Genome(fasta_path, readChrms=chrms)
# Creating directories for further data processing
if not os.path.exists('tmp/'):
    os.mkdir('tmp/')
if not os.path.exists('../DATA/SAM/'):
os.mkdir('../DATA/SAM/')
# Set parameters for iterative mapping
min_seq_len = 25
len_step = 5
nthreads = 2
temp_dir = 'tmp'
bowtie_flags = '--very-sensitive'
infile1 = '/home/jovyan/DATA/FASTQ1/K562_B-bulk_R1.fastq'
infile2 = '/home/jovyan/DATA/FASTQ1/K562_B-bulk_R2.fastq'
out1 = '/home/jovyan/DATA/SAM/K562_B-bulk_R1.chr1.sam'
out2 = '/home/jovyan/DATA/SAM/K562_B-bulk_R2.chr1.sam'
# Iterative mapping itself. Time consuming step!
mapping.iterative_mapping(
bowtie_path = bowtie_path,
bowtie_index_path = bowtie_index_path,
fastq_path = infile1,
out_sam_path = out1,
min_seq_len = min_seq_len,
len_step = len_step,
nthreads = nthreads,
temp_dir = temp_dir,
bowtie_flags = bowtie_flags)
mapping.iterative_mapping(
bowtie_path = bowtie_path,
bowtie_index_path = bowtie_index_path,
fastq_path = infile2,
out_sam_path = out2,
min_seq_len = min_seq_len,
len_step = len_step,
nthreads = nthreads,
temp_dir = temp_dir,
bowtie_flags = bowtie_flags)
Explanation: Then we need to set some parameters and prepare our environment:
End of explanation
%%bash
ls /home/jovyan/DATA/SAM/
%%bash
head -n 10 /home/jovyan/DATA/SAM/K562_B-bulk_R1.chr1.sam.25
Explanation: Let's take a look at .sam files that were created during iterative mapping:
End of explanation
# Create the directory for output
if not os.path.exists('../DATA/HDF5/'):
os.mkdir('../DATA/HDF5/')
# Define file name for output
out = '/home/jovyan/DATA/HDF5/K562_B-bulk.fragments.hdf5'
# Open output file
mapped_reads = h5dict.h5dict(out)
# Parse mapping data and write to output file
mapping.parse_sam(
sam_basename1 = out1,
sam_basename2 = out2,
out_dict = mapped_reads,
genome_db = genome_db,
enzyme_name = enzyme,
save_seqs = False,
keep_ids = False)
Explanation: 1.4 Making sense of mapping output
For each read length and orientation, we have a file. Now we need to merge them into a single dataset (.hdf5 file):
End of explanation
%%bash
ls /home/jovyan/DATA/HDF5/
import h5py
# Reading the file
a = h5py.File('/home/jovyan/DATA/HDF5/K562_B-bulk.fragments.hdf5')
# "a" variable has dictionary-like structure, we can view its keys, for example:
list( a.keys() )
# Mapping positions for forward reads are stored under 'cuts1' key:
a['cuts1'].value
Explanation: Let's take a look at the created file:
End of explanation
from hiclib import fragmentHiC
inp = '/home/jovyan/DATA/HDF5/K562_B-bulk.fragments.hdf5'
out = '/home/jovyan/DATA/HDF5/K562_B-bulk.fragments_filtered.hdf5'
# Create output file
fragments = fragmentHiC.HiCdataset(
filename = out,
genome = genome_db,
maximumMoleculeLength= 500,
mode = 'w')
# Parse input data
fragments.parseInputData(
dictLike=inp)
# Filtering
fragments.filterRsiteStart(offset=5) # reads map too close to restriction site
fragments.filterDuplicates() # remove PCR duplicates
fragments.filterLarge() # remove too large restriction fragments
fragments.filterExtreme(cutH=0.005, cutL=0) # remove fragments with too high and low counts
# Some hidden filteres were also applied, we can check them all:
fragments.printMetadata()
Explanation: <a id="filtering"></a>
2. Data filtering
Go top
The raw Hi-C data is mapped and interpreted, the next step is to filter out possible methodological artefacts:
End of explanation
import pandas as pd
df_stat = pd.DataFrame(list(fragments.metadata.items()), columns=['Feature', 'Count'])
df_stat
df_stat['Ratio of total'] = 100*df_stat['Count']/df_stat.loc[2,'Count']
df_stat
Explanation: Nice visualisation of the data:
End of explanation
# Define file name for binned data. Note "{}" prepared for string formatting
out_bin = '/home/jovyan/DATA/HDF5/K562_B-bulk.binned_{}.hdf5'
res_kb = [100, 20] # Several resolutions in Kb
for res in res_kb:
print(res)
outmap = out_bin.format(str(res)+'kb') # String formatting
fragments.saveHeatmap(outmap, res*1000) # Save heatmap
del fragments # delete unwanted object
Explanation: <a id="binning"></a>
3. Data binning
Go top
The previous analysis involved interactions of restriction fragments, now we would like to work with interactions of genomic bins.
End of explanation
# Importing visualisation modules
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('ticks')
%matplotlib inline
from hiclib.binnedData import binnedDataAnalysis
res = 100 # Resolution in Kb
# prepare to read the data
data_hic = binnedDataAnalysis(resolution=res*1000, genome=genome_db)
# read the data
data_hic.simpleLoad('/home/jovyan/DATA/HDF5/K562_B-bulk.binned_{}.hdf5'.format(str(res)+'kb'),'hic')
mtx = data_hic.dataDict['hic']
# show heatmap
plt.figure(figsize=[15,15])
plt.imshow(mtx[0:200, 0:200], cmap='jet', interpolation='None')
Explanation: <a id="visualisation"></a>
4. Hi-C data visualisation
Go top
Let's take a look at the resulting heat maps.
End of explanation
# Additional data filtering
data_hic.removeDiagonal()
data_hic.removePoorRegions()
data_hic.removeZeros()
data_hic.iterativeCorrectWithoutSS(force=True)
data_hic.restoreZeros()
mtx = data_hic.dataDict['hic']
plt.figure(figsize=[15,15])
plt.imshow(mtx[200:500, 200:500], cmap='jet', interpolation='None')
Explanation: <a id="correction"></a>
5. Iterative correction
Go top
The next typical step is data correction for unequal amplification and accessibility of genomic regions.
We will use iterative correction.
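The real work is done by hiclib's `iterativeCorrectWithoutSS`; the underlying idea (iterative matrix balancing) fits in a few lines of NumPy. The following is a toy sketch for intuition only, not the hiclib implementation:
```
import numpy as np

def iterative_correction_sketch(matrix, n_iter=50):
    """Toy balancing: repeatedly divide rows/columns by their (rescaled) sums so that
    every bin ends up with roughly equal total contact count."""
    m = matrix.astype(float).copy()
    for _ in range(n_iter):
        s = m.sum(axis=1)
        s /= s[s > 0].mean()   # keep the overall scale of the matrix stable
        s[s == 0] = 1.0        # leave empty bins untouched
        m = m / s[:, None] / s[None, :]
    return m
```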
End of explanation
# Load compartments computed previously based on K562 dataset from Rao et al. 2014
eig = np.loadtxt('/home/jovyan/DATA/ANNOT/comp_K562_100Kb_chr1.tsv')
eig
from matplotlib import gridspec
bgn = 0
end = 500
fig = plt.figure(figsize=(10,10))
gs = gridspec.GridSpec(2, 1, height_ratios=[20,2])
gs.update(wspace=0.0, hspace=0.0)
ax = plt.subplot(gs[0,0])
ax.matshow(mtx[bgn:end, bgn:end], cmap='jet', origin='lower', aspect='auto')
ax.set_xticks([])
ax.set_yticks([])
axl = plt.subplot(gs[1,0])
plt.plot(range(end-bgn), eig[bgn:end] )
plt.xlim(0, end-bgn)
plt.xlabel('Eigenvector values')
ticks = range(bgn, end+1, 100)
ticklabels = ['{} Kb'.format(x) for x in ticks]
plt.xticks(ticks, ticklabels)
print('')
Explanation: <a id="meta"></a>
7. Compartments and TADs
Go top
7.1 Comparison with compartments
Compartments can usually be found in whole-genome datasets, but we have only chromosome 1. Still, we can try to find some visual signs of compartments.
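The eigenvector file loaded below was computed beforehand. In outline, the A/B compartment signal is the leading eigenvector of the correlation of the observed/expected contact map; a toy sketch of that computation (not the exact pipeline that produced `comp_K562_100Kb_chr1.tsv`) is:
```
import numpy as np

def compartment_eigenvector_sketch(matrix):
    """Leading eigenvector of corr(observed/expected); assumes a dense map without empty bins."""
    n = matrix.shape[0]
    expected = np.ones_like(matrix, dtype=float)
    for d in range(n):
        diag_mean = matrix.diagonal(d).mean()
        if diag_mean > 0:
            idx = np.arange(n - d)
            expected[idx, idx + d] = expected[idx + d, idx] = diag_mean
    corr = np.corrcoef(matrix / expected)
    eigvals, eigvecs = np.linalg.eigh(corr)
    return eigvecs[:, -1]  # eigh sorts eigenvalues in ascending order
```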
End of explanation
mtx_Rao = np.genfromtxt('../DATA/ANNOT/Rao_K562_chr1.csv', delimiter=',')
bgn = 0
end = 500
fig = plt.figure(figsize=(10,10))
gs = gridspec.GridSpec(2, 1, height_ratios=[20,2])
gs.update(wspace=0.0, hspace=0.0)
ax = plt.subplot(gs[0,0])
ax.matshow(mtx_Rao[bgn:end, bgn:end], cmap='jet', origin='lower', aspect='auto', vmax=1000)
ax.set_xticks([])
ax.set_yticks([])
axl = plt.subplot(gs[1,0])
plt.plot(range(end-bgn), eig[bgn:end] )
plt.xlim(0, end-bgn)
plt.xlabel('Eigenvector values')
ticks = range(bgn, end+1, 100)
ticklabels = ['{} Kb'.format(x) for x in ticks]
plt.xticks(ticks, ticklabels)
print('')
Explanation: Seems to be nothing special with compartments. What if we had much better coverage by reads? Let's take a look at the dataset from Rao et al. 2014, GEO GSE63525, HIC069:
End of explanation
# Import Python package
import lavaburst
good_bins = mtx.astype(bool).sum(axis=0) > 1 # We have to mask rows/cols if data is missing
gam=[0.15, 0.25, 0.5, 0.75, 1.0] # set of parameters gamma for TADs calling
segments_dict = {}
for gam_current in gam:
print(gam_current)
S = lavaburst.scoring.armatus_score(mtx, gamma=gam_current, binmask=good_bins)
model = lavaburst.model.SegModel(S)
segments = model.optimal_segmentation() # Positions of TADs for input matrix
segments_dict[gam_current] = segments.copy()
A = mtx.copy()
good_bins = A.astype(bool).sum(axis=0) > 0
At = lavaburst.utils.tilt_heatmap(mtx, n_diags=100)
start_tmp = 0
end_tmp = 500
f = plt.figure(figsize=(20, 6))
ax = f.add_subplot(111)
blues = sns.cubehelix_palette(0.4, gamma=0.5, rot=-0.3, dark=0.1, light=0.9, as_cmap=True)
ax.matshow(np.log(At[start_tmp: end_tmp]), cmap=blues)
cmap = mpl.cm.get_cmap('brg')
gammas = segments_dict.keys()
for n, gamma in enumerate(gammas):
segments = segments_dict[gamma]
for a in segments[:-1]:
if a[1]<start_tmp or a[0]>end_tmp:
continue
ax.plot([a[0]-start_tmp, a[0]+(a[1]-a[0])/2-start_tmp], [0, -(a[1]-a[0])], c=cmap(n/len(gammas)), alpha=0.5)
ax.plot([a[0]+(a[1]-a[0])/2-start_tmp, a[1]-start_tmp], [-(a[1]-a[0]), 0], c=cmap(n/len(gammas)), alpha=0.5)
a = segments[-1]
ax.plot([a[0]-start_tmp, a[0]+(a[1]-a[0])/2-start_tmp], [0, -(a[1]-a[0])], c=cmap(n/len(gammas)), alpha=0.5, label=gamma)
ax.plot([a[0]+(a[1]-a[0])/2-start_tmp, a[1]-start_tmp], [-(a[1]-a[0]), 0], c=cmap(n/len(gammas)), alpha=0.5)
ax.set_xlim([0,end_tmp-start_tmp])
ax.set_ylim([100,-100])
ax.legend(bbox_to_anchor=(1.1, 1.05))
ax.set_aspect(0.5)
# Let's check the mean TAD size (within a chosen length range) for different gamma parameters:
for gam_current in gam:
segments = segments_dict[gam_current]
tad_lens = segments[:,1]-segments[:,0]
good_lens = (tad_lens>=200/res)&(tad_lens<100)
print(res*1000*np.mean(tad_lens[good_lens]))
Explanation: 7.2 Topologically associating domains (TADs)
For TADs calling we will use lavaburst package. The code below is based on this example.
End of explanation |
10 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Download and Explore the Data
Step2: <h6> Plot the Data Points </h6>
Step3: Looking at the scatter plot we can analyse that there is a linear relationship between the data points that connect chirps to the temperature and optimal way to infer this knowledge is by fitting a line that best describes the data. Which follows the linear equation
Step4: <div align="right">
<a href="#createvar" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="createvar" class="collapse">
```
X = tf.placeholder(tf.float32, shape=(x_data.size))
Y = tf.placeholder(tf.float32,shape=(y_data.size))
# tf.Variable call creates a single updatable copy in the memory and efficiently updates
# the copy to relfect any changes in the variable values through out the scope of the tensorflow session
m = tf.Variable(3.0)
c = tf.Variable(2.0)
# Construct a Model
Ypred = tf.add(tf.multiply(X, m), c)
```
</div>
Create and Run a Session to Visualize the Predicted Line from above Graph
<h6> Feel free to change the values of "m" and "c" in future to check how the initial position of line changes </h6>
Step5: <div align="right">
<a href="#matmul1" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul1" class="collapse">
```
pred = session.run(Ypred, feed_dict={X
Step6: <div align="right">
<a href="#matmul12" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul12" class="collapse">
```
# normalization factor
nf = 1e-1
# seting up the loss function
loss = tf.reduce_mean(tf.squared_difference(Ypred*nf,Y*nf))
```
</div>
Define an Optimization Graph to Minimize the Loss and Training the Model
Step7: <div align="right">
<a href="#matmul13" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul13" class="collapse">
```
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
#optimizer = tf.train.AdagradOptimizer(0.01 )
# pass the loss function that optimizer should optimize on.
train = optimizer.minimize(loss)
```
</div>
Initialize all the variables again
Step8: Run session to train and predict the values of 'm' and 'c' for different training steps along with storing the losses in each step
Get the predicted m and c values by running a session on Training a linear model. Also collect the loss for different steps to print and plot.
Step9: <div align="right">
<a href="#matmul18" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul18" class="collapse">
```
# run a session to train , get m and c values with loss function
_, _m , _c,_l = session.run([train, m, c,loss],feed_dict={X
Step10: <div align="right">
<a href="#matmul199" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul199" class="collapse">
```
plt.plot(losses[ | Python Code:
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
pd.__version__
Explanation: <a href="https://www.bigdatauniversity.com"><img src = "https://ibm.box.com/shared/static/jvcqp2iy2jlx2b32rmzdt0tx8lvxgzkp.png" width = 300, align = "center"></a>
<h1 align=center> <font size = 5> Exercise-Linear Regression with TensorFlow </font></h1>
This exercise is about modelling a linear relationship between "chirps of a cricket" and ground temperature.
In 1948, G. W. Pierce, in his book "Songs of Insects", mentioned that we can predict temperature by listening to the frequency of songs (chirps) made by striped crickets. He recorded the number of chirps they made at several different temperatures and found a pattern in the way crickets respond to changes in ground temperature between 60 and 100 degrees Fahrenheit. He also found out that crickets did not sing
above or below this temperature range.
This data is derived from the above-mentioned book, and the aim is to fit a linear model and predict the "Best Fit Line" for the given "Chirps (per 15 Second)" in Column 'A' and the corresponding "Temperature (Fahrenheit)" in Column 'B' using TensorFlow, so that one could easily tell what the temperature is just by listening to the songs of a cricket.
Let's import tensorFlow and python dependencies
End of explanation
#downloading dataset
!wget -nv -O ../data/PierceCricketData.csv https://ibm.box.com/shared/static/fjbsu8qbwm1n5zsw90q6xzfo4ptlsw96.csv
df = pd.read_csv("../data/PierceCricketData.csv")
df.head()
Explanation: Download and Explore the Data
End of explanation
%matplotlib inline
x_data, y_data = (df["Chirps"].values,df["Temp"].values)
# plots the data points
plt.plot(x_data, y_data, 'ro')
# label the axis
plt.xlabel("# Chirps per 15 sec")
plt.ylabel("Temp in Farenhiet")
Explanation: <h6> Plot the Data Points </h6>
End of explanation
# Create place holders and Variables along with the Linear model.
m = tf.Variable(3, dtype=tf.float32)
c = tf.Variable(2, dtype=tf.float32)
x = tf.placeholder(dtype=tf.float32, shape=x_data.size)
y = tf.placeholder(dtype=tf.float32, shape=y_data.size)
# Linear model
y_pred = m * x + c
Explanation: Looking at the scatter plot we can analyse that there is a linear relationship between the data points that connect chirps to the temperature and optimal way to infer this knowledge is by fitting a line that best describes the data. Which follows the linear equation:
#### Ypred = m X + c
We have to estimate the values of the slope 'm' and the intercept 'c' to fit a line where X is the "Chirps" and Ypred is the "Predicted Temperature" in this case.
Create a Data Flow Graph using TensorFlow
Model the above equation by assigning arbitrary values of your choice for slope "m" and intercept "c" which can predict the temp "Ypred" given Chirps "X" as input.
example m=3 and c=2
Also, create a place holder for actual temperature "Y" which we will be needing for Optimization to estimate the actual values of slope and intercept.
End of explanation
#create session and initialize variables
session = tf.Session()
session.run(tf.global_variables_initializer())
#get prediction with initial parameter values
y_vals = session.run(y_pred, feed_dict={x: x_data})
#Your code goes here
plt.plot(x_data, y_vals, label='Predicted')
plt.scatter(x_data, y_data, color='red', label='GT')
Explanation: <div align="right">
<a href="#createvar" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="createvar" class="collapse">
```
X = tf.placeholder(tf.float32, shape=(x_data.size))
Y = tf.placeholder(tf.float32,shape=(y_data.size))
# tf.Variable call creates a single updatable copy in the memory and efficiently updates
# the copy to relfect any changes in the variable values through out the scope of the tensorflow session
m = tf.Variable(3.0)
c = tf.Variable(2.0)
# Construct a Model
Ypred = tf.add(tf.multiply(X, m), c)
```
</div>
Create and Run a Session to Visualize the Predicted Line from above Graph
<h6> Feel free to change the values of "m" and "c" in future to check how the initial position of line changes </h6>
End of explanation
loss = tf.reduce_mean(tf.squared_difference(y_pred*0.1, y*0.1))
Explanation: <div align="right">
<a href="#matmul1" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul1" class="collapse">
```
pred = session.run(Ypred, feed_dict={X:x_data})
#plot initial prediction against datapoints
plt.plot(x_data, pred)
plt.plot(x_data, y_data, 'ro')
# label the axis
plt.xlabel("# Chirps per 15 sec")
plt.ylabel("Temp in Farenhiet")
```
</div>
Define a Graph for Loss Function
The essence of estimating the values for "m" and "c" lies in minimizing the difference between predicted "Ypred" and actual "Y" temperature values which is defined in the form of Mean Squared error loss function.
$$ loss = \frac{1}{n}\sum_{i=1}^n{[Ypred_i - {Y}_i]^2} $$
Note: There are also other ways to model the loss function, based on the distance metric between predicted and actual temperature values. For this exercise the Mean Squared Error criterion is used.
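As a quick sanity check of this definition, the same quantity can be computed with plain NumPy, independent of the TensorFlow graph (up to the normalisation factor `nf` used in the solution below):
```
import numpy as np

def mse(y_pred, y_true):
    """Mean squared error, the value the TensorFlow loss node should reproduce."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return np.mean((y_pred - y_true) ** 2)

# Example: mse([70.0, 72.0], [71.0, 70.0]) -> (1 + 4) / 2 = 2.5
```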
End of explanation
# Your code goes here
optimizer = tf.train.GradientDescentOptimizer(0.01)
train_op = optimizer.minimize(loss)
Explanation: <div align="right">
<a href="#matmul12" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul12" class="collapse">
```
# normalization factor
nf = 1e-1
# seting up the loss function
loss = tf.reduce_mean(tf.squared_difference(Ypred*nf,Y*nf))
```
</div>
Define an Optimization Graph to Minimize the Loss and Training the Model
End of explanation
session.run(tf.global_variables_initializer())
Explanation: <div align="right">
<a href="#matmul13" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul13" class="collapse">
```
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
#optimizer = tf.train.AdagradOptimizer(0.01 )
# pass the loss function that optimizer should optimize on.
train = optimizer.minimize(loss)
```
</div>
Initialize all the variables again
End of explanation
convergenceTolerance = 0.0001
previous_m = np.inf
previous_c = np.inf
steps = {}
steps['m'] = []
steps['c'] = []
losses=[]
for k in range(10000):
########## Your Code goes Here ###########
_, _l, _m, _c = session.run([train_op, loss, m, c], feed_dict={x: x_data, y: y_data})
steps['m'].append(_m)
steps['c'].append(_c)
losses.append(_l)
    if np.abs(previous_m - _m) <= convergenceTolerance and np.abs(previous_c - _c) <= convergenceTolerance:
print("Finished by Convergence Criterion")
print(k)
print(_l)
break
previous_m = _m
previous_c = _c
Explanation: Run session to train and predict the values of 'm' and 'c' for different training steps along with storing the losses in each step
Get the predicted m and c values by running a session on Training a linear model. Also collect the loss for different steps to print and plot.
End of explanation
# Your Code Goes Here
plt.plot(losses)
Explanation: <div align="right">
<a href="#matmul18" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul18" class="collapse">
```
# run a session to train , get m and c values with loss function
_, _m , _c,_l = session.run([train, m, c,loss],feed_dict={X:x_data,Y:y_data})
```
</div>
Print the loss function
End of explanation
y_vals_pred = y_pred.eval(session=session, feed_dict={x: x_data})
plt.scatter(x_data, y_vals_pred, marker='x', color='blue', label='Predicted')
plt.scatter(x_data, y_data, label='GT', color='red')
plt.legend()
plt.ylabel('Temperature (Fahrenheit)')
plt.xlabel('# Chirps per 15 s')
session.close()
Explanation: <div align="right">
<a href="#matmul199" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul199" class="collapse">
```
plt.plot(losses[:])
```
</div>
End of explanation |
11 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
xpath always returns a list of results, but there's only one, so we'll use that
Step2: Why is there only one element? I don't know and don't have the time to care, so I'll write
a helper function that always gives me an outline of the HTML (sub)tree that I'm
currently processing.
Step3: It looks like everything we need is in the <tbody>, so we'll grab that.
Step4: There are only <tr> (rows) in here, it's probably the right place.
The first one is the header, the rest should be the countries
Step5: The 3rd column contains the country's name, but also some other crap
Step6: We need to dig deeper, so let's look at the complete HTML of that column | Python Code:
xpath_result = tree.xpath('/html/body/div[3]/div[3]/div[4]/div/table[2]')
table = xpath_result[0]
for elem in table:
print(elem)
Explanation: xpath always returns a list of results, but there's only one, so we'll use that:
End of explanation
def print_outline(tree, indent=0):
    """print the outline of the given lxml.html tree"""
indent_prefix = indent * ' '
print(indent_prefix + '<' + tree.tag + '>')
for elem in tree.iterchildren():
print_outline(elem, indent=indent+1)
print_outline(table)
Explanation: Why is there only one element? I don't know and don't have the time to care, so I'll write
a helper function that always gives me an outline of the HTML (sub)tree that I'm
currently processing.
End of explanation
table.getchildren()
tbody = table.getchildren()[0]
tbody
for elem in tbody.getchildren():
print(elem.tag, end=' ')
Explanation: It looks like everything we need is in the <tbody>, so we'll grab that.
End of explanation
rows = tbody.getchildren()
header = rows[0]
countries = rows[1:]
print(header.text_content())
countries[0].text_content()
Explanation: There are only <tr> (rows) in here, it's probably the right place.
The first one is the header, the rest should be the countries:
End of explanation
countries[0][2].text_content()
print_outline(countries[0][2])
Explanation: The 3rd column contains the country's name, but also some other crap:
End of explanation
from lxml import etree
etree.tostring(countries[0][2])
for country in countries:
name_column = country[2]
country_link = name_column.find('a') # get the first '<a>' subtree
country_name = country_link.get('title') # get the 'title' attribute of the link
print(country_name)
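# An equivalent one-liner (illustrative sketch, assuming the same table layout explored
# above): pull the 'title' attribute of the first-level <a> in each row's 3rd cell via XPath.
country_names = [row.xpath('./*[3]/a/@title')[0] for row in countries]
print(country_names[:5])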
Explanation: We need to dig deeper, so let's look at the complete HTML of that column:
End of explanation |
12 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import required packages
Step5: Create a utility class for camera calibration
This is used for calibrating camera and undistorting the images
Step13: Create a class to keep track of lane detections
Here we use the average of last maxSamples to identify the lane
Step16: Use the lane pixals identified to fit a ploygon and draw it back on the original image
Step18: Here we validate the detected lines and add them to the lane class
A valid detection satisfies below rules
Minimum number of pixels must be greater than 2000
Left lane mean should be more than a minimum
Right lane mean should be less than a minimum
Lane width should be at least 300 and at most 800
New detections must be within 100px of the average of last n detections
Step20: Find the lane using sliding window technique
Use the peaks (maxima) of the histogram of the bottom 1/4 of the image to find the initial left and right base
Use the base points to find more points within a margin and a minimum number of pixels
Using
windows size = 9
margin = 80
min pixels = 30
Step22: Find Lanes Wrapper
If the left or right lane was found in the last iteration, get the pixels within a 30-pixel margin of that fit and validate
If the validation fails or this is the first iteration use the sliding window technique to find lanes and then validate.
Step24: Warp the image to get birds' eye view
Use source points
bounding_top_right = [img_shape[1]*0.5 + 90,img_shape[0]*0.70]
bounding_btm_right = [img_shape[1]*0.5 + 450,img_shape[0]]
bounding_btm_left = [img_shape[1]*0.5 - 400,img_shape[0]]
bounding_top_left = [img_shape[1]*0.5 - 60,img_shape[0]*0.70]
Destinations points
bounding_top_right = [img_shape[1]*0.5 + 250,img_shape[0]*0.60]
bounding_btm_right = [img_shape[1]*0.5 + 390,img_shape[0]]
bounding_btm_left = [img_shape[1]*0.5 - 345,img_shape[0]]
bounding_top_left = [img_shape[1]*0.5 - 205,img_shape[0]*0.60]
Get perspective transform
Get inverse perspective transform
warp the image using perspective transform
Step27: Threshold
Use color threshold
The number of lane pixels must be considerably less than the background pixels and have a minimum value.
We use this to recursively increase or decrease the minimum threshold value to find the optimal value.
Use Sobel operator to find gradients
Combine the two to get the result
Step29: Apply all the steps
Undistort the image
Apply perspective transform
Apply threshold
Find lanes
Draw the result back on image
Step30: Generate obj points and img points
Step31: Calibrate camera and undistort the chessbaord images
Step32: Test on images
Step36: Test on videos | Python Code:
import os
import math
import glob
import cv2
from collections import deque
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from moviepy.editor import VideoFileClip
%matplotlib inline
Explanation: Import required packages
End of explanation
class cam_util():
    """util class for camera operations"""
ret = None
mtx = None
dist = None
rvecs = None
tvecs = None
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
def gen_camera_points(self):
        """generate objpoints and imgpoints from calibration images"""
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Make a list of calibration images
images = glob.glob('camera_cal/calibration*.jpg')
# Step through the list and search for chessboard corners
for fname in images:
img = cv2.imread(fname)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (9,6),None)
# If found, add object points, image points
if ret == True:
self.objpoints.append(objp)
self.imgpoints.append(corners)
def undistort(self, img):
        """undistort an image with camera matrix"""
if self.mtx is None:
self.ret, self.mtx, self.dist, self.rvecs, self.tvecs = cv2.calibrateCamera(self.objpoints, self.imgpoints,
img.shape[:2],None,None)
h, w = img.shape[:2]
newcameramtx, roi=cv2.getOptimalNewCameraMatrix(self.mtx, self.dist, (w,h), 1, (w,h))
dst = cv2.undistort(img, self.mtx, self.dist, None, newcameramtx)
x,y,w,h = roi
return dst[y:y+h, x:x+w]
def clean_mat(self):
        """Reset camera calibration"""
self.ret = None
self.mtx = None
self.dist = None
self.rvecs = None
self.tvecs = None
Explanation: Create a utility class for camera calibration
This is used for calibrating camera and undistorting the images
End of explanation
class Line():
    """class to store detected lane stats"""
def __init__(self, maxSamples=15):
self.maxSamples = maxSamples
# x values of the last n fits of the line
self.recent_xfitted = deque(maxlen=self.maxSamples)
#polynomial coefficients for the most recent fit
self.current_fit = [np.array([False])]
#polynomial coefficients averaged over the last n iterations
self.best_fit = None
#difference in fit coefficients between last and new fits
self.diffs = np.array([0,0,0], dtype='float')
#average x values of the fitted line over the last n iterations
self.bestx = None
# was the line detected in the last iteration?
self.detected = False
#radius of curvature of the line in some units
self.radius_of_curvature = None
#distance in meters of vehicle center from the line
self.line_base_pos = None
def update_lane(self, ally, allx):
        """Function to update the stats"""
# get the mean as the best x
self.bestx = np.mean(allx, axis=0)
# fit a 2 order polynomial
new_fit = np.polyfit(ally, allx, 2)
# calculate the difference between last fit and new fit
self.diffs = np.subtract(self.current_fit, new_fit)
# update current fit
self.current_fit = new_fit
# add the new fit to the queue
self.recent_xfitted.append(self.current_fit)
# Use the queue mean as the best fit
self.best_fit = np.mean(self.recent_xfitted, axis=0)
# meters per pixel in y dimension
ym_per_pix = 30/720
# meters per pixel in x dimension
xm_per_pix = 3.7/700
# Calculate radius of curvature
fit_cr = np.polyfit(ally*ym_per_pix, allx*xm_per_pix, 2)
y_eval = np.max(ally)
self.radius_of_curvature = ((1 + (2*fit_cr[0]*y_eval*ym_per_pix + fit_cr[1])**2)**1.5) / np.absolute(2*fit_cr[0])
# Utility Functions
def get_roi(img, vertices):
    """Apply mask and get region of interest within the mask"""
mask = np.zeros_like(img)
if len(img.shape) > 2:
channel_count = img.shape[2]
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
cv2.fillPoly(mask, vertices, ignore_mask_color)
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def hide_roi(img, vertices):
    """Apply mask and get region of interest outside the mask"""
mask = np.zeros_like(img)
mask=mask+255
if len(img.shape) > 2:
channel_count = img.shape[2]
ignore_mask_color = (0,) * channel_count
else:
ignore_mask_color = 0
cv2.fillPoly(mask, vertices, ignore_mask_color)
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def drow_on_images(img, vertices):
    """Draw polygon on image"""
cv2.polylines(img, [vertices], True, (255,255,255), 2)
plot_img(img, 'img drawing', True)
def plot_img(img, step, show_stages=False):
    """plot image"""
if show_stages:
print('######################## '+step+' ########################')
plt.imshow(img, cmap='gray')
plt.show()
def plot_hist(histogram, show_stages=False):
    """plot histogram"""
if show_stages:
print('######################## histogram ########################')
plt.plot(histogram)
plt.show()
Explanation: Create a class to keep track of lane detections
Here we use the average of last maxSamples to identify the lane
End of explanation
def write_stats(img):
    """Write lane stats on image"""
font = cv2.FONT_HERSHEY_SIMPLEX
size = 1
weight = 2
color = (255,70,0)
cv2.putText(img,'Left Curve : '+ '{0:.2f}'.format(left_line.radius_of_curvature)+' m',(10,30), font, size, color, weight)
cv2.putText(img,'Right Curve : '+ '{0:.2f}'.format(right_line.radius_of_curvature)+' m',(10,60), font, size, color, weight)
cv2.putText(img,'Left Lane Pos: '+ '{0:.2f}'.format(left_line.bestx),(10,100), font, size, color, weight)
cv2.putText(img,'Right Lane Pos: '+ '{0:.2f}'.format(right_line.bestx),(10,130), font, size, color, weight)
cv2.putText(img,'Distance from center: '+ "{0:.2f}".format(left_line.line_base_pos)+' m',(10,180), font, size, color, weight)
def draw_lane(undist, img, Minv):
    """Draw the detected lane back on the image"""
# Generate x and y values for plotting
ploty = np.linspace(300, 700)
# Create an image to draw the lines on
warp_zero = np.zeros_like(img).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
left_fit = left_line.best_fit
right_fit = right_line.best_fit
if left_fit is not None and right_fit is not None:
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (20,120, 80))
# Warp the blank back to original image space using inverse perspective matrix (Minv)
newwarp = cv2.warpPerspective(color_warp, Minv, (img.shape[1], img.shape[0]))
# Combine the result with the original image
result = cv2.addWeighted(undist, 1, newwarp, 0.6, 0)
write_stats(result)
return result
return undist
Explanation: Use the lane pixels identified to fit a polygon and draw it back on the original image
End of explanation
def validate_Update_lane(img, nonzero, nonzerox, nonzeroy, left_lane_inds, right_lane_inds, show_stages=False):
    """Validate the detected lane ids and update the lane stats if valid."""
# Extract left and right line pixel positions
left_line_allx = nonzerox[left_lane_inds]
left_line_ally = nonzeroy[left_lane_inds]
right_line_allx = nonzerox[right_lane_inds]
right_line_ally = nonzeroy[right_lane_inds]
    # Discard the detections if either detected lane has fewer than 2000 pixels.
# This is done because for very small size the poly fit function gives unpredictable results.
    # A better approach would be to use the larger lane's curvature to extend the other one
if len(left_line_allx) <= 2000 or len(right_line_allx) <= 2000:
left_line.detected = False
right_line.detected = False
return
left_x_mean = np.mean(left_line_allx, axis=0)
right_x_mean = np.mean(right_line_allx, axis=0)
lane_width = np.subtract(right_x_mean, left_x_mean)
    # Discard the detections if the lane width is too large or too small
if left_x_mean > 450 or right_x_mean < 850:
left_line.detected = False
right_line.detected = False
return
if lane_width < 300 or lane_width > 800:
left_line.detected = False
right_line.detected = False
return
    # Update the lane stats if the current detection is the first one or
    # the detection is within 100 pixels of the mean of the last n detections
if left_line.bestx is None or np.abs(np.subtract(left_line.bestx, np.mean(left_line_allx, axis=0))) < 100:
left_line.update_lane(left_line_ally, left_line_allx)
left_line.detected = True
else:
left_line.detected = False
if right_line.bestx is None or np.abs(np.subtract(right_line.bestx, np.mean(right_line_allx, axis=0))) < 100:
right_line.update_lane(right_line_ally, right_line_allx)
right_line.detected = True
else:
right_line.detected = False
# Calculate the distance of car from center of lane
lane_center = right_line.bestx - left_line.bestx
left_line.line_base_pos = ((img.shape[1]*0.5 - lane_center)*3.7)/700
right_line.line_base_pos = left_line.line_base_pos
Explanation: Here we validate the detected lines and add them to the lane class
A valid detection satisfies below rules
Minimum number of pixels must be greater than 2000
Left lane mean should be more than a minimum
Right lane mean should be less than a minimum
Lane width should be at least 300 and at most 800
New detections must be within 100px of the average of last n detections
End of explanation
def window_search(img, nonzero, nonzerox, nonzeroy, show_stages=False):
    """Perform a sliding window search to detect lane pixels."""
# Temp image to draw detections on
out_img = np.dstack((img, img, img))*255
# Calculate histogram
histogram = np.sum(img[img.shape[0]*.75:,:], axis=0)
plot_hist(histogram, show_stages)
# Take the midpoint and use the max on each side as starting point
midpoint = np.int(histogram.shape[0]/2)
leftx_base = np.argmax(histogram[0:midpoint])
rightx_base = np.argmax(histogram[midpoint:histogram.shape[0]]) + midpoint
# Choose the number of sliding windows
nwindows = 9
# Set height of windows
window_height = np.int(img.shape[0]/nwindows)
# Current positions to be updated for each window
leftx_current = leftx_base
rightx_current = rightx_base
# Set the width of the windows +/- margin
margin = 80
# Set minimum number of pixels found to recenter window
minpix = 30
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = img.shape[0] - (window+1)*window_height
win_y_high = img.shape[0] - window*window_height
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low),(win_xleft_high,win_y_high),(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low),(win_xright_high,win_y_high),(0,255,0), 2)
# Identify the nonzero pixels in x and y within the window
good_left_inds = ((nonzeroy >= win_y_low)
& (nonzeroy < win_y_high)
& (nonzerox >= win_xleft_low)
& (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low)
& (nonzeroy < win_y_high)
& (nonzerox >= win_xright_low)
& (nonzerox < win_xright_high)).nonzero()[0]
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# If found > minpix pixels, recenter next window on their mean position
if len(good_left_inds) > minpix:
leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = np.int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
plot_img(out_img, 'sliding window marked', show_stages)
return left_lane_inds, right_lane_inds
Explanation: Find the lanes using the sliding window technique
Take the column-wise histogram of the bottom 1/4 of the image and use the peak on each half as the initial left and right base points
Slide a window up from each base point, collecting the nonzero pixels inside it and re-centering the next window on their mean whenever enough pixels are found (the polynomial fit applied to these pixels is sketched after this cell)
Using
number of windows = 9
margin = 80
minimum pixels = 30
End of explanation
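The indices returned by window_search are later turned into second-order polynomial fits inside the Line class, which is defined earlier in the notebook and not shown in this section. A minimal sketch of that fitting step, using made-up pixel coordinates:
```python
import numpy as np

# Hypothetical pixel coordinates selected by the sliding windows
lefty = np.array([700, 650, 600, 550, 500])
leftx = np.array([300, 305, 312, 321, 332])

# Fit x = A*y**2 + B*y + C, the same fit order that draw_lane evaluates above
left_fit = np.polyfit(lefty, leftx, 2)
ploty = np.linspace(0, 719, 720)
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
```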
def find_lanes(img, show_stages=False):
Lane finding wrapper function
# Get the foreground (nonzero) pixels
nonzero = img.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# If the last detection was successful, take the nonzero pixels within a 30-pixel margin of the previous fits as the new detections
if left_line.detected and right_line.detected:
margin = 30
left_lane_inds = ((nonzerox > (left_line.current_fit[0]*(nonzeroy**2) + left_line.current_fit[1]*nonzeroy + left_line.current_fit[2] - margin))
& (nonzerox < (left_line.current_fit[0]*(nonzeroy**2) + left_line.current_fit[1]*nonzeroy + left_line.current_fit[2] + margin)))
right_lane_inds = ((nonzerox > (right_line.current_fit[0]*(nonzeroy**2) + right_line.current_fit[1]*nonzeroy + right_line.current_fit[2] - margin))
& (nonzerox < (right_line.current_fit[0]*(nonzeroy**2) + right_line.current_fit[1]*nonzeroy + right_line.current_fit[2] + margin)))
# Update the lane detections
validate_Update_lane(img, nonzero, nonzerox, nonzeroy, left_lane_inds, right_lane_inds)
# If first detection or the last detection was unsuccessful perform a sliding window search
else:
#print('doing window search')
left_lane_inds, right_lane_inds = window_search(img, nonzero, nonzerox, nonzeroy, show_stages)
# Update the lane detections
validate_Update_lane(img, nonzero, nonzerox, nonzeroy, left_lane_inds, right_lane_inds)
Explanation: Find Lanes Wrapper
If both lanes were found in the last iteration, get the pixels within a 30-pixel margin of the previous fits and validate
If the validation fails or this is the first iteration, use the sliding window technique to find the lanes and then validate.
End of explanation
def warp(img):
Warp the image to get a bird's-eye view.
img_shape = img.shape
bounding_top_right = [img_shape[1]*0.5 + 90,img_shape[0]*0.70]
bounding_btm_right = [img_shape[1]*0.5 + 450,img_shape[0]]
bounding_btm_left = [img_shape[1]*0.5 - 400,img_shape[0]]
bounding_top_left = [img_shape[1]*0.5 - 60,img_shape[0]*0.70]
# Select source points
pts1 = np.float32([bounding_top_right,bounding_btm_right,bounding_btm_left,bounding_top_left])
# Select destination points
pts2 = np.float32([[img_shape[1]*0.5 + 250,img_shape[0]*0.60],
[img_shape[1]*0.5 + 390,img_shape[0]],
[img_shape[1]*0.5 - 345,img_shape[0]],
[img_shape[1]*0.5 - 205,img_shape[0]*0.60]])
# Get Perspective Transform
M = cv2.getPerspectiveTransform(pts1, pts2)
# Get inverse Perspective Transform
Minv = cv2.getPerspectiveTransform(pts2, pts1)
# Apply warp transform on source image
dst = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]), flags=cv2.INTER_LINEAR)
return dst, Minv
Explanation: Warp the image to get a bird's-eye view
Use source points
bounding_top_right = [img_shape[1]*0.5 + 90, img_shape[0]*0.70]
bounding_btm_right = [img_shape[1]*0.5 + 450, img_shape[0]]
bounding_btm_left = [img_shape[1]*0.5 - 400, img_shape[0]]
bounding_top_left = [img_shape[1]*0.5 - 60, img_shape[0]*0.70]
Destination points
bounding_top_right = [img_shape[1]*0.5 + 250, img_shape[0]*0.60]
bounding_btm_right = [img_shape[1]*0.5 + 390, img_shape[0]]
bounding_btm_left = [img_shape[1]*0.5 - 345, img_shape[0]]
bounding_top_left = [img_shape[1]*0.5 - 205, img_shape[0]*0.60]
Get the perspective transform
Get the inverse perspective transform
Warp the image using the perspective transform
A small sketch for visually checking these source points follows this cell.
End of explanation
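A quick way to sanity-check the hard-coded source points is to draw them on an undistorted straight-lane frame before warping. This is only a sketch; the helper name is made up, and the points simply mirror the ones used in warp above:
```python
import numpy as np
import cv2

def draw_warp_region(img):
    h, w = img.shape[:2]
    pts = np.array([[w*0.5 + 90,  h*0.70],
                    [w*0.5 + 450, h],
                    [w*0.5 - 400, h],
                    [w*0.5 - 60,  h*0.70]], dtype=np.int32).reshape((-1, 1, 2))
    vis = img.copy()
    cv2.polylines(vis, [pts], True, (255, 0, 0), 3)
    return vis
```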
def rec_threshold(img, roi, t_min=140, t_max=255):
Function to apply a recursive threshold with increasing/decreasing boundaries
based on the area of lane pixels within a region of interest.
binary = np.zeros_like(img)
binary[(img >= t_min) & (img <= t_max)] = 1
# Return the last value if the threshold level reaches its minimum or maximum.
if t_min <= 40 or t_min >= 220:
return binary
binary_1 = get_roi(binary, roi)
#print(np.sum(binary_1.nonzero()))
if np.sum(binary_1.nonzero()) > 9800000:
binary = rec_threshold(img, roi, t_min+10)
elif np.sum(binary_1.nonzero()) < 100000:
binary = rec_threshold(img, roi, t_min-10)
return binary
def threshold(img, roi, show_stages=False):
Apply threshold
# Convert image the HSV
hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
# Take v channel
v_channel = hsv[:,:,2]
plot_img(v_channel, 'v channel', show_stages)
# Apply threshold to find lane
v_binary = rec_threshold(v_channel, roi)
plot_img(v_binary, 'color threshold', show_stages)
# Convert image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Take the derivative in x
sobelx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
#sobelx = cv2.Sobel(sobelx, cv2.CV_32F, 0, 1) # Take the derivative
#plot_img(sobelx, show_stages)
# Absolute value of the x derivative
abs_sobelx = np.absolute(sobelx)
# Scale to 8-bit to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
#plot_img(sobelx, show_stages)
sxbinary = np.zeros_like(scaled_sobel)
# Apply the gradient threshold
sxbinary[(scaled_sobel >= 100) & (scaled_sobel <= 255)] = 1
plot_img(sobelx, 'sobel', show_stages)
color_binary = np.dstack(( np.zeros_like(sxbinary), sxbinary, v_binary))
combined_binary = np.zeros_like(sxbinary)
# Combine the color and Sobel thresholds
combined_binary[(v_binary == 1) | (sxbinary == 1)] = 1
plot_img(combined_binary, 'combined threshold', show_stages)
return combined_binary
Explanation: Threshold
Use a color threshold
The number of lane pixels should be considerably smaller than the number of background pixels, and the lane pixels should exceed a minimum intensity.
We use this to recursively increase or decrease the minimum threshold value until an acceptable lane-pixel count is found.
Use the Sobel operator to find gradients
Combine the two to get the result
End of explanation
def process_image(image, show_stages=False):
Wrapper function for all image processing
# Undistort the image
undistorted = cam.undistort(image)
plot_img(undistorted, 'undistorted', show_stages)
# Apply perspective transform
img, Minv = warp(undistorted)
plot_img(img, 'warped', show_stages)
# Get points for the region of interest
vertices = np.array([[(image.shape[1]*0.1,image.shape[0]-50),
(image.shape[1]*0.5-100,image.shape[0]*0.60),
(image.shape[1]*0.5+100,image.shape[0]*0.60),
(image.shape[1]*0.95,image.shape[0]-50)]],
dtype=np.int32)
# Apply threshold
img = threshold(img, vertices, show_stages)
vertices = np.array([[(200,img.shape[0]),
(200,0),
(1050,0),
(1050,img.shape[0])]], dtype=np.int32)
# Get roi
img = get_roi(img, vertices)
# Find Lanes
find_lanes(img, show_stages)
# Draw lanes on image
res = draw_lane(undistorted, img, Minv);
#plot_img(res, show_stages)
return res
Explanation: Apply all the steps
Undistort the image
Apply perspective transform
Apply threshold
Find lanes
Draw the result back on the image
End of explanation
# init camera
cam = cam_util()
cam.gen_camera_points()
Explanation: Generate obj points and img points
End of explanation
# Undistort a sample calibration image
cal_dir = "camera_cal/"
cal_images = glob.glob(cal_dir+'*.jpg')
for cal_image in cal_images:
cimg = mpimg.imread(cal_image)
cimg_undistort = cam.undistort(cimg)
cv2.imwrite('output_images/undistort_'+cal_image.split('/')[1],cimg_undistort)
print('calibration done')
# Clean camera matrix
cam.clean_mat()
Explanation: Calibrate the camera and undistort the chessboard images
End of explanation
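The cam_util class is defined earlier in the notebook and is not shown in this section. For reference, the underlying OpenCV calls look roughly like the sketch below; the 9x6 corner grid and the function name are assumptions, not the actual class:
```python
import numpy as np
import cv2

def calibrate_from_grays(gray_images, nx=9, ny=6):
    # One grid of 3D object points per successfully detected chessboard
    objp = np.zeros((nx*ny, 3), np.float32)
    objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)
    objpoints, imgpoints = [], []
    for gray in gray_images:
        found, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
        if found:
            objpoints.append(objp)
            imgpoints.append(corners)
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
        objpoints, imgpoints, gray_images[0].shape[::-1], None, None)
    return mtx, dist

# A single frame would then be undistorted with: cv2.undistort(img, mtx, dist, None, mtx)
```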
# Test on images
test_dir = "test_images/"
test_images = glob.glob(test_dir+'test*.jpg')
#test_images = glob.glob(test_dir+'straight_lines*.jpg')
#test_images = glob.glob(test_dir+'*.jpg')
for test_image in test_images:
left_line = Line()
right_line = Line()
image = mpimg.imread(test_image)
res = process_image(image, False)
#plot_img(res, True)
print('######################## Sample Stages ########################')
print()
# display stages for a sample image
left_line = Line()
right_line = Line()
image = mpimg.imread('test_images/test3.jpg')
plot_img(image, 'Initial', True)
res = process_image(image, True)
plot_img(res, 'Final', True)
Explanation: Test on images
End of explanation
# Test on Videos
# Clean data for video
#
left_line = Line()
right_line = Line()
cam.clean_mat()
project_video_res = 'project_video_res.mp4'
clip1 = VideoFileClip("project_video.mp4")
project_video_clip = clip1.fl_image(process_image)
project_video_clip.write_videofile(project_video_res, audio=False)
#
# Clean data for video
#
left_line = Line()
right_line = Line()
cam.clean_mat()
challenge_video_res = 'challenge_video_res.mp4'
clip2 = VideoFileClip('challenge_video.mp4')
challenge_video_clip = clip2.fl_image(process_image)
challenge_video_clip.write_videofile(challenge_video_res, audio=False)
#
# Clean data for video
#
left_line = Line()
right_line = Line()
cam.clean_mat()
harder_challenge_video_res = 'harder_challenge_video_res.mp4'
clip2 = VideoFileClip('harder_challenge_video.mp4')
harder_challenge_video_clip = clip2.fl_image(process_image)
harder_challenge_video_clip.write_videofile(harder_challenge_video_res, audio=False)
#
Explanation: Test on videos
End of explanation |
13 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Incorporating the Parasitoid Model into a Stochastic Model for Parameter Estimation
We have two main tasks here
Step1: $\lambda$ is a probability, so it can only take on continuous values between 0 and 1. We might assume it is close to 1 in starting our MCMC algorithm.
$$\lambda \sim \mbox{Beta}(\alpha = 5,\beta = 1)$$
Step2: $a_1$ and $a_2$ are parameters which control the position of their logistic functions. They center the logistic around a certain time, so they mark the point where the function value will be 0.5. They can take on continuous values, but must be limited
Step3: $b_1$ and $b_2$ are parameters which control the scaling of their logistic functions. They can take on any positive value. Let's use the Gamma distribution. PyMC uses $\alpha$ and $\beta$. What starting values should we choose? Some exploration suggests that $k=2$ and $\theta = 0.5$ may be good values.
$$b_1,b_2 \sim \mbox{Gamma}(k=3,\theta=1)$$
Step4: The wind flight logistic parameters $a_w$ and $b_w$ need not have an upper bound. $a_w$ positions the distribution and should probably start at 2.2 given the result of Kristensen et al. genetic algorithm. $b_w$ is the shape parameter, and we don't have much info on this. We model both with a gamma distribution.
$$a_w \sim \mbox{Gamma}(k=2.2,\theta=1)\ \ \ \ \ \ \ \ \ b_w \sim \mbox{Gamma}(k=5,\theta=1)$$
Step5: There may be two diffusion covariance matrices - one for in-flow diffusion and another for out-of-flow diffusion (a parasitoid is split between these two choices by the probability $\lambda$). Starting off, we might just assume they are the same and let the data dictate any difference.
For the diffusion covariance matrix parameters, $\sigma_x$ and $\sigma_y$ should be greater than zero and $\rho$ should be between -1 and 1. We may expect $\sigma_x$ and $\sigma_y$ to reflect a reasonable distance in meters that a parasitoid might fly during a single day under its own power, as given in the Kalbar study. $\rho$ could possibly be noninformative?? Ask Nadiah for value obtained from Kalbar study.
$$\sigma_x \sim \mbox{Gamma}(k=42.2,\theta=0.5),\ \ \ \ \ \sigma_y \sim \mbox{Gamma}(k=10.6,\theta=1)\ \ \ \ \ \rho \sim \mbox{Uniform}(-1,1)$$
Step6: $\mu_r$, a parameter that scales wind speed to flight speed, is not really known but expected to be 1 or 2 given the previous study. Maybe a normal distribution with a standard deviation covering this range?
$$ \mu_r \sim \mathcal{N}(\mu=1.5,\sigma=0.75) $$
Step7: Flight duration $t_{dur}$ is a finicky value... it represents an average time that parasitoids remain in flight given that they decide to take an in-flow flight sometime during the day. The real situation is complicated, probably dependent upon mating, landscape variables, age/sex of the wasp, etc. In the Python implementation of the model, we have discretized time into minutes, so for purely numerical purposes, this number should be a positive integer value in minutes. Let's go with a Poisson distribution shifted by 1 to avoid zero.
$$ t_{dur} \sim \mbox{Poi}(\lambda=10) $$
Modeling parasitoid emergence sampling
I could not find any correlation between Whitefly emergence numbers in the sentinel fields and E. Hayati emergence numbers. As a result, I will ignore emergence data other than E. Hayati.
The parasitoid population model will return population densities in the form of expected population numbers per unit square. This expected value is then propagated forward in time by some function to get the expected number of wasps in a given location whose oviposition would result in an emergence date. We will model the actual number of emergences per date per unit square as a Poisson random variable with the model predicted population times an emergence-per-wasp factor as the mean. Each emergence in the area then has a chance of being observed with probability $\beta$, which will be location dependent. Let $X_i$ be the number of emergences in field $i$ when it is sampled and let $\gamma_i$ be the model expected population. Let $\xi$ be a population to expected emergence scaling factor. Let $Y_i$ be the number of wasps actually observed emerging.
$$ X_i\sim \mbox{Poi}(\xi\gamma_i) $$
Note | Python Code:
%matplotlib inline
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
a, b = 5,1
plt.figure()
x = np.linspace(0,1,100)
plt.plot(x,stats.beta.pdf(x,a,b),label='beta pdf')
plt.legend(loc='best')
plt.show()
Explanation: Incorporating the Parasitoid Model into a Stochastic Model for Parameter Estimation
We have two main tasks here:
1. Assign priors to the parasitoid model parameters
2. Incorporate the parasitoid model into a larger stochastic model for field sampling and acquiring emergence data, so that we can compute the likelihood of our data given the model.
Assigning priors to the parasitoid model parameters
The parasitoid model has the following parameters:
- current simulation day, $d$ (fixed)
- wind data, $\mathbf{w}(t,d)$ (fixed)
- $h$ flight probability function parameters, including:
+ $\lambda$, probability of flying during the day given ideal conditions
+ $f$ time probability function parameters, including:
- morning logistic parameters, $a_1$, $b_1$
- evening logistic parameters, $a_2$, $b_2$
+ $g$ wind flight probability function logistic parameters $a_w$, $b_w$
- diffusion covariance matrix parameters (perhaps two of these), can be split into:
+ $\sigma_x$ standard deviation in $x$ direction
+ $\sigma_y$ standard deviation in $y$ direction
+ $\rho$ correlation
- distance vs. windspeed scaling parameter $\mu_r$
- flight duration in minutes $t_{dur}$
End of explanation
%matplotlib inline
from math import sqrt
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
mu1,sig1 = 6,1
mu2,sig2 = 18,1
rv1 = stats.norm(mu1,sig1)
rv2 = stats.norm(mu2,sig2)
plt.figure()
x = np.linspace(0,24,200)
plt.hold(True)
plt.plot(x,rv1.pdf(x),'b',label='mid morning')
plt.plot(x,rv2.pdf(x),'r',label='mid evening')
plt.legend(loc='best')
plt.hold(False)
plt.show()
Explanation: $\lambda$ is a probability, so it can only take on continuous values between 0 and 1. We might assume it is close to 1 in starting our MCMC algorithm.
$$\lambda \sim \mbox{Beta}(\alpha = 5,\beta = 1)$$
End of explanation
%matplotlib inline
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
k = 3
theta = 1
rv = stats.gamma(k,scale=theta)
plt.figure()
x = np.linspace(rv.ppf(0.001),rv.ppf(0.999))
plt.plot(x,rv.pdf(x))
plt.show()
gstats = rv.stats()
print('mean = {0}, variance = {1}'.format(gstats[0],gstats[1]))
Explanation: $a_1$ and $a_2$ are parameters which control the position of their logistic functions. They center the logistic around a certain time, so they mark the point where the function value will be 0.5. They can take on continuous values, but must be limited: $a_1$ should take on a value sometime generally around sunrise, and $a_2$ should take on a value sometime generally around sunset. We can use a truncated normal distribution for each:
$$a_1 \sim \mathcal{N}(\mu=6,\sigma^2=1,a=0,b=12)\ \ \ \ \ \ \ \ \ a_2 \sim \mathcal{N}(\mu=18,\sigma^2=1,a=12,b=24)$$
End of explanation
%matplotlib inline
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
k_a = 2.2
theta_a = 1
rv_a = stats.gamma(k_a,scale=theta_a)
k_b = 5
theta_b = 1
rv_b = stats.gamma(k_b,scale=theta_b)
plt.figure(figsize=(12,4))
plt.subplot(121)
x = np.linspace(rv_a.ppf(0.001),rv_a.ppf(0.999))
plt.plot(x,rv_a.pdf(x),label=r'$a_w$')
plt.legend()
plt.subplot(122)
x = np.linspace(rv_b.ppf(0.001),rv_b.ppf(0.999))
plt.plot(x,rv_b.pdf(x),label=r'$b_w$')
plt.legend()
plt.show()
gstats = rv_a.stats()
print('mean a_w = {0}, variance a_w = {1}'.format(gstats[0],gstats[1]))
gstats = rv_b.stats()
print('mean b_w = {0}, variance b_w = {1}'.format(gstats[0],gstats[1]))
Explanation: $b_1$ and $b_2$ are parameters which control the scaling of their logistic functions. They can take on any positive value. Let's use the Gamma distribution. PyMC uses $\alpha$ and $\beta$. What starting values should we choose? Some exploration suggests that $k=2$ and $\theta = 0.5$ may be good values.
$$b_1,b_2 \sim \mbox{Gamma}(k=3,\theta=1)$$
End of explanation
%matplotlib inline
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
k_x = 21.1*2
theta_x = 0.5
rv_x = stats.gamma(k_x,scale=theta_x)
k_y = 10.6
theta_y = 1
rv_y = stats.gamma(k_y,scale=theta_y)
rv_rho = stats.uniform(-1,2)
plt.figure(figsize=(18,4))
plt.subplot(131)
x = np.linspace(rv_x.ppf(0.001),rv_x.ppf(0.999))
plt.plot(x,rv_x.pdf(x),label=r'$\sigma_x$')
plt.legend()
plt.subplot(132)
x = np.linspace(rv_y.ppf(0.001),rv_y.ppf(0.999))
plt.plot(x,rv_y.pdf(x),label=r'$\sigma_y$')
plt.legend()
plt.subplot(133)
x = np.linspace(-1,1)
plt.plot(x,rv_rho.pdf(x),label=r'$\rho$')
plt.legend()
plt.show()
gstats = rv_x.stats()
print('mean std x = {0}, variance std x = {1}'.format(gstats[0],gstats[1]))
gstats = rv_y.stats()
print('mean std y = {0}, variance std y = {1}'.format(gstats[0],gstats[1]))
Explanation: The wind flight logistic parameters $a_w$ and $b_w$ need not have an upper bound. $a_w$ positions the distribution and should probably start at 2.2 given the result of Kristensen et al. genetic algorithm. $b_w$ is the shape parameter, and we don't have much info on this. We model both with a gamma distribution.
$$a_w \sim \mbox{Gamma}(k=2.2,\theta=1)\ \ \ \ \ \ \ \ \ b_w \sim \mbox{Gamma}(k=5,\theta=1)$$
End of explanation
%matplotlib inline
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
mu = 1.5
sig = 0.75
mu_r = stats.norm(mu,sig)
plt.figure()
x = np.linspace(mu_r.ppf(0.001),mu_r.ppf(0.999))
plt.plot(x,mu_r.pdf(x),label=r'$\mu_r$')
plt.legend()
plt.show()
Explanation: There may be two diffusion covariance matrices - one for in-flow diffusion and another for out-of-flow diffusion (a parasitoid is split between these two choices by the probability $\lambda$). Starting off, we might just assume they are the same and let the data dictate any difference.
For the diffusion covariance matrix parameters, $\sigma_x$ and $\sigma_y$ should be greater than zero and $\rho$ should be between -1 and 1. We may expect $\sigma_x$ and $\sigma_y$ to reflect a reasonable distance in meters that a parasitoid might fly during a single day under its own power, as given in the Kalbar study. $\rho$ could possibly be noninformative?? Ask Nadiah for value obtained from Kalbar study.
$$\sigma_x \sim \mbox{Gamma}(k=42.2,\theta=0.5),\ \ \ \ \ \sigma_y \sim \mbox{Gamma}(k=10.6,\theta=1)\ \ \ \ \ \rho \sim \mbox{Uniform}(-1,1)$$
End of explanation
%matplotlib inline
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
lambdavar = 10
rv = stats.poisson(lambdavar,loc=1)
plt.figure()
x = np.arange(rv.ppf(0.001),rv.ppf(0.999))
plt.hold(True)
plt.scatter(x,rv.pmf(x),label=r'$t_{dur}$')
plt.vlines(x,0,rv.pmf(x))
plt.legend()
plt.show()
Explanation: $\mu_r$, a parameter that scales wind speed to flight speed, is not really known but expected to be 1 or 2 given the previous study. Maybe a normal distribution with a standard deviation covering this range?
$$ \mu_r \sim \mathcal{N}(\mu=1.5,\sigma=0.75) $$
End of explanation
%matplotlib inline
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
mean = 0.1
def b_param(a):
return a*(1-mean)/mean
rv_list = []
for ii in range(10):
a = ii*0.1 + 0.1
rv_list.append(stats.beta(a,b_param(a)))
rv_list2 = []
# fix var = mean. This only valid if mean < 0.5
for ii in range(10):
mean = ii*0.004 +0.004
a = 1-2*mean
rv_list2.append(stats.beta(a,b_param(a)))
plt.figure(figsize=(18,4))
plt.hold(True)
plt.subplot(121)
for n,rv in enumerate(rv_list):
x = np.linspace(rv.ppf(0.001),rv.ppf(0.999))
plt.plot(x,rv.pdf(x),color='{}'.format(0.5-0.5*(n+1)/len(rv_list)))
plt.axis([0,1,0,10])
plt.subplot(122)
for n,rv in enumerate(rv_list2):
x = np.linspace(rv.ppf(0.001),rv.ppf(0.999))
plt.plot(x,rv.pdf(x),color='{}'.format(0.5-0.5*(n+1)/len(rv_list)))
#plt.axis([0,0.1,0,1000])
plt.show()
Explanation: Flight duration $t_{dur}$ is a finicky value... it represents an average time that parasitoids remain in flight given that they decide to take an in-flow flight sometime during the day. The real situation is complicated, probably dependent upon mating, landscape variables, age/sex of the wasp, etc. In the Python implementation of the model, we have discretized time into minutes, so for purely numerical purposes, this number should be a positive integer value in minutes. Let's go with a Poisson distribution shifted by 1 to avoid zero.
$$ t_{dur} \sim \mbox{Poi}(\lambda=10) $$
Modeling parasitoid emergence sampling
I could not find any correlation between Whitefly emergence numbers in the sentinel fields and E. Hayati emergence numbers. As a result, I will ignore emergence data other than E. Hayati.
The parasitoid population model will return population densities in the form of expected population numbers per unit square. This expected value is then propagated forward in time by some function to get the expected number of wasps in a given location whose oviposition would result in an emergence date. We will model the actual number of emergences per date per unit square as a Poisson random variable with the model predicted population times an emergence-per-wasp factor as the mean. Each emergence in the area then has a chance of being observed with probability $\beta$, which will be location dependent. Let $X_i$ be the number of emergences in field $i$ when it is sampled and let $\gamma_i$ be the model expected population. Let $\xi$ be a population to expected emergence scaling factor. Let $Y_i$ be the number of wasps actually observed emerging.
$$ X_i\sim \mbox{Poi}(\xi\gamma_i) $$
Note: $\mbox{Gamma}(1,r)$ is $\mbox{Exp}(r)$ in $(\alpha,\beta)$ parametrization. Let $r$ be average time between ovipositions.
$$ \xi\sim \mbox{Gamma}(1,1) $$
$$ Y_i\sim \mbox{Bin}(X_i,\beta_i) $$
Then $Y_i$ is a thinned Poisson process:
$$ Y_i\sim \mbox{Poi}(\xi\gamma_i\beta_i) $$
The probability $\beta_i$ of observing a wasp in field $i$ is a random variable related to the density of wasps in the part of the field sampled. In a field with a perfectly uniform distribution of wasps,
$$ \beta_i = \frac{\tilde{A}}{A_i} $$
where $\tilde{A}$ is a random variable denoting the area sampled in each of the fields with total area $A_i$. Otherwise, $\beta_i$ has mean $\tilde{A}$, but its other moments change depending on the field-specific heterogenity. Then we might model
$$ \beta_i \sim \mbox{Beta}(\alpha,\beta\ \big\vert\ \mu=\frac{\alpha}{\alpha+\beta}=\frac{\tilde{A}}{A_i},\sigma^2=\mbox{min}(\mu,0.1)) $$
$$ \tilde{A} \sim \mbox{TruncNorm}(\mu,\sigma^2,0,\mbox{min}\ A_i) $$
Some examples of different Beta functions...
End of explanation |
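To make the observation model concrete, here is a minimal scipy.stats sketch of evaluating the thinned-Poisson likelihood of an observed emergence count for one field. Every number below is a made-up placeholder; in practice $\gamma_i$ would come from the parasitoid population model and $\xi$, $\beta_i$ would be draws from their priors:
```python
import scipy.stats as stats

# Hypothetical values for a single sentinel field i
gamma_i = 40.0   # model-expected population in the sampled area (placeholder)
xi = 1.0         # emergence-per-wasp scaling factor, e.g. a draw from Gamma(1,1)
beta_i = 0.05    # probability that an emergence is observed, e.g. a draw from the Beta prior
y_obs = 3        # observed number of emerging E. Hayati

# Y_i ~ Poi(xi * gamma_i * beta_i), the thinned Poisson above
log_like = stats.poisson.logpmf(y_obs, mu=xi * gamma_i * beta_i)
print(log_like)
```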
14 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob)
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M * N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M * N) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise
Step11: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing-gradient problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise
Step13: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
Step17: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the batch size and number of batches we can make
batch_size = n_seqs * n_steps # number of characters in a batch
n_batches = len(arr)//batch_size
# Keep only enough characters to make full batches
arr = arr[:n_batches*batch_size]
# Reshape into n_seqs rows
arr = np.reshape(arr,(n_seqs,-1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:,n:n+n_steps]
# The targets, shifted by one (wrapped around so the last target of each sequence is that sequence's first input)
y=np.zeros_like(x)
y[:,:-1],y[:,-1]=x[:,1:],x[:,0]
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/[email protected]" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself.
End of explanation
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs =
targets =
# Keep probability placeholder for drop out layers
keep_prob =
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise: Create the input placeholders in the function below.
End of explanation
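One possible way to fill in the placeholders above (a sketch against the TensorFlow 1.x API used in this notebook, not necessarily the official solution):
```python
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')  # scalar, so no shape is given
```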
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm =
# Add dropout to the cell outputs
drop =
# Stack up multiple LSTM layers, for deep learning
cell =
initial_state =
return cell, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
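A possible body for build_lstm, following the TensorFlow 1.1 pattern described above (a sketch, not necessarily the official solution):
```python
def build_cell(lstm_size, keep_prob):
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    return drop

cell = tf.contrib.rnn.MultiRNNCell(
    [build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
```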
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output =
# Reshape seq_output to a 2D tensor with lstm_size columns
x =
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w =
softmax_b =
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits =
# Use softmax to get the probabilities for predicted characters
out =
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise: Implement the output layer in the function below.
End of explanation
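One way the blanks above might be filled in (a sketch, not necessarily the official solution; the weight initialisation scale is an assumption):
```python
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])

with tf.variable_scope('softmax'):
    softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
    softmax_b = tf.Variable(tf.zeros(out_size))

logits = tf.matmul(x, softmax_w) + softmax_b
out = tf.nn.softmax(logits, name='predictions')
```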
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
y_one_hot =
y_reshaped =
# Softmax cross entropy loss
loss =
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M * N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M * N) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise: Implement the loss calculation in the function below.
End of explanation
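A possible way to fill in the loss calculation (a sketch, not necessarily the official solution):
```python
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())

loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
```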
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing-gradient problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob =
# Build the LSTM cell
cell, self.initial_state =
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot =
# Run each sequence step through the RNN with tf.nn.dynamic_rnn
outputs, state =
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits =
# Loss and optimizer (with gradient clipping)
self.loss =
self.optimizer =
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.
End of explanation
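One possible wiring of the class body, reusing the helper functions above (a sketch, not necessarily the official solution):
```python
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)

# One-hot encode the input tokens, then run them through the stacked LSTM
x_one_hot = tf.one_hot(self.inputs, num_classes)
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state

self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
```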
batch_size = 10 # Sequences per batch
num_steps = 50 # Number of sequence steps per batch
lstm_size = 128 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.01 # Learning rate
keep_prob = 0.5 # Dropout keep probability
Explanation: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise: Set the hyperparameters above to train the network. Watch the training loss; it should be consistently dropping. Also, I highly advise running this on a GPU.
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, and the network predicts the next character. We can then use that new character to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
15 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demonstration of pdfplumber's table-extraction options
This notebook uses a report from the FBI's National Instant Criminal Background Check System.
Import pdfplumber
Step1: Load the PDF
Step2: Get the first page
Step3: What data would we get if we used the default settings?
We can check by using PageImage.debug_tablefinder()
Step4: The default settings correctly identify the table's vertical demarcations, but don't capture the horizontal demarcations between each group of five states/territories. So
Step5: Cleaning up the data
.extract_table worked with our custom settings, but the table it detected contains extraneous headers and footers. Since we know that the Alabama row is the first, and that there are 56 rows we care about (50 states + DC + 4 territories + the "Totals" row), we can slice away the rest
Step6: The first row
Step7: The last
Step8: Now, let's turn those rows into dictionaries, and also convert strings-representing-numbers to the numbers themselves, e.g., "18,870" -> 18870
Step9: Now here's the first row, parsed
Step10: Sort the data
For demonstration purposes, let's list the rows with the highest number of handgun-only background checks
Step11: Use extract_text to extract the report month
It looks like the month of the report is listed in an area 35px to 65px from the top of the page. But there's also some other text directly above and below it. So when we crop for that area, we'll use .within_bbox instead of .crop to select only characters (and other objects) that are fully within the bounding box. | Python Code:
import pdfplumber
print(pdfplumber.__version__)
Explanation: Demonstration of pdfplumber's table-extraction options
This notebook uses a report from the FBI's National Instant Criminal Background Check System.
Import pdfplumber
End of explanation
pdf = pdfplumber.open("../pdfs/background-checks.pdf")
Explanation: Load the PDF
End of explanation
p0 = pdf.pages[0]
im = p0.to_image()
im
Explanation: Get the first page
End of explanation
im.reset().debug_tablefinder()
Explanation: What data would we get if we used the default settings?
We can check by using PageImage.debug_tablefinder():
End of explanation
table_settings = {
"vertical_strategy": "lines",
"horizontal_strategy": "text",
"snap_y_tolerance": 5,
"intersection_x_tolerance": 15,
}
im.reset().debug_tablefinder(table_settings)
table = p0.extract_table(table_settings)
for row in table[:5]:
print(row)
Explanation: The default settings correctly identify the table's vertical demarcations, but don't capture the horizontal demarcations between each group of five states/territories. So:
Using custom .extract_table's settings
Because the columns are separated by lines, we use vertical_strategy="lines"
Because the rows are, primarily, separated by gutters between the text, we use horizontal_strategy="text"
To snap together a handful of the gutters at the top which aren't fully flush with one another, we use snap_y_tolerance, which snaps horizontal lines within a certain distance to the same vertical alignment.
And because the left and right-hand extremities of the text aren't quite flush with the vertical lines, we use "intersection_x_tolerance": 15
End of explanation
core_table = table[4:4+56]
Explanation: Cleaning up the data
.extract_table worked with our custom settings, but the table it detected contains extraneous headers and footers. Since we know that the Alabama row is the first, and that there are 56 rows we care about (50 states + DC + 4 territories + the "Totals" row), we can slice away the rest:
End of explanation
" • ".join(core_table[0])
Explanation: The first row:
End of explanation
" • ".join(core_table[-1])
Explanation: The last:
End of explanation
COLUMNS = [
"state",
"permit",
"handgun",
"long_gun",
"other",
"multiple",
"admin",
"prepawn_handgun",
"prepawn_long_gun",
"prepawn_other",
"redemption_handgun",
"redemption_long_gun",
"redemption_other",
"returned_handgun",
"returned_long_gun",
"returned_other",
"rentals_handgun",
"rentals_long_gun",
"private_sale_handgun",
"private_sale_long_gun",
"private_sale_other",
"return_to_seller_handgun",
"return_to_seller_long_gun",
"return_to_seller_other",
"totals"
]
def parse_value(i, x):
if i == 0: return x
if x == "": return None
return int(x.replace(",", ""))
from collections import OrderedDict
def parse_row(row):
return {COLUMNS[i]:parse_value(i, cell)
for i, cell in enumerate(row)}
data = [ parse_row(row) for row in core_table ]
Explanation: Now, let's turn those rows into dictionaries, and also convert strings-representing-numbers to the numbers themselves, e.g., "18,870" -> 18870:
End of explanation
data[0]
Explanation: Now here's the first row, parsed:
End of explanation
for row in list(reversed(sorted(data, key=lambda x: x["handgun"])))[:6]:
print("{state}: {handgun:,d} handgun-only checks".format(**row))
Explanation: Sort the data
For demonstration purposes, let's list the rows with the highest number of handgun-only background checks:
End of explanation
month_crop = p0.within_bbox((0, 35, p0.width, 65))
month_crop.to_image()
month_chars = month_crop.extract_text()
month_chars
Explanation: Use extract_text to extract the report month
It looks like the month of the report is listed in an area 35px to 65px from the top of the page. But there's also some other text directly above and below it. So when we crop for that area, we'll use .within_bbox instead of .crop to select only characters (and other objects) that are fully within the bounding box.
End of explanation |
16 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Online CNMF-E
This demo shows an example of doing online analysis on one-photon data. We compare offline and online approaches. The dataset used is courtesy of the Miniscope project.
Step1: Select file(s) to be processed
The download_demo function will download the specific file for you and return the complete path to the file which will be stored in your caiman_data directory. If you adapt this demo for your data, make sure to pass the complete path to your file(s). Remember to pass the fnames variable as a list. Note that the memory requirements of the offline CNMF-E algorithm are much higher than those of the standard CNMF algorithm. One of the benefits of the online approach is the reduced memory requirements.
Step2: Batch (offline) approach
We start with motion correction and then proceed with the source extraction using the CNMF-E algorithm. For a detailed 1p demo check demo_pipeline_cnmfE.ipynb.
Step3: inspect motion correction results
Step4: The motion correction results look good. We then proceed with memory mapping and checking the correlation/pnr images.
Step5: Inspect correlation and PNR images to set relevant thresholds
Step6: Set parameters for source extraction
From the images above we select min_pnr = 10 and min_corr = 0.8. We pass these alongside the other parameters needed for offline 1p processing.
Step7: View the results
Step8: Show a movie with the results
Step9: Online Processing
Now try the online approach. The idea behind the online algorithm is simple
Step10: Plot timing
The plot below shows the time spent on each part of the algorithm (motion correction, tracking of current components, detect new components, update shapes) for each frame. Note that if you displayed a movie while processing the data (show_movie=True) the time required to generate this movie will be included here.
Step11: Clean up and compare two approaches
Even though the online algorithm screens any new components, we can still perform the quality tests to filter out any false positive components. To do that, we first need to apply the inferred shifts to the original data in order to have the whole registered dataset in memory mapped form.
Step12: Difference in inferred shifts
Accurate motion correction is important for the online algorithm. Below we plot the difference in the estimated shifts between the two approaches. Note that the online shifts have been rescaled by a factor of ds_factor.
Step13: Constant shifts in the FOV will not significantly affect the results. What is most important is deviations.
Step14: The standard deviation is at a subpixel level (although it can still be significant). The high degree of similarity can also be seen from the correlation between the shifts of the two approaches. | Python Code:
%load_ext autoreload
%autoreload 2
from IPython.display import display, clear_output
import glob
import logging
import numpy as np
import os
import scipy
logging.basicConfig(format=
"%(relativeCreated)12d [%(filename)s:%(funcName)10s():%(lineno)s] [%(process)d] %(message)s",
# filename="/tmp/caiman.log",
level=logging.WARNING)
import caiman as cm
from caiman.source_extraction import cnmf as cnmf
from caiman.motion_correction import MotionCorrect
from caiman.utils.utils import download_demo
import matplotlib.pyplot as plt
from caiman.utils.visualization import nb_inspect_correlation_pnr
import holoviews as hv
import bokeh.plotting as bpl
bpl.output_notebook()
hv.notebook_extension('bokeh')
Explanation: Online CNMF-E
This demo shows an example of doing online analysis on one-photon data. We compare offline and online approaches. The dataset used is courtesy of the Miniscope project.
End of explanation
fnames = [download_demo('msCam13.avi')]
Explanation: Select file(s) to be processed
The download_demo function will download the specific file for you and return the complete path to the file which will be stored in your caiman_data directory. If you adapt this demo for your data, make sure to pass the complete path to your file(s). Remember to pass the fnames variable as a list. Note that the memory requirements of the offline CNMF-E algorithm are much higher than those of the standard CNMF algorithm. One of the benefits of the online approach is the reduced memory requirements.
End of explanation
# motion correction parameters
motion_correct = True # flag for performing motion correction
pw_rigid = False # flag for performing piecewise-rigid motion correction (otherwise just rigid)
gSig_filt = (7, 7) # size of high pass spatial filtering, used in 1p data
max_shifts = (20, 20) # maximum allowed rigid shift
border_nan = 'copy' # replicate values along the boundaries
mc_dict = {
'pw_rigid': pw_rigid,
'max_shifts': max_shifts,
'gSig_filt': gSig_filt,
'border_nan': border_nan
}
opts = cnmf.params.CNMFParams(params_dict=mc_dict)
#%% start a cluster for parallel processing (if a cluster already exists it will be closed and a new session will be opened)
if 'dview' in locals():
cm.stop_server(dview=dview)
c, dview, n_processes = cm.cluster.setup_cluster(
backend='local', n_processes=None, single_thread=False)
mc = MotionCorrect(fnames, dview=dview, **opts.get_group('motion'))
mc.motion_correct(save_movie=True)
Explanation: Batch (offline) approach
We start with motion correction and then proceed with the source extraction using the CNMF-E algorithm. For a detailed 1p demo check demo_pipeline_cnmfE.ipynb.
End of explanation
inspect_results = False
if inspect_results:
cm.concatenate((cm.load(fnames), cm.load(mc.mmap_file)), axis=1).play()
#plt.figure(); plt.plot(mc.shifts_rig); plt.legend(['x-shifts', 'y-shifts'])
Explanation: inspect motion correction results
End of explanation
from time import time
fname_new = cm.save_memmap(mc.mmap_file, base_name='memmap_', order='C',
border_to_0=0, dview=dview)
Yr, dims, T = cm.load_memmap(fname_new)
images = Yr.T.reshape((T,) + dims, order='F')
Explanation: The motion correction results look good. We then proceed with memory mapping and checking the correlation/pnr images.
End of explanation
gSig = (6, 6)
cn_filter, pnr = cm.summary_images.correlation_pnr(images[::max(T//1000, 1)], gSig=gSig[0], swap_dim=False) # change swap dim if output looks weird, it is a problem with tiffile
# inspect the summary images and set the parameters
nb_inspect_correlation_pnr(cn_filter, pnr)
Explanation: Inspect correlation and PNR images to set relevant thresholds
End of explanation
min_pnr = 10
min_corr = 0.8
rf = 48 # half size of each patch
stride = 8 # amount of overlap between patches
ssub = 1 # spatial downsampling factor
decay_time = 0.4 # length of typical transient (in seconds)
fr = 10 # imaging rate (Hz)
gSig = (6, 6) # expected half size of neurons
gSiz = (15, 15) # half size for neuron bounding box
p = 0 # order of AR indicator dynamics
min_SNR = 1.5 # minimum SNR for accepting new components
rval_thr = 0.85 # correlation threshold for new component inclusion
merge_thr = 0.65 # merging threshold
K = None # initial number of components
cnmfe_dict = {'fnames': fnames,
'fr': fr,
'decay_time': decay_time,
'method_init': 'corr_pnr',
'gSig': gSig,
'gSiz': gSiz,
'rf': rf,
'stride': stride,
'p': p,
'nb': 0,
'ssub': ssub,
'min_SNR': min_SNR,
'min_pnr': min_pnr,
'min_corr': min_corr,
'bas_nonneg': False,
'center_psf': True,
'rval_thr': rval_thr,
'only_init': True,
'merge_thr': merge_thr,
'K': K}
opts.change_params(cnmfe_dict);
from time import time
t1 = -time()
cnm = cnmf.CNMF(n_processes=n_processes, dview=dview, params=opts)
cnm.fit(images)
t1 += time()
Explanation: Set parameters for source extraction
From the images above we select min_pnr = 10 and min_corr = 0.8. We pass these alongside the other parameters needed for offline 1p processing.
End of explanation
cnm.estimates.plot_contours_nb(img=pnr)
cnm.estimates.hv_view_components(img=cn_filter)
Explanation: View the results
End of explanation
cnm.estimates.play_movie(images, magnification=0.75, include_bck=False)
Explanation: Show a movie with the results
End of explanation
from copy import deepcopy
online_opts = deepcopy(cnm.params)
rf = 48 # half size of patch (used only during initialization)
stride = 8 # overlap between patches (used only during initialization)
ssub = 1 # spatial downsampling factor (during initialization)
ds_factor = 2*ssub # spatial downsampling factor (during online processing)
ssub_B = 4 # background downsampling factor (use that for faster processing)
gSig = (10//ds_factor, 10//ds_factor) # expected half size of neurons
gSiz = (22//ds_factor, 22//ds_factor)
sniper_mode = False # flag using a CNN to detect new neurons (o/w space correlation is used)
init_batch = 300 # number of frames for initialization (presumably from the first file)
expected_comps = 500 # maximum number of expected components used for memory pre-allocation (exaggerate here)
dist_shape_update = False # flag for updating shapes in a distributed way
min_num_trial = 5 # number of candidate components per frame
K = None # initial number of components
epochs = 2 # number of passes over the data
show_movie = False # show the movie with the results as the data gets processed
use_corr_img = True # flag for using the corr*pnr image when searching for new neurons (otherwise residual)
online_dict = {'epochs': epochs,
'nb': 0,
'ssub': ssub,
'ssub_B': ssub_B,
'ds_factor': ds_factor, # ds_factor >= ssub should hold
'gSig': gSig,
'gSiz': gSiz,
'gSig_filt': (3, 3),
'min_corr': min_corr,
'bas_nonneg': False,
'center_psf': True,
'max_shifts_online': 20,
'rval_thr': rval_thr,
'motion_correct': True,
'init_batch': init_batch,
'only_init': True,
'init_method': 'cnmf',
'normalize_init': False,
'update_freq': 200,
'expected_comps': expected_comps,
'sniper_mode': sniper_mode, # set to False for 1p data
'dist_shape_update' : dist_shape_update,
'min_num_trial': min_num_trial,
'epochs': epochs,
'use_corr_img': use_corr_img,
'show_movie': show_movie}
online_opts.change_params(online_dict);
cnm_online = cnmf.online_cnmf.OnACID(params=online_opts, dview=dview)
cnm_online.fit_online()
#images = cm.load(fnames[0], subindices=slice(0,1000))
#Cn, pnr = cm.summary_images.correlation_pnr(images[::1], gSig=gSig[0], swap_dim=False) # change swap dim if output looks weird, it is a problem with tiffile
cnm_online.estimates.nb_view_components(img=pnr, denoised_color='red');
cnm_online.estimates.plot_contours_nb(img=pnr)
Explanation: Online Processing
Now try the online approach. The idea behind the online algorithm is simple:
- First initialize the estimates by running the batch (offline) algorithm in small subset.
- Then process each frame as it arrives. The processing consists of:
* Motion correct the new frame
* Extract the activity of existing neurons at this frame, and neuropil
* Search for new neurons that appear in this frame and have not been detected earlier.
- Periodically update shapes of existing neurons and background model.
Setup additional parameters for online processing
End of explanation
show_cumulative = True
#if show_cumulative:
T_init = np.array([cnm_online.t_init] + [0]*(epochs*T-1))
T_motion = 1e3*np.array([0]*init_batch + cnm_online.t_motion)/1e3
T_detect = 1e3*np.array([0]*init_batch + cnm_online.t_detect)/1e3
T_shapes = 1e3*np.array([0]*init_batch + cnm_online.t_shapes)/1e3
T_online = 1e3*np.array([0]*init_batch + cnm_online.t_online)/1e3 - T_motion - T_detect - T_shapes
plt.figure()
plt.stackplot(np.arange(len(T_motion)), np.cumsum(T_init), np.cumsum(T_motion), np.cumsum(T_online), np.cumsum(T_detect), np.cumsum(T_shapes))
plt.legend(labels=['init', 'motion', 'process', 'detect', 'shapes'], loc=2)
for i in range(epochs - 1):
plt.plot([(i+1)*T, (i+1)*T], [0, np.array(cnm_online.t_online).sum()+cnm_online.t_init], '--k')
plt.title('Processing time allocation')
plt.xlabel('Frame #')
plt.ylabel('Processing time [ms]')
#plt.ylim([0, 1.2e3*np.percentile(np.array(cnm_online.t_online), 90)]);
cnm_online.estimates.play_movie(imgs=images, magnification=0.75, include_bck=False)
Explanation: Plot timing
The plot below shows the time spent on each part of the algorithm (motion correction, tracking of current components, detect new components, update shapes) for each frame. Note that if you displayed a movie while processing the data (show_movie=True) the time required to generate this movie will be included here.
End of explanation
if online_opts.online['motion_correct']:
shifts = cnm_online.estimates.shifts[-cnm_online.estimates.C.shape[-1]:]
if not opts.motion['pw_rigid']:
memmap_file = cm.motion_correction.apply_shift_online(images, shifts,
save_base_name='MC')
else:
mc = MotionCorrect(fnames, dview=dview, **online_opts.get_group('motion'))
mc.y_shifts_els = [[sx[0] for sx in sh] for sh in shifts]
mc.x_shifts_els = [[sx[1] for sx in sh] for sh in shifts]
memmap_file = mc.apply_shifts_movie(fnames, rigid_shifts=False,
save_memmap=True,
save_base_name='MC')
else: # To do: apply non-rigid shifts on the fly
memmap_file = images.save(fnames[0][:-4] + 'mmap')
cnm_online.mmap_file = memmap_file
Yr_online, dims, T = cm.load_memmap(memmap_file)
#cnm_online.estimates.dview=dview
#cnm_online.estimates.compute_residuals(Yr=Yr_online)
images_online = np.reshape(Yr_online.T, [T] + list(dims), order='F')
min_SNR = 2 # peak SNR for accepted components (if above this, acept)
rval_thr = 0.85 # space correlation threshold (if above this, accept)
use_cnn = False # use the CNN classifier
cnm_online.params.change_params({'min_SNR': min_SNR,
'rval_thr': rval_thr,
'use_cnn': use_cnn})
cnm_online.estimates.evaluate_components(images_online, cnm_online.params, dview=dview)
cnm_online.estimates.Cn = pnr
cnm_online.estimates.plot_contours_nb(img=pnr, idx=cnm_online.estimates.idx_components)
cnm_online.estimates.hv_view_components(img=pnr, idx=cnm_online.estimates.idx_components,
denoised_color='red')
cnm_online.estimates.hv_view_components(img=pnr, idx=cnm_online.estimates.idx_components_bad,
denoised_color='red')
Explanation: Clean up and compare two approaches
Even though the online algorithm screens any new components, we can still perform the quality tests to filter out any false positive components. To do that, we first need to apply the inferred shifts to the original data in order to have the whole registered dataset in memory mapped form.
End of explanation
plt.plot(np.array(mc.shifts_rig) - ds_factor*np.array(cnm_online.estimates.shifts[:1000]));
plt.legend(['x-shifts', 'y-shifts']);
plt.title('Difference between offline and online shifts')
plt.xlabel('Frame #')
plt.ylabel('pixels')
Explanation: Difference in inferred shifts
Accurate motion correction is important for the online algorithm. Below we plot the difference in the estimated shifts between the two approaches. Note that the online shifts have been rescaled by a factor of ds_factor.
End of explanation
np.std(np.array(mc.shifts_rig) - ds_factor*np.array(cnm_online.estimates.shifts[:1000]), axis=0)
Explanation: Constant shifts in the FOV will not significantly affect the results. What is most important is deviations.
End of explanation
np.corrcoef(np.array(mc.shifts_rig).T, np.array(cnm_online.estimates.shifts[:1000]).T)
Explanation: The standard deviation is at a subpixel level (although it can still be significant). The high degree of similarity can also be seen from the correlation between the shifts of the two approaches.
End of explanation |
17 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling dynamics of FS Peptide
This example shows a typical, basic usage of the MSMBuilder command line to model dynamics of a protein system.
Step1: Get example data
Step2: Featurization
The raw (x, y, z) coordinates from the simulation do not respect the translational and rotational symmetry of our problem. A Featurizer transforms cartesian coordinates into other representations. Here we use the DihedralFeaturizer to turn our data into phi and psi dihedral angles. Observe that the 264*3-dimensional space is reduced to 84 dimensions.
Step3: Preprocessing
Since the range of values in our raw data can vary widely from feature to feature, we can scale values to reduce bias. Here we use the RobustScaler to center and scale our dihedral angles by their respective interquartile ranges.
Step4: Intermediate kinetic model
Step5: tICA Histogram
We can histogram our data projecting along the two slowest degrees of freedom (as found by tICA). You have to do this in a python script.
Step6: Clustering
Conformations need to be clustered into states (sometimes written as microstates). We cluster based on the tICA projections to group conformations that interconvert rapidly. Note that we transform our trajectories from the 4-dimensional tICA space into a 1-dimensional cluster index.
Step7: MSM
We can construct an MSM from the labeled trajectories
Step8: Plot Free Energy Landscape
Subsequent plotting and analysis should be done from Python | Python Code:
# Work in a temporary directory
import tempfile
import os
os.chdir(tempfile.mkdtemp())
# Since this is running from an IPython notebook,
# we prefix all our commands with "!"
# When running on the command line, omit the leading "!"
! msmb -h
Explanation: Modeling dynamics of FS Peptide
This example shows a typical, basic usage of the MSMBuilder command line to model dynamics of a protein system.
End of explanation
! msmb FsPeptide --data_home ./
! tree
Explanation: Get example data
End of explanation
# Remember '\' is the line-continuation marker
# You can enter this command on one line
! msmb DihedralFeaturizer \
--out featurizer.pkl \
--transformed diheds \
--top fs_peptide/fs-peptide.pdb \
--trjs "fs_peptide/*.xtc" \
--stride 10
Explanation: Featurization
The raw (x, y, z) coordinates from the simulation do not respect the translational and rotational symmetry of our problem. A Featurizer transforms cartesian coordinates into other representations. Here we use the DihedralFeaturizer to turn our data into phi and psi dihedral angles. Observe that the 264*3-dimensional space is reduced to 84 dimensions.
End of explanation
! msmb RobustScaler \
-i diheds \
--transformed scaled_diheds.h5
Explanation: Preprocessing
Since the range of values in our raw data can vary widely from feature to feature, we can scale values to reduce bias. Here we use the RobustScaler to center and scale our dihedral angles by their respective interquartile ranges.
End of explanation
! msmb tICA -i scaled_diheds.h5 \
--out tica_model.pkl \
--transformed tica_trajs.h5 \
--n_components 4 \
--lag_time 2
Explanation: Intermediate kinetic model: tICA
tICA is similar to principal component analysis (see "tICA vs. PCA" example). Note that the 84-dimensional space is reduced to 4 dimensions.
End of explanation
from msmbuilder.dataset import dataset
ds = dataset('tica_trajs.h5')
%matplotlib inline
import msmexplorer as msme
import numpy as np
txx = np.concatenate(ds)
_ = msme.plot_histogram(txx)
Explanation: tICA Histogram
We can histogram our data projecting along the two slowest degrees of freedom (as found by tICA). You have to do this in a python script.
End of explanation
! msmb MiniBatchKMeans -i tica_trajs.h5 \
--transformed labeled_trajs.h5 \
--out clusterer.pkl \
--n_clusters 100 \
--random_state 42
Explanation: Clustering
Conformations need to be clustered into states (sometimes written as microstates). We cluster based on the tICA projections to group conformations that interconvert rapidly. Note that we transform our trajectories from the 4-dimensional tICA space into a 1-dimensional cluster index.
End of explanation
! msmb MarkovStateModel -i labeled_trajs.h5 \
--out msm.pkl \
--lag_time 2
Explanation: MSM
We can construct an MSM from the labeled trajectories
End of explanation
from msmbuilder.utils import load
msm = load('msm.pkl')
clusterer = load('clusterer.pkl')
assignments = clusterer.partial_transform(txx)
assignments = msm.partial_transform(assignments)
from matplotlib import pyplot as plt
msme.plot_free_energy(txx, obs=(0, 1), n_samples=10000,
pi=msm.populations_[assignments],
xlabel='tIC 1', ylabel='tIC 2')
plt.scatter(clusterer.cluster_centers_[msm.state_labels_, 0],
clusterer.cluster_centers_[msm.state_labels_, 1],
s=1e4 * msm.populations_, # size by population
c=msm.left_eigenvectors_[:, 1], # color by eigenvector
cmap="coolwarm",
zorder=3
)
plt.colorbar(label='First dynamical eigenvector')
plt.tight_layout()
Explanation: Plot Free Energy Landscape
Subsequent plotting and analysis should be done from Python
End of explanation |
18 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using a support vector machine for sweep model selection
This example is similar to demographicModelSelectionExample.ipynb in that we are going to use supervised machine learning to discriminate between three classes of simulated data. But here rather than thinking about demography we are trying to determine whether a given locus has experienced a recent selective sweep or not, and whether this sweep was driven by de novo mutation (i.e. a classic "hard sweep" [1]) or an allele that was previously segregating in the population under drift but then became beneficial after some environmental change (i.e. a "soft sweep" [2]).
This example is a little bit more practical than our other example, where we were selecting a demographic model on the basis of a single locus rather than data from many loci as is typically done. Determining whether a locus has been recently impacted by positive selection is a common problem in population genetics, and this example shows that machine learning can be used to attack this problem without an inordinate amount of coding (modulo some caveats, discussed below)
Another difference between this example and our previous one is that we will use a support vector machine (SVM) for this task rather than a random forest, simply to demonstrate the proper use of this tool using scikit-learn. As we will see, switching between ML tools is very easy in scikit-learn.
Preliminaries
The road map here will be to 1) do some simulation under our three models (no sweep, soft sweep, and hard sweep), 2) train a classifier to distinguish among those models, 3) test that classifier with new simulation data, and 4) graphically present how well our trained classifier works.
To do this we will use coalescent simulations as implemented in our discoal software tool and for the ML side of things we will use the scikit-learn package. As before, we will use Dick Hudson's sample_stats program to calculate summary statistics, which we have included in our tarball containing his ms software. Let's start by installing these dependencies (if you don't have them installed already)
Install and compile sample_stats
We have put a copy of the ms tarball in this repo, so the following should work upon cloning. (Note that this step is not required if you have already gone through demographicModelSelectionExample.ipynb.)
Step1: Install and compile discoal
We have to install the coalescent simulator discoal which we will use to simulate loci with and without selective sweeps. This is obtained from our github page as follows
Step2: Install scikit-learn
If you use anaconda or have gone through any of our other examples, you may already have these modules installed, but if not you can install with either of the following
Step3: or if you don't use conda, you can use pip to install scikit-learn with
Step4: Step 1
Step5: Step 2
Step6: That's it! The classifier is trained. This SVM uses a radial basis kernel function which allows for non-linear classification. The gamma parameter is a hyperparameter of this kernel function, and C is the SVM's regularization parameter, which governs the "softness" of the separating margin. (An explanation of these and other concepts integral to understanding the guts of an SVM is beyond the scope of this example, though scikit-learn provides a nice fairly accessible tutorial with more example code here
Step7: Above we can see which regions of our feature space are assigned to each class
Step8: Meh. Let's again see if we can do better by using all of the statistics calculated by Hudson's sample_stats
Step9: Hmm, that didn't help all that much. But there is still room for improvement.
Step 4 | Python Code:
#untar and compile sample_stats
!tar zxf ms.tar.gz; cd msdir; gcc -o sample_stats sample_stats.c tajd.c -lm
#now move the program into the current working dir
!mv msdir/sample_stats .
Explanation: Using a support vector machine for sweep model selection
This example is similar to demographicModelSelectionExample.ipynb in that we are going to use supervised machine learning to discriminate between three classes of simulated data. But here rather than thinking about demography we are trying to determine whether a given locus has experienced a recent selective sweep or not, and whether this sweep was driven by de novo mutation (i.e. a classic "hard sweep" [1]) or an allele that was previously segregating in the population under drift but then became beneficial after some environmental change (i.e. a "soft sweep" [2]).
This example is a little bit more practical than our other example, where we were selecting a demographic model on the basis of a single locus rather than data from many loci as is typically done. Determining whether a locus has been recently impacted by positive selection is a common problem in population genetics, and this example shows that machine learning can be used to attack this problem without an inordinate amount of coding (modulo some caveats, discussed below)
Another difference between this example and our previous one is that we will use a support vector machine (SVM) for this task rather than a random forest, simply to demonstrate the proper use of this tool using scikit-learn. As we will see, switching between ML tools is very easy in scikit-learn.
Preliminaries
The road map here will be to 1) do some simulation under our three models (no sweep, soft sweep, and hard sweep), 2) train a classifier to distinguish among those models, 3) test that classifier with new simulation data, and 4) graphically present how well our trained classifier works.
To do this we will use coalescent simulations as implemented in our discoal software tool and for the ML side of things we will use the scikit-learn package. As before, we will use Dick Hudson's sample_stats program to calculate summary statistics, which we have included in our tarball containing his ms software. Let's start by installing these dependencies (if you don't have them installed already)
Install and compile sample_stats
We have put a copy of the ms tarball in this repo, so the following should work upon cloning. (Note that this step is not required if you have already gone through demographicModelSelectionExample.ipynb.)
End of explanation
#download discoal and compile it
!wget https://github.com/kern-lab/discoal/archive/master.zip; unzip master.zip; cd discoal-master; make
#or, for our mac OS X users and any others who use curl instead of wget
!curl -L -O https://github.com/kern-lab/discoal/archive/master.zip; unzip master.zip; cd discoal-master; make
#now move discoal into the current working dir
!mv discoal-master/discoal .
Explanation: Install and compile discoal
We have to install the coalescent simulator discoal which we will use to simulate loci with and without selective sweeps. This is obtained from our github page as follows:
End of explanation
!conda install scikit-learn --yes
Explanation: Install scikit-learn
If you use anaconda or have gone through any of our other examples, you may already have these modules installed, but if not you can install with either of the following:
End of explanation
!pip install -U scikit-learn
Explanation: or if you don't use conda, you can use pip to install scikit-learn with
End of explanation
#simulate under the equilibrium model -- could also do this with ms
!./discoal 20 2000 1000 -t 100 -r 100 | ./sample_stats > no_sweep.msOut.stats
#simulate under the soft sweep model with a selection coefficient 2Ns=250
#and an initial selected frequency randomly drawn from (0, 0.2]
!./discoal 20 2000 1000 -t 100 -r 100 -ws 0 -Pa 100 500 -i 4 -Pf 0 0.2 | ./sample_stats > soft_sweep.msOut.stats
#simulate under the hard sweep model with a selection coefficient 2Ns=250
!./discoal 20 2000 1000 -t 100 -r 100 -ws 0 -Pa 100 500 -i 4 | ./sample_stats > hard_sweep.msOut.stats
#now lets suck up the data columns we want for each of these files, and create one big training set; we will use numpy for this
# note that we are only using two columns of the data- these correspond to Tajima's D and Fay & Wu's H
import numpy as np
X1 = np.loadtxt("no_sweep.msOut.stats",usecols=(5,9))
X2 = np.loadtxt("soft_sweep.msOut.stats",usecols=(5,9))
X3 = np.loadtxt("hard_sweep.msOut.stats",usecols=(5,9))
X = np.concatenate((X1,X2,X3))
#create associated 'labels' -- these will be the targets for training
y = [0]*len(X1) + [1]*len(X2) + [2]*len(X3)
Y = np.array(y)
#the last step in this process will be to shuffle the data, and then split it into a training set and a testing set
#the testing set will NOT be used during training, and will allow us to check how well the classifier is doing
#scikit-learn has a very convenient function for doing this shuffle and split operation
#
# will will keep out 25% of the data for testing
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X,Y,test_size=0.25)
Explanation: Step 1: create a training set and a testing set
We will create a training set using simulations under three evolutionary scenarios: 1) pure neutrality, 2) soft selective sweeps, and 3) hard selective sweeps.
These simulations use discoal (which runs very similarly to ms) and we will summarize those simulations using the sample_stats program that Hudson provides. Be patient-- the simulations with sweeps, especially soft sweeps, may take some time (~5 min on my old laptop).
End of explanation
#from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
#clf = RandomForestClassifier(n_estimators=100,n_jobs=10)
clf = svm.SVC(kernel='rbf', gamma=0.1, C=1)
clf = clf.fit(X_train, Y_train)
Explanation: Step 2: train our classifier and visualize decision surface
Now that we have a training and testing set ready to go, we can move on to training our classifier. For this example we will use a Support Vector Machine (Cortes and Vapnik 1995). This is all implemented in scikit-learn and so the code is brief.
End of explanation
#These two functions (taken from scikit-learn.org) plot the decision boundaries for a classifier.
def plot_contours(ax, clf, xx, yy, **params):
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
out = ax.contourf(xx, yy, Z, **params)
return out
def make_meshgrid(x, y, h=.05):
x_min, x_max = x.min() - 1, x.max() + 1
y_min, y_max = y.min() - 1, y.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return xx, yy
#Let's do the plotting
import matplotlib.pyplot as plt
fig,ax= plt.subplots(1,1)
X0, X1 = X[:, 0], X[:, 1]
xx, yy = make_meshgrid(X0, X1, h=0.2)
plot_contours(ax, clf, xx, yy, cmap=plt.cm.coolwarm, alpha=0.8)
ax.scatter(X_test[:, 0], X_test[:, 1], c=Y_test, cmap=plt.cm.coolwarm, edgecolors='k')
ax.set_xlabel(r"Tajima's $D$", fontsize=14)
ax.set_ylabel(r"Fay and Wu's $H$", fontsize=14)
ax.set_xticks(())
ax.set_yticks(())
ax.set_title("Classifier decision surface", fontsize=14)
plt.show()
Explanation: That's it! The classifier is trained. This SVM uses a radial basis kernel function which allows for non-linear classification. The gamma parameter is a hyperparameter of this kernel function, and C is the SVM's regularization parameter, which governs the "softness" of the separating margin. (An explanation of these and other concepts integral to understanding the guts of an SVM is beyond the scope of this example, though scikit-learn provides a nice fairly accessible tutorial with more example code here: http://scikit-learn.org/stable/modules/svm.html.) The values of these parameters were arbitrarily chosen, but work well enough to get the job done as we will see. We will also demonstrate a straightforward approach to selecting more optimal values later on.
Confession: the real reason we are using only two summary statistics right here is because it makes it really easy to visualize that classifier's decision surface: which regions of the feature space would be assigned to which class? Let's have a look! This code is identical to the decision-surface plotting code from demographicModelSelectionExample.ipynb even though we have switched to an SVM--scikit-learn very nicely maintains abstraction of different types of classifiers which makes these sorts of changes headache-free. The SVM classifier is a bit slower in this case so it might take a couple minutes.
(Note: I have increased the h argument for the call to make_meshgrid below, coarsening the contour plot in the interest of efficiency. Decreasing this will yield a smoother plot, but may take a while and use up a lot more memory. Adjust at your own risk!)
End of explanation
from sklearn.preprocessing import normalize
#here's the confusion matrix function
def makeConfusionMatrixHeatmap(data, title, trueClassOrderLs, predictedClassOrderLs, ax):
data = np.array(data)
data = normalize(data, axis=1, norm='l1')
heatmap = ax.pcolor(data, cmap=plt.cm.Blues, vmin=0.0, vmax=1.0)
for i in range(len(predictedClassOrderLs)):
for j in reversed(range(len(trueClassOrderLs))):
val = 100*data[j, i]
if val > 50:
c = '0.9'
else:
c = 'black'
ax.text(i + 0.5, j + 0.5, '%.2f%%' % val, horizontalalignment='center', verticalalignment='center', color=c, fontsize=9)
cbar = plt.colorbar(heatmap, cmap=plt.cm.Blues, ax=ax)
cbar.set_label("Fraction of simulations assigned to class", rotation=270, labelpad=20, fontsize=11)
# put the major ticks at the middle of each cell
ax.set_xticks(np.arange(data.shape[1]) + 0.5, minor=False)
ax.set_yticks(np.arange(data.shape[0]) + 0.5, minor=False)
ax.axis('tight')
ax.set_title(title)
#labels
ax.set_xticklabels(predictedClassOrderLs, minor=False, fontsize=9, rotation=45)
ax.set_yticklabels(reversed(trueClassOrderLs), minor=False, fontsize=9)
ax.set_xlabel("Predicted class")
ax.set_ylabel("True class")
#now the actual work
#first get the predictions
preds=clf.predict(X_test)
counts=[[0.,0.,0.],[0.,0.,0.],[0.,0.,0.]]
for i in range(len(Y_test)):
counts[Y_test[i]][preds[i]] += 1
counts.reverse()
classOrderLs=['equil','soft','hard']
#now do the plotting
fig,ax= plt.subplots(1,1)
makeConfusionMatrixHeatmap(counts, "Confusion matrix", classOrderLs, classOrderLs, ax)
plt.show()
Explanation: Above we can see which regions of our feature space are assigned to each class: dark blue shaded areas will be classified as equilibrium, faint blue as soft sweeeps, and red as hard sweeps. Note that SVMs, like random forests, are able to produce non-linear decision boundaries. Looks like this might be a fairly tough problem, so let's try to quantify our accuracy.
Step 3: benchmark our classifier
The last step of the process is to use our trained classifier to predict which models our test data are drawn from. Recall that the classifier hasn't seen these test data so this should be a fair test of how well the classifier will perform on any new data we throw at it in the future. We will visualize performance using a confusion matrix. The code is all identical to the corresponding section in demographicModelSelectionExample.ipynb, with the exception of the class names.
End of explanation
X1 = np.loadtxt("no_sweep.msOut.stats",usecols=(1,3,5,7,9))
X2 = np.loadtxt("soft_sweep.msOut.stats",usecols=(1,3,5,7,9))
X3 = np.loadtxt("hard_sweep.msOut.stats",usecols=(1,3,5,7,9))
X = np.concatenate((X1,X2,X3))
#create associated 'labels' -- these will be the targets for training
y = [0]*len(X1) + [1]*len(X2) + [2]*len(X3)
Y = np.array(y)
X_train, X_test, Y_train, Y_test = train_test_split(X,Y,test_size=0.1)
clf = svm.SVC(kernel='rbf', gamma=0.1, C=1)
clf = clf.fit(X_train, Y_train)
preds=clf.predict(X_test)
counts=[[0.,0.,0.],[0.,0.,0.],[0.,0.,0.]]
for i in range(len(Y_test)):
counts[Y_test[i]][preds[i]] += 1
counts.reverse()
fig,ax= plt.subplots(1,1)
makeConfusionMatrixHeatmap(counts, "Confusion matrix", classOrderLs, classOrderLs, ax)
plt.show()
Explanation: Meh. Let's again see if we can do better by using all of the statistics calculated by Hudson's sample_stats
End of explanation
from sklearn.model_selection import GridSearchCV
## insert the grid search code here
param_grid = [
{'C': [0.125, 0.25, 0.5, 1, 2, 4, 8],
'gamma': [0.0125, 0.025, 0.05, 0.1, 0.2, 0.4, 0.8],
'kernel': ['rbf']},
]
clf = svm.SVC()
clf = GridSearchCV(clf, param_grid)
clf.fit(X_train,Y_train)
preds=clf.predict(X_test)
counts=[[0.,0.,0.],[0.,0.,0.],[0.,0.,0.]]
for i in range(len(Y_test)):
counts[Y_test[i]][preds[i]] += 1
counts.reverse()
fig,ax= plt.subplots(1,1)
makeConfusionMatrixHeatmap(counts, "Confusion matrix", classOrderLs, classOrderLs, ax)
plt.show()
Explanation: Hmm, that didn't help all that much. But there is still room for improvement.
Step 4: Improving accuracy using a grid search of SVM hyperparameters
This section title sounds fancy but this is actually pretty straightforward: we want to pick optimal values of the gamma and C hyperparameters used to train our SVM. This is typically done by examining a grid of values, which scikit-learn makes very easy for us. This will take a bit of CPU time as we are training a support vector machine for each combination of these two parameters along our grid. But hang in there, with this dataset and it should take no more than a minute or two.
End of explanation |
19 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Dawid-Skene model with priors
The Dawid-Skene model (1979) is perhaps one of the first models to discover true item states/effects from multiple noisy measurements. Since then, there have been multiple models that improve over the basic model. This notebook covers the Dawid-Skene model which has been enhanced with priors.
The model follows implementation in Rebecca J. Passonneau, Bob Carpenter, "The Benefits of a Model of Annotation", TACL, 2014.
Introduction
In healthcare, a number of patients can receive potentially noisy judgments from several professionals. In computer science, work items of different difficulty get labeled by multiple annotators of different skill. In this notebook we will attempt to recover true work item labels from noisy annotator input.
The primary goal is to recover the true item states. The secondary goal is to estimate various additional factors of potential interest. We will use a probabilistic programming approach in an attempt to solve the problem.
Step1: Data
Load also the data matrix with following dimensions
Step2: Let's create the necessary data structures. In particular, we will convert the data cube into triplet format. One data point with index n allows to access the following information
Step3: Comparing true item labels and majority vote estimated labels one by one is tedious. Computing accuracy gives a single performance metric but does not reveal where the mistakes are made (e.g. which categories tend to be confused) and by how much. A confusion matrix with majority vote estimates will serve as our baseline
Step4: Model
With the data loaded and baseline set, we can now start building the Dawid-Skene model. We will start by setting the top level priors
Step5: Now, the interesting part -- the definition of the model.
First, we will need two random variables to encode class prevalence (pi) and annotator confusion matrices (theta). The two random variables can be naturally modeled with Dirichlet.
Second, we will define a variable for the true/hidden category of each work item. The Categorical distribution fits our purpose well: it models a work item with K possible states.
Finally, a special variable for observed data brings together all random variables. This is the variable (Categorical) where the data is injected. The parametrization of the variable needs to be explained
Step6: With model defined, we also need to set up the inference machinery. The variables of interest (pi, theta and z) will be divided in two groups
Step7: Results
Let's get a global overview of the trace. On the left side of the figure, posterior distributions; on the right - individual samples. The samples subplots should show "uniform band of noise" as the sampler locks around the true variable state. It is important to not see any jumps, switches or steady increase/decrease.
Aside from the class prevalence variable ("pi"), these plots -- the category and theta posteriors -- are of little utility here. We will explore the other variables in another form.
Step8: We will take 1000 last samples from posterior for random variable ("z"). The majority vote from 1000 samples will give us our estimate of true item labels.
Step9: The confusion matrix tells us how good our estimate is with respect to the ground truth. Compare it to the baseline
Step10: Finally, let's plot the confusion matrices of annotators. Notice the dominant diagonal nature of matrices -- measure of annotator performance. Compare the first annotator (j=0) and the last one (j=4). | Python Code:
%matplotlib inline
import pymc3 as pm
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
Explanation: The Dawid-Skene model with priors
The Dawid-Skene model (1979) is perhaps one of the first models to discover true item states/effects from multiple noisy measurements. Since then, there have been multiple models that improve over the basic model. This notebook covers the Dawid-Skene model which has been enhanced with priors.
The model follows implementation in Rebecca J. Passonneau, Bob Carpenter, "The Benefits of a Model of Annotation", TACL, 2014.
Introduction
In healthcare, a number of patients can receive potentially noisy judgments from several professionals. In computer science, work items of different difficulty get labeled by multiple annotators of different skill. In this notebook we will attempt to recover true work item labels from noisy annotator input.
The primary goal is to recover the true item states. The secondary goal is to estimate various additional factors of potential interest. We will use a probabilistic programming approach in an attempt to solve the problem.
End of explanation
data = np.load(pm.get_data('extrahard_MC_500_5_4.npz.npy'))
z_true = np.load(pm.get_data('extrahard_MC_500_5_4_reference_classes.npy'))
I = data.shape[0] # number of items
J = data.shape[1] # number of annotators
K = data.shape[2] # number of classes
N = I * J
Explanation: Data
Load also the data matrix with following dimensions: work items, annotators, categories. The data for this notebook has been taken from https://github.com/abhishekmalali/questioning-strategy-classification/tree/master/data
Note: The data in this notebook is organized in matrix where each work item gets exactly one response for each work item. This is often not possible in practice. The discussed model accepts triplets of data: (work item, annotator, response) which relaxes the constraint to have all observations.
End of explanation
# create data triplets
jj = list() # annotator IDs
ii = list() # item IDs
y = list() # response
# initialize true category with majority votes
z_init = np.zeros( I, dtype=np.int64 )
# create data triplets
for i in range( I ):
ks = list()
for j in range( J ):
dat = data[ i, j, : ]
k = np.where( dat == 1 )[0][0]
ks.append( k )
ii.append( i )
jj.append( j )
y.append( k )
# getting maj vote for work item i (dealing with numpy casts)
z_init[ i ] = np.bincount( np.array( ks ) ).argmax()
Explanation: Let's create the necessary data structures. In particular, we will convert the data cube into triplet format. One data point with index n allows to access the following information: jj[n] as annotator ID, providing his/her vote y[n] for item ii[n].
At the same time, we compute the majority vote estimate. This will serve both as a baseline and as initialization for our model.
End of explanation
confMat = confusion_matrix( z_true, z_init )
print( "Majority vote estimate of true category:\n" , confMat )
Explanation: Comparing true item labels and majority vote estimated labels one by one is tedious. Computing accuracy gives a single performance metric but does not reveal where the mistakes are made (e.g. which categories tend to be confused) and by how much. A confusion matrix with majority vote estimates will serve as our baseline:
End of explanation
# class prevalence (flat prior)
alpha = np.ones( K )
# individual annotator confusion matrices - dominant diagonal
beta = np.ones( (K,K) ) + np.diag( np.ones(K) )
Explanation: Model
With the data loaded and baseline set, we can now start building the Dawid-Skene model. We will start by setting the top level priors: class prevalence and annotator-specific confusion matrices. The two priors are of secondary interest.
The class prevalence prior tells the proportion of categories in the data. Since we are completely ignorant about category proportions, it is meaningful to set a flat distribution.
The annotator-specific confusion matrices will "describe" every annotator. Notably, a confusion matrix for an annotator j tells us which categories the annotator is expert (very high value on diagonal) and where his expertise is limited (relatively small value on diagonal and relatively big values off-diagonal). We will initialize confusion matrices with uniform values with slightly dominant diagonal -- our annotators are expected to provide meaningful labels.
End of explanation
model = pm.Model()
with model:
pi = pm.Dirichlet( 'pi', a=alpha, shape=K )
theta = pm.Dirichlet( 'theta', a=beta, shape=(J,K,K) )
z = pm.Categorical( 'z', p=pi, shape=I, testval=z_init )
y_obs = pm.Categorical( 'y_obs', p=theta[ jj, z[ ii ] ], observed=y )
Explanation: Now, the interesting part -- the definition of the model.
First, we will need two random variables to encode class prevalence (pi) and annotator confusion matrices (theta). The two random variables can be naturally modeled with Dirichlet.
Second, we will define a variable for the true/hidden category of each work item. The Categorical distribution fits our purpose well: it models a work item with K possible states.
Finally, a special variable for observed data brings together all random variables. This is the variable (Categorical) where the data is injected. The parametrization of the variable needs to be explained: the observation y[n] is generated according to a Categorical distribution by annotator jj[n] for item ii[n], where the true label is z[ ii[n] ].
The following block will build the model only but won't do any inference.
End of explanation
with model:
step1 = pm.Metropolis( vars=[pi,theta] )
step2 = pm.CategoricalGibbsMetropolis( vars=[z] )
trace = pm.sample( 5000, step=[step1, step2], progressbar=True )
Explanation: With model defined, we also need to set up the inference machinery. The variables of interest (pi, theta and z) will be divided in two groups: continuous (pi,theta) and discrete (z). The step methods are different: Metropolis or NUTS for former and CategoricalGibbsMetropolis for latter.
Note: Running the following block will perform inference for our variables of interest and store the results in the trace variable. The trace variable will contain a wealth of information that will be useful to perform diagnostics and to get posteriors for our three hidden variables -- class prevalence, annotator confusion matrices and true categories for all work items.
End of explanation
pm.traceplot( trace, varnames=['pi'] )
Explanation: Results
Let's get a global overview of the trace. On the left side of the figure, posterior distributions; on the right - individual samples. The samples subplots should show "uniform band of noise" as the sampler locks around the true variable state. It is important to not see any jumps, switches or steady increase/decrease.
Besides the class prevalence variable ("pi"), the categories and theta posteriors, the plots are of little utility. We will explore other variables in other form.
End of explanation
z = trace['z'][-1000:,:]
z_hat = np.zeros( I )
for i in range( I ):
z_hat[ i ] = np.bincount( z[:,i] ).argmax()
Explanation: We will take the last 1000 posterior samples of the random variable ("z"). A majority vote over those 1000 samples gives our estimate of the true item labels.
End of explanation
confMat = confusion_matrix( z_true, z_hat )
print( "Dawid-Skene estimate of true category:\n", confMat )
Explanation: The confusion matrix tells us how good our estimate is with respect to the ground truth. Compare it to the baseline: a better estimate has fewer off-diagonal values (and more on the main diagonal).
End of explanation
np.set_printoptions(precision=2)
for j in range( J ):
print( "Annotator j=" + str(j) )
Cj = trace['theta'][-1,j]
print( Cj )
Explanation: Finally, let's look at the confusion matrices of the annotators. Notice the dominant diagonal of each matrix -- a measure of annotator performance. Compare the first annotator (j=0) and the last one (j=4).
End of explanation |
20 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quick Intro to Python
Math and Modules
In the space below, use Python as a calculator.
Now let's try some more advanced functions.
Step1: The standard Python distribution only comes with bare-bones capabilities. Other functionality can be accessed through modules using the import command. To access a function, use the following syntax.
import <module>
<module>.<function>
Here's an example.
Step2: What if you don't know how to use a function? You can access the documentation.
? <module>.<function>
Let's look at the documentation of math.log10
Step3: Strings
Strings are denoted by ' or ". Set the variable my_name equal to your full name.
Step4: Strings can all be indexed.
Step5: Try and pick out all of your initials.
Step6: SciPy and NumPy
SciPy and NumPy are two highly optimized core packages used for scientific computing in the Python community.
* NumPy - Numerical computation in Python
* SciPy - Scientific computation in Python
Let's import numpy and make a numpy array.
Step7: Use the cell below to manipulate the array we just created.
Step8: Let's do some simple matrix multiplication using np.dot.
$$ \mathbf{A} \overrightarrow{x} = \overrightarrow{y}$$
First check out the documentation of np.dot.
Step9: Use the cell below to call another function from NumPy.
Scikit-Learn
Scikit-Learn, a.k.a. sklearn, is a scientific toolkit (there are many others) for machine learning, and it is built on SciPy and NumPy.
Below is an example from scikit-learn for linear regression.
This example also using the plotting library matplotlib to display the results. | Python Code:
# This call fails with a NameError: log10 is not a built-in, it lives in the math module (imported below).
log10(10)
Explanation: Quick Intro to Python
Math and Modules
In the space below, use Python as a calculator.
Now let's try some more advanced functions.
End of explanation
import math
math.log10(10)
Explanation: The standard Python distribution only comes with bare-bones capabilities. Other functionality can be accessed through modules using the import command. To access a function, use the following syntax.
import <module>
<module>.<function>
Here's an example.
End of explanation
?? math.log10
Explanation: What if you don't know how to use a function? You can access the documentation.
? <module>.<function>
Let's look at the documentation of math.log10
End of explanation
my_name = 'Ada Lovelace'  # placeholder -- replace with your own full name
intro = 'Hello, my name is '
print(intro + my_name + '.')
Explanation: Strings
Strings are denoted by ' or ". Set the variable my_name equal to your full name.
End of explanation
first_initial = 'My first initial is '
print(first_initial + my_name[0] + '.')
Explanation: Strings can all be indexed.
End of explanation
initials = 'My initials are '
print(initials + my_name[0] + my_name[my_name.index(' ') + 1] + ".")  # first letter of the last name
Explanation: Try and pick out all of your initials.
End of explanation
import numpy as np
B = np.ones((3, 3))
print(B)
Explanation: SciPy and NumPy
SciPy and NumPy are two highly optimized core packages used for scientific computing in the Python community.
* NumPy - Numerical computation in Python
* SciPy - Scientific computation in Python
Let's import numpy and make a numpy array.
End of explanation
B + B
Explanation: Use the cell below to manipulate the array we just created.
End of explanation
? np.dot
N = 5
A = np.eye(N) * 2
x = np.arange(N)
print('A =')
print(A)
print('x =')
print(x)
y = np.dot(A, x)
print('y =')
print(y)
Explanation: Let's do some simple matrix multiplication using np.dot.
$$ \mathbf{A} \overrightarrow{x} = \overrightarrow{y}$$
First check out the documentation of np.dot.
End of explanation
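The notebook later asks the reader to "call another function from NumPy"; as a hedged sketch of what such a cell could contain (the specific functions np.linspace, np.mean and np.sum are illustrative choices only, not from the original notebook):
# One possible answer: build 11 evenly spaced points and summarize them.
pts = np.linspace(0, 1, 11)
print(pts)
print('mean =', np.mean(pts), ' sum =', np.sum(pts))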
%matplotlib inline
# Code source: Jaques Grobler
# License: BSD 3 clause
import matplotlib.pyplot as plt
from sklearn import datasets, linear_model
# Load the diabetes dataset
diabetes = datasets.load_diabetes()
# Use only one feature
diabetes_X = diabetes.data[:, np.newaxis]
diabetes_X_temp = diabetes_X[:, :, 2]
# Split the data into training/testing sets
diabetes_X_train = diabetes_X_temp[:-20]
diabetes_X_test = diabetes_X_temp[-20:]
# Split the targets into training/testing sets
diabetes_y_train = diabetes.target[:-20]
diabetes_y_test = diabetes.target[-20:]
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(diabetes_X_train, diabetes_y_train)
# Predict result
y = regr.predict(diabetes_X_test)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean square error
print("Residual sum of squares: %.2f"
% np.mean((y - diabetes_y_test) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(diabetes_X_test, diabetes_y_test))
# Plot outputs
plt.scatter(diabetes_X_test, diabetes_y_test, color='black')
plt.plot(diabetes_X_test, y, color='blue', linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
Explanation: Use the cell below to call another function from NumPy.
Scikit-Learn
Scikit-Learn, a.k.a. sklearn, is a scientific toolkit (there are many others) for machine learning, and it is built on SciPy and NumPy.
Below is an example from scikit-learn for linear regression.
This example also uses the plotting library matplotlib to display the results.
End of explanation |
21 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What is Machine Learning ?
The umbrella term "machine learning" describes methods for automated data analysis, developed by computer scientists and statisticians in response to the appearance of ever larger datasets.
The goal of automation has led to a very uniform terminology, enabling multiple algorithms to be implemented and compared on an equal footing.
Machine learning can be divided into two types
Step1: In SciKit-Learn, data contains the design matrix $X$, and is a numpy array of shape $(N, P)$
target contains the response variables $y$, and is a numpy array of shape $(N)$
Step2: Splitting the data
Step3: Other Example Datasets
SciKit-Learn provides 5 "toy" datasets for tutorial purposes, all load-able in the same way | Python Code:
% matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
digits.images.shape
print(digits.images[0])
plt.matshow(digits.images[23], cmap=plt.cm.Greys)
digits.data.shape
digits.target.shape
digits.target[23]
Explanation: What is Machine Learning ?
The umbrella term "machine learning" describes methods for automated data analysis, developed by computer scientists and statisticians in response to the appearance of ever larger datasets.
The goal of automation has led to a very uniform terminology, enabling multiple algorithms to be implemented and compared on an equal footing.
Machine learning can be divided into two types: supervised and unsupervised.
Supervised Learning
Supervised learning is also known as predictive learning. Given inputs $X$, the goal is to construct a machine that can accurately predict a set of outputs $y$.
The "supervision" refers to the education of the machine, via a training set $D$ of input-output pairs that we provide. Prediction accuracy is then tested on validation and test sets.
At the heart of the prediction machine is a model $M$ that can be trained to give accurate predictions.
The outputs $y$ are said to be response variables - predictions of $y$ will be generated by our model. The variables $y$ can be either categorical ("labels") or numerical (real numbers). When the $y$ are categorical, the problem is one of classification ("is this an image of a kitten, or a puppy?"). When the $y$ are numerical, the problem is a regression ("how should we interpolate between these values?").
Supervised learning is about making predictions by characterizing ${\rm Pr}(y_k|x_k,D,M)$.
<img src="figures/supervised_workflow.svg" width=100%>
Unsupervised Learning
Also known as descriptive learning. Here the goal is "knowledge discovery" - detection of patterns in a dataset, that can then be used in supervised/model-based analyses.
Unsupervised learning is about density estimation - characterizing ${\rm Pr}(x|\theta,H)$.
Examples of unsupervised learning activities include:
Clustering analysis of the $x$.
Dimensionality reduction: principal component analysis (PCA), independent component analysis, etc. (a brief PCA sketch follows this overview).
In this lesson we will focus on supervised learning, since it is arguably somewhat closer to our goal of gaining understanding from data.
Data Representations
Each input $x$ is said to have $P$ features (or attributes), and represents a sample drawn from a population. Each sample input $x$ is associated with an output $y$.
Our $N$ input samples are packaged into $N \times P$ design matrix $X$ (with $N$ rows and $P$ columns).
<img src="figures/data_representation.svg" width=100%>
Dataset Split
We train our machine learning models on a subset of the data, and then test them against the remainder.
<img src="figures/train_test_split_matrix.svg" width=100%>
Simple Example: The Digits Dataset
Let's take a look at one of the SciKit-Learn example datasets, digits
End of explanation
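As a brief, hedged sketch of the dimensionality reduction idea mentioned in the overview above (this block is illustrative and not part of the original notebook), PCA from scikit-learn can project the 64-dimensional digits data down to 2 dimensions:
# Illustrative only: project the 64-dimensional digit images onto 2 principal components.
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
digits_2d = pca.fit_transform(digits.data)
plt.scatter(digits_2d[:, 0], digits_2d[:, 1], c=digits.target, cmap=plt.cm.Paired, s=8)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()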
print(digits.DESCR)
Explanation: In SciKit-Learn, data contains the design matrix $X$, and is a numpy array of shape $(N, P)$
target contains the response variables $y$, and is a numpy array of shape $(N)$
End of explanation
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)
X_train.shape,y_train.shape
X_test.shape,y_test.shape
?train_test_split
Explanation: Splitting the data:
End of explanation
from sklearn.datasets import load_boston
boston = load_boston()
print(boston.DESCR)
# Visualizing the Boston house price data:
import corner
X = boston.data
y = boston.target
plot = np.concatenate((X,np.atleast_2d(y).T),axis=1)
labels = np.append(boston.feature_names,'MEDV')
corner.corner(plot,labels=labels);
Explanation: Other Example Datasets
SciKit-Learn provides 5 "toy" datasets for tutorial purposes, all load-able in the same way:
Name | Description
------------|:---------------------------------------
boston | Boston house-prices, with 13 associated measurements (R)
iris | Fisher's iris classifications (based on 4 characteristics) (C)
diabetes | Diabetes (x vs y) (R)
digits | Hand-written digits, 8x8 images with classifications (C)
linnerud | Linnerud: 3 exercise and 3 physiological data (R)
"R" and "C" indicate that the problem to be solved is either a regression or a classification, respectively.
End of explanation |
22 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Emojify!
Welcome to the second assignment of Week 2. You are going to use word vector representations to build an Emojifier.
Have you ever wanted to make your text messages more expressive? Your emojifier app will help you do that. So rather than writing "Congratulations on the promotion! Lets get coffee and talk. Love you!" the emojifier can automatically turn this into "Congratulations on the promotion! 👍 Lets get coffee and talk. ☕️ Love you! ❤️"
You will implement a model which inputs a sentence (such as "Let's go see the baseball game tonight!") and finds the most appropriate emoji to be used with this sentence (⚾️). In many emoji interfaces, you need to remember that ❤️ is the "heart" symbol rather than the "love" symbol. But using word vectors, you'll see that even if your training set explicitly relates only a few words to a particular emoji, your algorithm will be able to generalize and associate words in the test set to the same emoji even if those words don't even appear in the training set. This allows you to build an accurate classifier mapping from sentences to emojis, even using a small training set.
In this exercise, you'll start with a baseline model (Emojifier-V1) using word embeddings, then build a more sophisticated model (Emojifier-V2) that further incorporates an LSTM.
Let's get started! Run the following cell to load the package you are going to use.
Step1: 1 - Baseline model
Step2: Run the following cell to print sentences from X_train and corresponding labels from Y_train. Change index to see different examples. Because of the font the iPython notebook uses, the heart emoji may be colored black rather than red.
Step3: 1.2 - Overview of the Emojifier-V1
In this part, you are going to implement a baseline model called "Emojifier-v1".
<center>
<img src="images/image_1.png" style="width
Step4: Let's see what convert_to_one_hot() did. Feel free to change index to print out different values.
Step5: All the data is now ready to be fed into the Emojify-V1 model. Let's implement the model!
1.3 - Implementing Emojifier-V1
As shown in Figure (2), the first step is to convert an input sentence into the word vector representation, which then get averaged together. Similar to the previous exercise, we will use pretrained 50-dimensional GloVe embeddings. Run the following cell to load the word_to_vec_map, which contains all the vector representations.
Step6: You've loaded
Step8: Exercise
Step10: Expected Output
Step11: Run the next cell to train your model and learn the softmax parameters (W,b).
Step12: Expected Output (on a subset of iterations)
Step13: Expected Output
Step14: Amazing! Because adore has a similar embedding as love, the algorithm has generalized correctly even to a word it has never seen before. Words such as heart, dear, beloved or adore have embedding vectors similar to love, and so might work too---feel free to modify the inputs above and try out a variety of input sentences. How well does it work?
Note though that it doesn't get "not feeling happy" correct. This algorithm ignores word ordering, so is not good at understanding phrases like "not happy."
Printing the confusion matrix can also help understand which classes are more difficult for your model. A confusion matrix shows how often an example whose label is one class ("actual" class) is mislabeled by the algorithm with a different class ("predicted" class).
Step15: <font color='blue'>
What you should remember from this part
Step17: 2.1 - Overview of the model
Here is the Emojifier-v2 you will implement
Step18: Run the following cell to check what sentences_to_indices() does, and check your results.
Step20: Expected Output
Step22: Expected Output
Step23: Run the following cell to create your model and check its summary. Because all sentences in the dataset are less than 10 words, we chose max_len = 10. You should see your architecture, it uses "20,223,927" parameters, of which 20,000,050 (the word embeddings) are non-trainable, and the remaining 223,877 are. Because our vocabulary size has 400,001 words (with valid indices from 0 to 400,000) there are 400,001*50 = 20,000,050 non-trainable parameters.
Step24: As usual, after creating your model in Keras, you need to compile it and define the loss, optimizer and metrics you want to use. Compile your model using categorical_crossentropy loss, adam optimizer and ['accuracy'] metrics
Step25: It's time to train your model. Your Emojifier-V2 model takes as input an array of shape (m, max_len) and outputs probability vectors of shape (m, number of classes). We thus have to convert X_train (array of sentences as strings) to X_train_indices (array of sentences as list of word indices), and Y_train (labels as indices) to Y_train_oh (labels as one-hot vectors).
Step26: Fit the Keras model on X_train_indices and Y_train_oh. We will use epochs = 50 and batch_size = 32.
Step27: Your model should perform close to 100% accuracy on the training set. The exact accuracy you get may be a little different. Run the following cell to evaluate your model on the test set.
Step28: You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples.
Step29: Now you can try it on your own example. Write your own sentence below. | Python Code:
import numpy as np
from emo_utils import *
import emoji
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Emojify!
Welcome to the second assignment of Week 2. You are going to use word vector representations to build an Emojifier.
Have you ever wanted to make your text messages more expressive? Your emojifier app will help you do that. So rather than writing "Congratulations on the promotion! Lets get coffee and talk. Love you!" the emojifier can automatically turn this into "Congratulations on the promotion! 👍 Lets get coffee and talk. ☕️ Love you! ❤️"
You will implement a model which inputs a sentence (such as "Let's go see the baseball game tonight!") and finds the most appropriate emoji to be used with this sentence (⚾️). In many emoji interfaces, you need to remember that ❤️ is the "heart" symbol rather than the "love" symbol. But using word vectors, you'll see that even if your training set explicitly relates only a few words to a particular emoji, your algorithm will be able to generalize and associate words in the test set to the same emoji even if those words don't even appear in the training set. This allows you to build an accurate classifier mapping from sentences to emojis, even using a small training set.
In this exercise, you'll start with a baseline model (Emojifier-V1) using word embeddings, then build a more sophisticated model (Emojifier-V2) that further incorporates an LSTM.
Let's get started! Run the following cell to load the package you are going to use.
End of explanation
X_train, Y_train = read_csv('data/train_emoji.csv')
X_test, Y_test = read_csv('data/tesss.csv')
maxLen = len(max(X_train, key=len).split())
Explanation: 1 - Baseline model: Emojifier-V1
1.1 - Dataset EMOJISET
Let's start by building a simple baseline classifier.
You have a tiny dataset (X, Y) where:
- X contains 127 sentences (strings)
- Y contains an integer label between 0 and 4 corresponding to an emoji for each sentence
<img src="images/data_set.png" style="width:700px;height:300px;">
<caption><center> Figure 1: EMOJISET - a classification problem with 5 classes. A few examples of sentences are given here. </center></caption>
Let's load the dataset using the code below. We split the dataset between training (127 examples) and testing (56 examples).
End of explanation
index = 1
print(X_train[index], label_to_emoji(Y_train[index]))
Explanation: Run the following cell to print sentences from X_train and corresponding labels from Y_train. Change index to see different examples. Because of the font the iPython notebook uses, the heart emoji may be colored black rather than red.
End of explanation
Y_oh_train = convert_to_one_hot(Y_train, C = 5)
Y_oh_test = convert_to_one_hot(Y_test, C = 5)
Explanation: 1.2 - Overview of the Emojifier-V1
In this part, you are going to implement a baseline model called "Emojifier-v1".
<center>
<img src="images/image_1.png" style="width:900px;height:300px;">
<caption><center> Figure 2: Baseline model (Emojifier-V1).</center></caption>
</center>
The input of the model is a string corresponding to a sentence (e.g. "I love you"). In the code, the output will be a probability vector of shape (1,5), which you then pass through an argmax layer to extract the index of the most likely emoji output.
To get our labels into a format suitable for training a softmax classifier, let's convert $Y$ from its current shape $(m, 1)$ into a "one-hot representation" $(m, 5)$, where each row is a one-hot vector giving the label of one example. You can do so using this next code snippet. Here, Y_oh stands for "Y-one-hot" in the variable names Y_oh_train and Y_oh_test:
End of explanation
index = 50
print(Y_train[index], "is converted into one hot", Y_oh_train[index])
Explanation: Let's see what convert_to_one_hot() did. Feel free to change index to print out different values.
End of explanation
word_to_index, index_to_word, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
Explanation: All the data is now ready to be fed into the Emojify-V1 model. Let's implement the model!
1.3 - Implementing Emojifier-V1
As shown in Figure (2), the first step is to convert an input sentence into the word vector representation, which then get averaged together. Similar to the previous exercise, we will use pretrained 50-dimensional GloVe embeddings. Run the following cell to load the word_to_vec_map, which contains all the vector representations.
End of explanation
word = "cucumber"
index = 289846
print("the index of", word, "in the vocabulary is", word_to_index[word])
print("the", str(index) + "th word in the vocabulary is", index_to_word[index])
Explanation: You've loaded:
- word_to_index: dictionary mapping from words to their indices in the vocabulary (400,001 words, with the valid indices ranging from 0 to 400,000)
- index_to_word: dictionary mapping from indices to their corresponding words in the vocabulary
- word_to_vec_map: dictionary mapping words to their GloVe vector representation.
Run the following cell to check if it works.
End of explanation
# GRADED FUNCTION: sentence_to_avg
def sentence_to_avg(sentence, word_to_vec_map):
Converts a sentence (string) into a list of words (strings). Extracts the GloVe representation of each word
and averages its value into a single vector encoding the meaning of the sentence.
Arguments:
sentence -- string, one training example from X
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
Returns:
avg -- average vector encoding information about the sentence, numpy-array of shape (50,)
### START CODE HERE ###
# Step 1: Split sentence into list of lower case words (≈ 1 line)
words = sentence.lower().split()
# Initialize the average word vector, should have the same shape as your word vectors.
avg = np.zeros((50, ))
# Step 2: average the word vectors. You can loop over the words in the list "words".
for w in words:
avg += word_to_vec_map[w]
avg = avg / len(words)
### END CODE HERE ###
return avg
avg = sentence_to_avg("Morrocan couscous is my favorite dish", word_to_vec_map)
print("avg = ", avg)
Explanation: Exercise: Implement sentence_to_avg(). You will need to carry out two steps:
1. Convert every sentence to lower-case, then split the sentence into a list of words. X.lower() and X.split() might be useful.
2. For each word in the sentence, access its GloVe representation. Then, average all these values.
End of explanation
# GRADED FUNCTION: model
def model(X, Y, word_to_vec_map, learning_rate = 0.01, num_iterations = 400):
Model to train word vector representations in numpy.
Arguments:
X -- input data, numpy array of sentences as strings, of shape (m, 1)
Y -- labels, numpy array of integers between 0 and 7, numpy-array of shape (m, 1)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
learning_rate -- learning_rate for the stochastic gradient descent algorithm
num_iterations -- number of iterations
Returns:
pred -- vector of predictions, numpy-array of shape (m, 1)
W -- weight matrix of the softmax layer, of shape (n_y, n_h)
b -- bias of the softmax layer, of shape (n_y,)
np.random.seed(1)
# Define number of training examples
m = Y.shape[0] # number of training examples
n_y = 5 # number of classes
n_h = 50 # dimensions of the GloVe vectors
# Initialize parameters using Xavier initialization
W = np.random.randn(n_y, n_h) / np.sqrt(n_h)
b = np.zeros((n_y,))
# Convert Y to Y_onehot with n_y classes
Y_oh = convert_to_one_hot(Y, C = n_y)
# Optimization loop
for t in range(num_iterations): # Loop over the number of iterations
for i in range(m): # Loop over the training examples
### START CODE HERE ### (≈ 4 lines of code)
# Average the word vectors of the words from the i'th training example
avg = sentence_to_avg(X[i], word_to_vec_map)
# Forward propagate the avg through the softmax layer
z = np.dot(W, avg) + b
a = softmax(z)
# Compute cost using the i'th training label's one hot representation and "A" (the output of the softmax)
cost = - np.dot(Y_oh[i], np.log(a))
### END CODE HERE ###
# Compute gradients
dz = a - Y_oh[i]
dW = np.dot(dz.reshape(n_y,1), avg.reshape(1, n_h))
db = dz
# Update parameters with Stochastic Gradient Descent
W = W - learning_rate * dW
b = b - learning_rate * db
if t % 100 == 0:
print("Epoch: " + str(t) + " --- cost = " + str(cost))
pred = predict(X, Y, W, b, word_to_vec_map)
return pred, W, b
print(X_train.shape)
print(Y_train.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(X_train[0])
print(type(X_train))
Y = np.asarray([5,0,0,5, 4, 4, 4, 6, 6, 4, 1, 1, 5, 6, 6, 3, 6, 3, 4, 4])
print(Y.shape)
X = np.asarray(['I am going to the bar tonight', 'I love you', 'miss you my dear',
'Lets go party and drinks','Congrats on the new job','Congratulations',
'I am so happy for you', 'Why are you feeling bad', 'What is wrong with you',
'You totally deserve this prize', 'Let us go play football',
'Are you down for football this afternoon', 'Work hard play harder',
'It is suprising how people can be dumb sometimes',
'I am very disappointed','It is the best day in my life',
'I think I will end up alone','My life is so boring','Good job',
'Great so awesome'])
print(X.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(type(X_train))
Explanation: Expected Output:
<table>
<tr>
<td>
**avg= **
</td>
<td>
[-0.008005 0.56370833 -0.50427333 0.258865 0.55131103 0.03104983
-0.21013718 0.16893933 -0.09590267 0.141784 -0.15708967 0.18525867
0.6495785 0.38371117 0.21102167 0.11301667 0.02613967 0.26037767
0.05820667 -0.01578167 -0.12078833 -0.02471267 0.4128455 0.5152061
0.38756167 -0.898661 -0.535145 0.33501167 0.68806933 -0.2156265
1.797155 0.10476933 -0.36775333 0.750785 0.10282583 0.348925
-0.27262833 0.66768 -0.10706167 -0.283635 0.59580117 0.28747333
-0.3366635 0.23393817 0.34349183 0.178405 0.1166155 -0.076433
0.1445417 0.09808667]
</td>
</tr>
</table>
Model
You now have all the pieces to finish implementing the model() function. After using sentence_to_avg() you need to pass the average through forward propagation, compute the cost, and then backpropagate to update the softmax's parameters.
Exercise: Implement the model() function described in Figure (2). Assuming here that $Yoh$ ("Y one hot") is the one-hot encoding of the output labels, the equations you need to implement in the forward pass and to compute the cross-entropy cost are:
$$ z^{(i)} = W . avg^{(i)} + b$$
$$ a^{(i)} = softmax(z^{(i)})$$
$$ \mathcal{L}^{(i)} = - \sum_{k = 0}^{n_y - 1} Yoh^{(i)}_k * log(a^{(i)}_k)$$
It is possible to come up with a more efficient vectorized implementation. But since we are using a for-loop to convert the sentences one at a time into the avg^{(i)} representation anyway, let's not bother this time.
We provided you a function softmax().
End of explanation
pred, W, b = model(X_train, Y_train, word_to_vec_map)
print(pred)
Explanation: Run the next cell to train your model and learn the softmax parameters (W,b).
End of explanation
print("Training set:")
pred_train = predict(X_train, Y_train, W, b, word_to_vec_map)
print('Test set:')
pred_test = predict(X_test, Y_test, W, b, word_to_vec_map)
Explanation: Expected Output (on a subset of iterations):
<table>
<tr>
<td>
**Epoch: 0**
</td>
<td>
cost = 1.95204988128
</td>
<td>
Accuracy: 0.348484848485
</td>
</tr>
<tr>
<td>
**Epoch: 100**
</td>
<td>
cost = 0.0797181872601
</td>
<td>
Accuracy: 0.931818181818
</td>
</tr>
<tr>
<td>
**Epoch: 200**
</td>
<td>
cost = 0.0445636924368
</td>
<td>
Accuracy: 0.954545454545
</td>
</tr>
<tr>
<td>
**Epoch: 300**
</td>
<td>
cost = 0.0343226737879
</td>
<td>
Accuracy: 0.969696969697
</td>
</tr>
</table>
Great! Your model has pretty high accuracy on the training set. Lets now see how it does on the test set.
1.4 - Examining test set performance
End of explanation
X_my_sentences = np.array(["i adore you", "i love you", "funny lol", "lets play with a ball", "food is ready", "not feeling happy"])
Y_my_labels = np.array([[0], [0], [2], [1], [4],[3]])
pred = predict(X_my_sentences, Y_my_labels , W, b, word_to_vec_map)
print_predictions(X_my_sentences, pred)
Explanation: Expected Output:
<table>
<tr>
<td>
**Train set accuracy**
</td>
<td>
97.7
</td>
</tr>
<tr>
<td>
**Test set accuracy**
</td>
<td>
85.7
</td>
</tr>
</table>
Random guessing would have had 20% accuracy given that there are 5 classes. This is pretty good performance after training on only 127 examples.
In the training set, the algorithm saw the sentence "I love you" with the label ❤️. You can check however that the word "adore" does not appear in the training set. Nonetheless, lets see what happens if you write "I adore you."
End of explanation
print(Y_test.shape)
print(' '+ label_to_emoji(0)+ ' ' + label_to_emoji(1) + ' ' + label_to_emoji(2)+ ' ' + label_to_emoji(3)+' ' + label_to_emoji(4))
print(pd.crosstab(Y_test, pred_test.reshape(56,), rownames=['Actual'], colnames=['Predicted'], margins=True))
plot_confusion_matrix(Y_test, pred_test)
Explanation: Amazing! Because adore has a similar embedding as love, the algorithm has generalized correctly even to a word it has never seen before. Words such as heart, dear, beloved or adore have embedding vectors similar to love, and so might work too---feel free to modify the inputs above and try out a variety of input sentences. How well does it work?
Note though that it doesn't get "not feeling happy" correct. This algorithm ignores word ordering, so is not good at understanding phrases like "not happy."
Printing the confusion matrix can also help understand which classes are more difficult for your model. A confusion matrix shows how often an example whose label is one class ("actual" class) is mislabeled by the algorithm with a different class ("predicted" class).
End of explanation
import numpy as np
np.random.seed(0)
from keras.models import Model
from keras.layers import Dense, Input, Dropout, LSTM, Activation
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from keras.initializers import glorot_uniform
np.random.seed(1)
Explanation: <font color='blue'>
What you should remember from this part:
- Even with a 127 training examples, you can get a reasonably good model for Emojifying. This is due to the generalization power word vectors gives you.
- Emojify-V1 will perform poorly on sentences such as "This movie is not good and not enjoyable" because it doesn't understand combinations of words--it just averages all the words' embedding vectors together, without paying attention to the ordering of words. You will build a better algorithm in the next part.
2 - Emojifier-V2: Using LSTMs in Keras:
Let's build an LSTM model that takes as input word sequences. This model will be able to take word ordering into account. Emojifier-V2 will continue to use pre-trained word embeddings to represent words, but will feed them into an LSTM, whose job it is to predict the most appropriate emoji.
Run the following cell to load the Keras packages.
End of explanation
# GRADED FUNCTION: sentences_to_indices
def sentences_to_indices(X, word_to_index, max_len):
Converts an array of sentences (strings) into an array of indices corresponding to words in the sentences.
The output shape should be such that it can be given to `Embedding()` (described in Figure 4).
Arguments:
X -- array of sentences (strings), of shape (m, 1)
word_to_index -- a dictionary containing the each word mapped to its index
max_len -- maximum number of words in a sentence. You can assume every sentence in X is no longer than this.
Returns:
X_indices -- array of indices corresponding to words in the sentences from X, of shape (m, max_len)
m = X.shape[0] # number of training examples
### START CODE HERE ###
# Initialize X_indices as a numpy matrix of zeros and the correct shape (≈ 1 line)
X_indices = np.zeros((m, max_len))
for i in range(m): # loop over training examples
# Convert the ith training sentence in lower case and split is into words. You should get a list of words.
sentence_words = X[i].lower().split()
# Initialize j to 0
j = 0
# Loop over the words of sentence_words
for w in sentence_words:
# Set the (i,j)th entry of X_indices to the index of the correct word.
X_indices[i, j] = word_to_index[w]
# Increment j to j + 1
j = j + 1
### END CODE HERE ###
return X_indices
Explanation: 2.1 - Overview of the model
Here is the Emojifier-v2 you will implement:
<img src="images/emojifier-v2.png" style="width:700px;height:400px;"> <br>
<caption><center> Figure 3: Emojifier-V2. A 2-layer LSTM sequence classifier. </center></caption>
2.2 Keras and mini-batching
In this exercise, we want to train Keras using mini-batches. However, most deep learning frameworks require that all sequences in the same mini-batch have the same length. This is what allows vectorization to work: If you had a 3-word sentence and a 4-word sentence, then the computations needed for them are different (one takes 3 steps of an LSTM, one takes 4 steps) so it's just not possible to do them both at the same time.
The common solution to this is to use padding. Specifically, set a maximum sequence length, and pad all sequences to the same length. For example, if the maximum sequence length is 20, we could pad every sentence with "0"s so that each input sentence is of length 20. Thus, a sentence "i love you" would be represented as $(e_{i}, e_{love}, e_{you}, \vec{0}, \vec{0}, \ldots, \vec{0})$. In this example, any sentence longer than 20 words would have to be truncated. One simple way to choose the maximum sequence length is to just pick the length of the longest sentence in the training set.
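As a minimal sketch of the padding idea (not part of the graded exercise), the sequence module imported above can zero-pad lists of word indices to a fixed length; the index values below are made up purely for illustration.
# Illustrative only: zero-pad two made-up index sequences to a length of 5.
example_indices = [[155345, 225122], [220930, 286375, 151266]]
print(sequence.pad_sequences(example_indices, maxlen=5, padding='post', value=0))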
2.3 - The Embedding layer
In Keras, the embedding matrix is represented as a "layer", and maps positive integers (indices corresponding to words) into dense vectors of fixed size (the embedding vectors). It can be trained or initialized with a pretrained embedding. In this part, you will learn how to create an Embedding() layer in Keras, initialize it with the GloVe 50-dimensional vectors loaded earlier in the notebook. Because our training set is quite small, we will not update the word embeddings but will instead leave their values fixed. But in the code below, we'll show you how Keras allows you to either train or leave fixed this layer.
The Embedding() layer takes an integer matrix of size (batch size, max input length) as input. This corresponds to sentences converted into lists of indices (integers), as shown in the figure below.
<img src="images/embedding1.png" style="width:700px;height:250px;">
<caption><center> Figure 4: Embedding layer. This example shows the propagation of two examples through the embedding layer. Both have been zero-padded to a length of max_len=5. The final dimension of the representation is (2,max_len,50) because the word embeddings we are using are 50 dimensional. </center></caption>
The largest integer (i.e. word index) in the input should be no larger than the vocabulary size. The layer outputs an array of shape (batch size, max input length, dimension of word vectors).
The first step is to convert all your training sentences into lists of indices, and then zero-pad all these lists so that their length is the length of the longest sentence.
Exercise: Implement the function below to convert X (array of sentences as strings) into an array of indices corresponding to words in the sentences. The output shape should be such that it can be given to Embedding() (described in Figure 4).
End of explanation
X1 = np.array(["funny lol", "lets play baseball", "food is ready for you"])
X1_indices = sentences_to_indices(X1,word_to_index, max_len = 5)
print("X1 =", X1)
print("X1_indices =", X1_indices)
Explanation: Run the following cell to check what sentences_to_indices() does, and check your results.
End of explanation
# GRADED FUNCTION: pretrained_embedding_layer
def pretrained_embedding_layer(word_to_vec_map, word_to_index):
Creates a Keras Embedding() layer and loads in pre-trained GloVe 50-dimensional vectors.
Arguments:
word_to_vec_map -- dictionary mapping words to their GloVe vector representation.
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
embedding_layer -- pretrained layer Keras instance
vocab_len = len(word_to_index) + 1 # adding 1 to fit Keras embedding (requirement)
emb_dim = word_to_vec_map["cucumber"].shape[0] # define dimensionality of your GloVe word vectors (= 50)
### START CODE HERE ###
# Initialize the embedding matrix as a numpy array of zeros of shape (vocab_len, dimensions of word vectors = emb_dim)
emb_matrix = np.zeros((vocab_len, emb_dim))
# Set each row "index" of the embedding matrix to be the word vector representation of the "index"th word of the vocabulary
for word, index in word_to_index.items():
emb_matrix[index, :] = word_to_vec_map[word]
# Define the Keras embedding layer with the correct input/output sizes. Use Embedding(...) and make it non-trainable by setting trainable=False.
embedding_layer = Embedding(vocab_len, emb_dim, trainable = False)
### END CODE HERE ###
# Build the embedding layer, it is required before setting the weights of the embedding layer. Do not modify the "None".
embedding_layer.build((None,))
# Set the weights of the embedding layer to the embedding matrix. Your layer is now pretrained.
embedding_layer.set_weights([emb_matrix])
return embedding_layer
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
print("weights[0][1][3] =", embedding_layer.get_weights()[0][1][3])
Explanation: Expected Output:
<table>
<tr>
<td>
**X1 =**
</td>
<td>
['funny lol' 'lets play football' 'food is ready for you']
</td>
</tr>
<tr>
<td>
**X1_indices =**
</td>
<td>
[[ 155345. 225122. 0. 0. 0.] <br>
[ 220930. 286375. 151266. 0. 0.] <br>
[ 151204. 192973. 302254. 151349. 394475.]]
</td>
</tr>
</table>
Let's build the Embedding() layer in Keras, using pre-trained word vectors. After this layer is built, you will pass the output of sentences_to_indices() to it as an input, and the Embedding() layer will return the word embeddings for a sentence.
Exercise: Implement pretrained_embedding_layer(). You will need to carry out the following steps:
1. Initialize the embedding matrix as a numpy array of zeroes with the correct shape.
2. Fill in the embedding matrix with all the word embeddings extracted from word_to_vec_map.
3. Define Keras embedding layer. Use Embedding(). Be sure to make this layer non-trainable, by setting trainable = False when calling Embedding(). If you were to set trainable = True, then it will allow the optimization algorithm to modify the values of the word embeddings.
4. Set the embedding weights to be equal to the embedding matrix
End of explanation
# GRADED FUNCTION: Emojify_V2
def Emojify_V2(input_shape, word_to_vec_map, word_to_index):
Function creating the Emojify-v2 model's graph.
Arguments:
input_shape -- shape of the input, usually (max_len,)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
model -- a model instance in Keras
### START CODE HERE ###
# Define sentence_indices as the input of the graph, it should be of shape input_shape and dtype 'int32' (as it contains indices).
sentence_indices = Input(shape = input_shape, dtype = 'int32')
# Create the embedding layer pretrained with GloVe Vectors (≈1 line)
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
# Propagate sentence_indices through your embedding layer, you get back the embeddings
embeddings = embedding_layer(sentence_indices)
# Propagate the embeddings through an LSTM layer with 128-dimensional hidden state
# Be careful, the returned output should be a batch of sequences.
X = LSTM(128, return_sequences = True)(embeddings)
# Add dropout with a probability of 0.5
X = Dropout(0.5)(X)
# Propagate X through another LSTM layer with 128-dimensional hidden state
# Be careful, the returned output should be a single hidden state, not a batch of sequences.
X = LSTM(128, return_sequences = False)(X)
# Add dropout with a probability of 0.5
X = Dropout(0.5)(X)
# Propagate X through a Dense layer to get back a batch of 5-dimensional vectors.
X = Dense(5)(X)
# Add a softmax activation
X = Activation('softmax')(X)
# Create Model instance which converts sentence_indices into X.
model = Model(inputs=sentence_indices, outputs=X)
### END CODE HERE ###
return model
Explanation: Expected Output:
<table>
<tr>
<td>
**weights[0][1][3] =**
</td>
<td>
-0.3403
</td>
</tr>
</table>
2.3 Building the Emojifier-V2
Lets now build the Emojifier-V2 model. You will do so using the embedding layer you have built, and feed its output to an LSTM network.
<img src="images/emojifier-v2.png" style="width:700px;height:400px;"> <br>
<caption><center> Figure 3: Emojifier-v2. A 2-layer LSTM sequence classifier. </center></caption>
Exercise: Implement Emojify_V2(), which builds a Keras graph of the architecture shown in Figure 3. The model takes as input an array of sentences of shape (m, max_len, ) defined by input_shape. It should output a softmax probability vector of shape (m, C = 5). You may need Input(shape = ..., dtype = '...'), LSTM(), Dropout(), Dense(), and Activation().
End of explanation
model = Emojify_V2((maxLen,), word_to_vec_map, word_to_index)
model.summary()
Explanation: Run the following cell to create your model and check its summary. Because all sentences in the dataset are less than 10 words, we chose max_len = 10. You should see your architecture, it uses "20,223,927" parameters, of which 20,000,050 (the word embeddings) are non-trainable, and the remaining 223,877 are. Because our vocabulary size has 400,001 words (with valid indices from 0 to 400,000) there are 400,001*50 = 20,000,050 non-trainable parameters.
End of explanation
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
Explanation: As usual, after creating your model in Keras, you need to compile it and define the loss, optimizer and metrics you want to use. Compile your model using categorical_crossentropy loss, adam optimizer and ['accuracy'] metrics:
End of explanation
X_train_indices = sentences_to_indices(X_train, word_to_index, maxLen)
Y_train_oh = convert_to_one_hot(Y_train, C = 5)
Explanation: It's time to train your model. Your Emojifier-V2 model takes as input an array of shape (m, max_len) and outputs probability vectors of shape (m, number of classes). We thus have to convert X_train (array of sentences as strings) to X_train_indices (array of sentences as list of word indices), and Y_train (labels as indices) to Y_train_oh (labels as one-hot vectors).
End of explanation
model.fit(X_train_indices, Y_train_oh, epochs = 50, batch_size = 32, shuffle=True)
Explanation: Fit the Keras model on X_train_indices and Y_train_oh. We will use epochs = 50 and batch_size = 32.
End of explanation
X_test_indices = sentences_to_indices(X_test, word_to_index, max_len = maxLen)
Y_test_oh = convert_to_one_hot(Y_test, C = 5)
loss, acc = model.evaluate(X_test_indices, Y_test_oh)
print()
print("Test accuracy = ", acc)
Explanation: Your model should perform close to 100% accuracy on the training set. The exact accuracy you get may be a little different. Run the following cell to evaluate your model on the test set.
End of explanation
# This code allows you to see the mislabelled examples
C = 5
y_test_oh = np.eye(C)[Y_test.reshape(-1)]
X_test_indices = sentences_to_indices(X_test, word_to_index, maxLen)
pred = model.predict(X_test_indices)
for i in range(len(X_test)):
x = X_test_indices
num = np.argmax(pred[i])
if(num != Y_test[i]):
print('Expected emoji:'+ label_to_emoji(Y_test[i]) + ' prediction: '+ X_test[i] + label_to_emoji(num).strip())
Explanation: You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples.
End of explanation
# Change the sentence below to see your prediction. Make sure all the words are in the Glove embeddings.
x_test = np.array(['not feeling happy'])
X_test_indices = sentences_to_indices(x_test, word_to_index, maxLen)
print(x_test[0] +' '+ label_to_emoji(np.argmax(model.predict(X_test_indices))))
Explanation: Now you can try it on your own example. Write your own sentence below.
End of explanation |
23 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
On-Axis Field Due to a Current Loop
This simple formula uses the Law of Biot Savart, integrated over a circular current loop to obtain the magnetic field at any point along the axis of the loop.
$B = \frac {\mu_o i r^2}{2(r^2 + x^2)^{\frac 3 2}}$
B is the magnetic field, in teslas, at any point on the axis of the current loop. The direction of the field is perpendicular to the plane of the loop.
$\mathbf \mu_o$ is the permeability constant (1.26x10<sup>-6</sup> Hm<sup>-1</sup>)
i is the current in the wire, in amperes.
r is the radius of the current loop, in meters.
x is the distance, on axis, from the center of the current loop to the location where the magnetic field is calculated, in meters.
Special Case
Step1: Use the Baxial function to compute the central field of a unit loop (1 meter radius, 1 ampere of current), in teslas
Step2: You can try selecting your own current (a), radius (m) and axial position (m) combination to see what the resulting field is
Step3: Now plot the field intensity, as a fraction of the central field, at various positions along the axis (measured as multiples of the coil radius) | Python Code:
%matplotlib inline
from scipy.special import ellipk, ellipe, ellipkm1
from numpy import pi, sqrt, linspace
from pylab import plot, xlabel, ylabel, suptitle, legend, show
uo = 4E-7*pi # Permeability constant - units of H/m
# On-Axis field = f(current and radius of loop, x of measurement point)
def Baxial(i, a, x, u=uo):
if a == 0:
if x == 0:
return float('nan')  # the field is undefined for a zero-radius loop at x = 0
else:
return 0.0
else:
return (u*i*a**2)/2.0/(a**2 + x**2)**(1.5)
Explanation: On-Axis Field Due to a Current Loop
This simple formula uses the Law of Biot Savart, integrated over a circular current loop to obtain the magnetic field at any point along the axis of the loop.
$B = \frac {\mu_o i r^2}{2(r^2 + x^2)^{\frac 3 2}}$
B is the magnetic field, in teslas, at any point on the axis of the current loop. The direction of the field is perpendicular to the plane of the loop.
$\mathbf \mu_o$ is the permeability constant (1.26x10<sup>-6</sup> Hm<sup>-1</sup>)
i is the current in the wire, in amperes.
r is the radius of the current loop, in meters.
x is the distance, on axis, from the center of the current loop to the location where the magnetic field is calculated, in meters.
Special Case: x = 0
$B = \frac {\mu_o i}{2 r}$
Special Case: x >> 0
$B = \frac {\mu_o i r^2}{2 x^3}$
Note that this is equivalent to the expression for on-axis magnetic field due to a magnetic dipole:
$B = \frac {\mu_o i A}{2 \pi x^3}$
where A is the area of the current loop, or $\pi r^2$.
Code Example
The following IPython code illustrates how to compute the on-axis field due to a simple current loop.
End of explanation
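As a quick, hedged check of the two special cases stated above (this cell is not part of the original notebook), the central-field formula and the far-field dipole approximation can be compared against Baxial directly:
# Central field (x = 0): B = uo*i/(2*r) should match Baxial exactly.
i_test, r_test = 1.0, 1.0
print(Baxial(i_test, r_test, 0), uo * i_test / (2 * r_test))
# Far field (x >> r): B ~ uo*i*r^2/(2*x^3) should be a close approximation.
x_far = 100.0
print(Baxial(i_test, r_test, x_far), uo * i_test * r_test**2 / (2 * x_far**3))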
print("{:.3} T".format(Baxial(1, 1, 0)))
Explanation: Use the Baxial function to compute the central field of a unit loop (1 meter radius, 1 ampere of current), in teslas:
End of explanation
from ipywidgets import interactive
from IPython.display import display
def B(i, a, x):
return "{:.3} T".format(Baxial(i,a,x))
v = interactive(B, i=(0.0, 20.0), a=(0.0, 10.0), x=(0.0, 10.0))
display(v)
Explanation: You can try selecting your own current (a), radius (m) and axial position (m) combination to see what the resulting field is:
End of explanation
axiallimit = 5.0 # meters from center
radius = 1.0 # loop radius in meters
X = linspace(0,axiallimit)
Bcenter = Baxial(1,1,0)
plot(X, [Baxial(1,1,x)/Bcenter for x in X])
xlabel("Axial Position (multiples of radius)")
ylabel("Axial B field / Bo (unitless)")
suptitle("Axial B field of simple loop")
show()
Explanation: Now plot the field intensity, as a fraction of the central field, at various positions along the axis (measured as multiples of the coil radius):
End of explanation |
24 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
★ Ordinary Differential Equations ★
Step1: 6.1 Initial Value Problem
Euler's Method
Step2: Example
Apply Euler's Method to the initial value problem
$
\begin{cases}
& y' = ty + t^3\\
& y(0) = 1\\
& t\:in\:[0,1]
\end{cases}
$
Step3: Example
Apply Euler's method to the initial value problem
$$
\left\{\begin{matrix}
\begin{align}
& y' = -4t^3y^2 \\
& y(-10) = 1 / 10001 \\
& t \:\: in \:\: [-10,0]
\end{align}
\end{matrix}\right.
$$
Step4: Explicit Trapezoid Method
Step5: Example
Apply the Explicit Trapezoid Method to the initial value problem with initial condition $y(0) = 1$
$$
\begin{cases}
\begin{align}
& y' = ty + t^3\\
& y(0) = 1\\
& t\:\:in\:\:[0,1]
\end{align}
\end{cases}
$$
Step6: Taylor Method for order k
$w_0 = y_0$
$w_{i+1} = w_i + hf(t_i,w_i) + \frac{h^2}{2}f'(t_i,w_i) + \cdots + \frac{h^k}{k!}f^{(k-1)}(t_i,w_i)$
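A minimal sketch of the order-2 case is given below; it is not part of the original notebook, and it assumes the caller supplies both f and its total derivative df = f_t + f_y * f.
import numpy as np

def taylor2_method(f, df, a, b, y0, step=10):
    # df(t, w) must return the total derivative f_t + f_y * f evaluated at (t, w).
    t, w = a, y0
    h = (b - a) / step
    ws = np.zeros(step + 1)
    ws[0] = y0
    for i in range(step):
        w += h * f(t, w) + (h ** 2 / 2) * df(t, w)
        t += h
        ws[i + 1] = w
    return w, ws

# Example for y' = t*y + t^3, where the total derivative is y + 3*t^2 + t*(t*y + t^3):
# w2, _ = taylor2_method(lambda t, y: t * y + t ** 3,
#                        lambda t, y: y + 3 * t ** 2 + t * (t * y + t ** 3), 0, 1, 1)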
6.3 Systems of ordinary differential equations
Example
Apply Euler's Method to the first-order system of two equations
$$
\left\{\begin{matrix}\begin{align}
y_1' &= y_2^2 - 2y_1 \\
y_2' &= y_1 - y_2 - ty_2^2 \\
y_1(0) &= 0 \\
y_2(0) &= 1
\end{align}\end{matrix}\right.
$$
Step7: 6.4 Runge-Kutta Methods And Applications
Midpoint Method
$$
\begin{align}
w_0 &= y_0 \\
w_{i+1} &= w_i + hf(t_i + \frac{h}{2},w_i + \frac{h}{2}f(t_i,w_i))
\end{align}
$$
Step8: Runge-Kutta Method of order four (RK4)
Step9: Example
Apply Runge-Kutta of order four to the initial value problem
$$
\left\{\begin{matrix}\begin{align}
& y' = ty + t^3 \\
& y(0) = 1
\end{align}\end{matrix}\right.
$$
Step10: 6.5 Variable Step-Size Methods
Runge-Kutta order 2 / order 3 embedded pair
$$
\begin{align}
w_{i+1} &= w_i + h\frac{s_1 + s_2}{2} \\
z_{i+1} &= w_i + h\frac{s_1 + 4s_3 + s_2}{6} \\
\end{align}
$$
where
$$
\begin{align}
s_1 &= f(t_i, w_i) \\
s_2 &= f(t_i + h, w_i + hs_1) \\
s_3 &= f(t_i + \frac{1}{2}h, w_i + \frac{1}{2}h\frac{s_1 + s_2}{2}) \\
e_{i+1} &\approx |w_{i+1} - z_{i+1}| = |h\frac{s_1 - 2s_3 + s_2}{3}|
\end{align}
$$
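A hedged sketch of how this embedded pair can drive a variable step size is shown below; it is not from the original notebook, and the acceptance rule and safety factor 0.9 are common but arbitrary choices.
import numpy as np

def rk23_adaptive(f, a, b, y0, h=0.1, tol=1e-6):
    # Shrink or grow h so the local error estimate stays below tol.
    t, w = a, y0
    ts, ws = [t], [w]
    while t < b:
        h = min(h, b - t)
        s1 = f(t, w)
        s2 = f(t + h, w + h * s1)
        s3 = f(t + h / 2, w + h / 2 * (s1 + s2) / 2)
        err = abs(h * (s1 - 2 * s3 + s2) / 3)
        if err <= tol * max(1.0, abs(w)):
            w = w + h * (s1 + 4 * s3 + s2) / 6   # advance with the order-3 value
            t = t + h
            ts.append(t)
            ws.append(w)
        h = 0.9 * h * (tol / max(err, 1e-15)) ** (1.0 / 3.0)
    return np.array(ts), np.array(ws)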
Bogacki-Shampine order 2 / order 3 embedded pair
$$
\begin{align}
s_1 &= f(t_i, w_i) \\
s_2 &= f(t_i + \frac{1}{2}h, w_i + \frac{1}{2}hs_1) \\
s_3 &= f(t_i + \frac{3}{4}h, w_i + \frac{3}{4}hs_2) \\
z_{i+1} &= w_i + \frac{h}{9}(2s_1 + 3s_2 + 4s_3) \\
s_4 &= f(t_i + h, z_{i+1}) \\
w_{i+1} &= w_{i} + \frac{h}{24}(7s_1 + 6s_2 + 8s_3 + 3s_4) \\
e_{i+1} &= |z_{i+1} - w_{i+1}| = \frac{h}{72}|-5s_1 + 6s_2 + 8s_3 - 9s_4|
\end{align}
$$
Runge-Kutta-Fehlberg order 4 / order 5 embedded pair
$$
\begin{align}
s_1 &= f(t_i, w_i) \
s_2 &= f(t_i + \frac{1}{4}h, w_i + \frac{1}{4}hs_1) \
s_3 &= f(t_i + \frac{3}{8}h, w_i + \frac{3}{32}hs_1 + \frac{9}{32}hs_2) \
s_4 &= f(t_i + \frac{12}{13}h, w_i + \frac{1932}{2197}hs_1 - \frac{7200}{2197}hs_2 + \frac{7296}{2197}hs_3) \
s_5 &= f(t_i + h, w_i + \frac{439}{216}hs_1 - 8hs_2 + \frac{3680}{513}hs_3 - \frac{845}{4104}hs_4) \
s_6 &= f(t_i + \frac{1}{2}h, w_i - \frac{8}{27}hs_1 + 2hs_2 - \frac{3544}{2565}hs_3 + \frac{1859}{4104}hs_4 -\frac{11}{40}hs_5) \
w_{i+1} &= w_i + h(\frac{25}{216}s_1 + \frac{1408}{4275}s_3 + \frac{2197}{4104}s_4 - \frac{1}{5}s_5) \
z_{i+1} &= w_i + h(\frac{16}{135}s_1 + \frac{6656}{12825}s_3 + \frac{28561}{56430}s_4 - \frac{9}{50}s_5 + \frac{2}{55}s_6) \
e_{i + 1} &= |z_{i+1} - w_{i+1}| = h|\frac{1}{360}s_1 - \frac{128}{4275}s_3 - \frac{2197}{75240}s_4 + \frac{1}{50}s_5 + \frac{2}{55}s_6|
\end{align}
$$
Step11: Dormand-Prince order 4 / order 5 embedded pair
$$
\begin{align}
s_1 &= f(t_i, w_i) \
s_2 &= f(t_i + \frac{1}{5}h, w_i + \frac{1}{5}hs_1) \\
s_3 &= f(t_i + \frac{3}{10}h, w_i + \frac{3}{40}hs_1 + \frac{9}{40}hs_2) \\
s_4 &= f(t_i + \frac{4}{5}h, w_i + \frac{44}{45}hs_1 - \frac{56}{15}hs_2 + \frac{32}{9}hs_3) \\
s_5 &= f(t_i + \frac{8}{9}h, w_i + h(\frac{19372}{6561}s_1 - \frac{25360}{2187}s_2 + \frac{64448}{6561}s_3 - \frac{212}{729}s_4)) \
s_6 &= f(t_i + h, w_i + h(\frac{9017}{3168}s_1 - \frac{355}{33}s_2 + \frac{46732}{5247}s_3 + \frac{49}{176}s_4 - \frac{5103}{18656}s_5)) \
z_{i+1} &= w_i +h(\frac{35}{384}s_1 + \frac{500}{1113}s_3 + \frac{125}{192}s_4 - \frac{2187}{6784}s_5 + \frac{11}{84}s_6) \
s_7 &= f(t_i + h, z_{i+1}) \
w_{i+1} &= w_i + h(\frac{5179}{57600}s_1 + \frac{7571}{16695}s_3 + \frac{393}{640}s_4 - \frac{92097}{339200}s_5 + \frac{187}{2100}s_6 + \frac{1}{40}s_7) \
e_{i+1} &= |z_{i+1} - w_{i+1}| = h|\frac{71}{57600}s_1 - \frac{71}{16695}s_3 + \frac{71}{1920}s_4 - \frac{17253}{339200}s_5 + \frac{22}{525}s_6 - \frac{1}{40}s_7|
\end{align}
$$
Example
Use ode45 to solve the initial value problem within a relative tolerance of $10^{-4}$
$
\left\{\begin{matrix}\begin{align}
& y' = ty + t^3 \\
& y(0) = 1 \\
& t\:\:in\:\:[0,1]
\end{align}\end{matrix}\right.
$
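MATLAB's ode45 is based on the Dormand-Prince pair; in SciPy the closest analogue is solve_ivp with method='RK45'. A hedged sketch (assuming SciPy >= 1.0, which provides solve_ivp) is:
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, y: t * y + t ** 3, (0, 1), [1.0],
                method='RK45', rtol=1e-4, atol=1e-8)
print(sol.t[-1], sol.y[0, -1])   # approximate y(1)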
Step12: 6.6 Implicit Methods And Stiff Equations
Backward Euler Method
$$
\begin{align}
w_0 &= y_0 \\
w_{i+1} &= w_{i} + hf(t_{i+1}, w_{i+1})
\end{align}
$$
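Because w_{i+1} appears on both sides, every step requires solving an implicit equation. A minimal sketch (not from the original notebook) that resolves it with a few fixed-point iterations is shown below; Newton's method would be the more robust choice for genuinely stiff problems.
import numpy as np

def backward_euler(f, a, b, y0, step=10, fp_iters=20):
    # Each step solves w_new = w + h * f(t_new, w_new) by simple fixed-point iteration.
    t, w = a, y0
    h = (b - a) / step
    ws = np.zeros(step + 1)
    ws[0] = y0
    for i in range(step):
        t_new = t + h
        w_new = w                       # initial guess: previous value
        for _ in range(fp_iters):
            w_new = w + h * f(t_new, w_new)
        t, w = t_new, w_new
        ws[i + 1] = w
    return w, ws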
Example
Apply the Backward Euler Method to the initial value problem
$$
\left{\begin{matrix}\begin{align}
& y' = y + 8y^2 - 9y^3 \
& y(0) = 1 / 2 \
& t\
Step13: 6.7 Multistep Methods
Adams-Bashforth Two-Step Method
$w_{i + 1} = w_i + h [\frac{3}{2}f(t_i, w_i) - \frac{1}{2}f(t_{i - 1}, w_{i - 1})]$
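A hedged sketch of the two-step method follows (not from the original notebook); it bootstraps w_1 with one explicit trapezoid step, which is one common choice among several.
import numpy as np

def adams_bashforth2(f, a, b, y0, step=10):
    h = (b - a) / step
    ws = np.zeros(step + 1)
    ws[0] = y0
    # One explicit trapezoid step to obtain w_1.
    ws[1] = ws[0] + (h / 2) * (f(a, ws[0]) + f(a + h, ws[0] + h * f(a, ws[0])))
    for i in range(1, step):
        t_i, t_im1 = a + i * h, a + (i - 1) * h
        ws[i + 1] = ws[i] + h * (1.5 * f(t_i, ws[i]) - 0.5 * f(t_im1, ws[i - 1]))
    return ws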
Step14: Example
Apply strongly stable method, weakly stable method, and unstable method to the initial value problem
$$
\left{\begin{matrix}\begin{align}
& y' = -3y \
& y(0) = 1 \
&t\ | Python Code:
# Import modules
import math
import numpy as np
import scipy
from scipy.integrate import ode
from matplotlib import pyplot as plt
Explanation: ★ Ordinary Differential Equations ★
End of explanation
def euler_method(f, a, b, y0, step=10):
t = a
w = y0
ws = np.zeros(step + 1)
ws[0] = y0
h = (b - a) / step
for i in range(step):
w += h * f(t, w)
t += h
ws[i + 1] = w
return w, ws
Explanation: 6.1 Initial Value Problem
Euler's Method
End of explanation
f = lambda t, y : t * y + np.power(t, 3)
w = euler_method(f, 0, 1, 1)
print(w[0])
Explanation: Example
Apply Euler's Method to the initial value problem
$
\begin{cases}
& y' = ty + t^3\\
& y(0) = 1\\
& t\:in\:[0,1]
\end{cases}
$
End of explanation
f = lambda t, y : -4 * np.power(t, 3) * np.power(y, 2)
y = lambda t : 1 / (np.power(t, 4) + 1)
_, ws4 = euler_method(f, -10, 0, 1.0 / 10001.0, int(1e4))
_, ws5 = euler_method(f, -10, 0, 1.0 / 10001.0, int(1e5))
w, ws6 = euler_method(f, -10, 0, 1.0 / 10001.0, int(1e6))
x4 = np.linspace(-10, 0 , int(1e4) + 1)
x5 = np.linspace(-10, 0 , int(1e5) + 1)
x6 = np.linspace(-10, 0 , int(1e6) + 1)
plt.plot(x4, ws4, linewidth=3, label='$h = 10^{-3}$')
plt.plot(x5, ws5, linewidth=3, label='$h = 10^{-4}$')
plt.plot(x6, ws6, linewidth=3, label='$h = 10^{-5}$')
plt.axhline(1.0, color='gray', linewidth=3, linestyle='--')
plt.axvline(0, color='black')
plt.axhline(0, color='black')
plt.legend()
plt.show()
Explanation: Example
Apply Euler's method to the initial value problem
$$
\left\{\begin{matrix}
\begin{align}
& y' = -4t^3y^2 \\
& y(-10) = 1 / 10001 \\
& t \:\: in \:\: [-10,0]
\end{align}
\end{matrix}\right.
$$
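As an illustrative add-on (not part of the original run), the exact solution $y(t) = 1/(t^4 + 1)$, already defined above as y, can be plotted against the finest Euler approximation:
```
# Illustrative comparison against the exact solution (assumes the cell above has run)
plt.plot(x6, ws6, linewidth=3, label='Euler, $h = 10^{-5}$')
plt.plot(x6, y(x6), 'k--', linewidth=1, label='exact $1/(t^4+1)$')
plt.legend()
plt.show()
```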
End of explanation
def explicit_trapezoid_method(f, a, b, y0, step = 10):
t = a
w = y0
ws = np.zeros(step + 1)
ws[0] = y0
h = (b - a) / step
for i in range(step):
w += ( h / 2 ) * ( f(t, w) + f(t + h, w + h * f(t, w) ) )
t += h
ws[i + 1] = w
return w, ws
Explanation: Explicit Trapezoid Method
End of explanation
f = lambda t, y : t * y + np.power(t, 3)
w, _ = explicit_trapezoid_method(f, 0, 1, 1, step = int(1e1) )
print(w)
Explanation: Example
Apply the Explicit Trapezoid Method to the initial value problem with initial condition $y(0) = 1$
$$
\begin{cases}
\begin{align}
& y' = ty + t^3 \\
& y(0) = 1 \\
& t\:\:in\:\:[0,1]
\end{align}
\end{cases}
$$
End of explanation
def euler_method_vec(f1, f2, a, b, y0, step=10):
t = a
ws1 = np.zeros(step + 1)
ws2 = np.zeros(step + 1)
ws1[0] = y0[0]
ws2[0] = y0[1]
h = (b - a) / step
for i in range(step):
ws1[i + 1] = ws1[i] + h * f1(ws1[i], ws2[i])
ws2[i + 1] = ws2[i] + h * f2(t, ws1[i], ws2[i])
t += h
return ws1, ws2
f1 = lambda y1, y2 : np.power(y2, 2) - 2 * y1
f2 = lambda t, y1, y2 : y1 - y2 - t * np.power(y2, 2)
euler_method_vec(f1, f2, 0, 1, np.array([0, 1]), 10)
Explanation: Taylor Method for order k
$w_0 = y_0$
$w_{i+1} = w_i + hf(t_i,w_i) + \frac{h^2}{2}f'(t_i,w_i) + \cdots + \frac{h^k}{k!}f^{(k-1)}(t_i,w_i)$
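A minimal sketch of the order-2 case (the helper name taylor2_method and the requirement that the caller supply the total derivative $f'$ are assumptions for illustration, not part of the original notebook):
```
def taylor2_method(f, df, a, b, y0, step=10):
    # df(t, w) must return the total derivative f'(t, y) = f_t + f_y * f
    t, w = a, y0
    h = (b - a) / step
    for _ in range(step):
        w += h * f(t, w) + h ** 2 / 2 * df(t, w)
        t += h
    return w

# e.g. for y' = ty + t^3:  f'(t, y) = y + t*y' + 3t^2 = y + t*(t*y + t**3) + 3*t**2
```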
6.3 Systems of ordinary differential equations
Example
Apply Euler's Method to the first-order system of two equations
$$
\left\{\begin{matrix}\begin{align}
y_1' &= y_2^2 - 2y_1 \\
y_2' &= y_1 - y_2 - ty_2^2 \\
y_1(0) &= 0 \\
y_2(0) &= 1
\end{align}\end{matrix}\right.
$$
End of explanation
def midpoint_method(f, a, b, y0, step = 10):
t = a
w = y0
ws = np.zeros(step + 1)
ws[0] = y0
h = (b - a) / step
for i in range(step):
w += h * f(t + h / 2, w + h / 2 * f(t, w))
t += h
ws[i + 1] = w
return ws
Explanation: 6.4 Runge-Kutta Methods And Applications
Midpoint Method
$$
\begin{align}
w_0 &= y_0 \\
w_{i+1} &= w_i + hf(t_i + \frac{h}{2},w_i + \frac{h}{2}f(t_i,w_i))
\end{align}
$$
End of explanation
def runge_kutta_method(f, a, b, y0, step = 10):
t = a
h = (b - a) / step
w_data = np.zeros(step + 1)
w = w_data[0] = y0
for i in range(step):
s1 = f(t, w)
s2 = f(t + h / 2, w + s1 * h / 2)
s3 = f(t + h / 2, w + s2 * h / 2)
s4 = f(t + h, w + h * s3)
t += h
w += h / 6 * (s1 + 2 * s2 + 2 * s3 + s4)
w_data[i + 1] = w
return w_data
Explanation: Runge-Kutta Method of order four (RK4)
End of explanation
f = lambda t, y : t * y + np.power(t, 3)
ans = lambda t : - pow(t, 2) - 2 + 3 * math.exp(pow(t, 2) / 2)
w_data = runge_kutta_method(f, 0, 1, 1, step = 10)
print(' answer:%.15f' %ans(1) )
print('predict:%.15f' %w_data[-1] )
Explanation: Example
Apply Runge-Kutta of order four to the initial value problem
$$
\left\{\begin{matrix}\begin{align}
& y' = ty + t^3 \\
& y(0) = 1
\end{align}\end{matrix}\right.
$$
End of explanation
def ode_rkf45(f, t0, b, y0, h = 1e-3, tol = 1e-6):
w = y0
t = t0
    while t < b:
        w_this, t_this = w, t  # remember the current state in case the step is rejected
s1 = f(t, w)
hs1 = h * s1
s2 = f(t + h / 4, w + hs1 / 4)
hs2 = h * s2
s3 = f(t + 3 / 8 * h, w + 3 / 32 * hs1 + 9 / 32 * hs2)
hs3 = h * s3
s4 = f(t + 12 / 13 * h, w + 1932 / 2197 * hs1 - 7200 / 2197 * hs2 + 7296 / 2197 * hs3)
hs4 = h * s4
s5 = f(t + h, w + 439 / 216 * hs1 - 8 * hs2 + 3680 / 513 * hs3 - 845 / 4104 * hs4)
hs5 = s5 * h
s6 = f(t + h / 2, w - 8 / 27 * hs1 + 2 * hs2 - 3544 / 2565 * hs3 + 1859 / 4104 * hs4 - 11 / 40 * hs5)
z = w + h * (16 / 135 * s1 + 6656 / 12825 * s3 + 28561 / 56430 * s4 - 9 / 50 * s5 + 2 / 55 * s6)
w += h * (25 / 216 * s1 + 1408 / 2565 * s3 + 2197 / 4104 * s4 - s5 / 5)
        t += h
        e = abs(z - w)
        if e / abs(w) < tol:
            w = z  # accept the step and keep the higher-order estimate
        else:
            # reject the step: restore the saved state and retry with a smaller step size
            w, t = w_this, t_this
            h = 0.8 * pow(tol * abs(w) / e, 1 / 5) * h
return w
Explanation: 6.5 Variable Step-Size Methods
Runge-Kutta order 2 / order 3 embedded pair
$$
\begin{align}
w_{i+1} &= w_i + h\frac{s_1 + s_2}{2} \\
z_{i+1} &= w_i + h\frac{s_1 + 4s_3 + s_2}{6}
\end{align}
$$
where
$$
\begin{align}
s_1 &= f(t_i, w_i) \\
s_2 &= f(t_i + h, w_i + hs_1) \\
s_3 &= f(t_i + \frac{1}{2}h, w_i + \frac{1}{2}h\frac{s_1 + s_2}{2}) \\
e_{i+1} &\approx |w_{i+1} - z_{i+1}| = |h\frac{s_1 - 2s_3 + s_2}{3}|
\end{align}
$$
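A single step of this pair might be sketched as follows (the function name rk23_step is an assumption; it simply transcribes the formulas above):
```
def rk23_step(f, t, w, h):
    s1 = f(t, w)
    s2 = f(t + h, w + h * s1)
    s3 = f(t + h / 2, w + h / 2 * (s1 + s2) / 2)
    w2 = w + h * (s1 + s2) / 2           # order-2 estimate
    z3 = w + h * (s1 + 4 * s3 + s2) / 6  # order-3 estimate
    e = abs(h * (s1 - 2 * s3 + s2) / 3)  # local error estimate
    return z3, e
```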
Bogacki-Shampine order 2 / order 3 embedded pair
$$
\begin{align}
s_1 &= f(t_i, w_i) \\
s_2 &= f(t_i + \frac{1}{2}h, w_i + \frac{1}{2}hs_1) \\
s_3 &= f(t_i + \frac{3}{4}h, w_i + \frac{3}{4}hs_2) \\
z_{i+1} &= w_i + \frac{h}{9}(2s_1 + 3s_2 + 4s_3) \\
s_4 &= f(t_i + h, z_{i+1}) \\
w_{i+1} &= w_{i} + \frac{h}{24}(7s_1 + 6s_2 + 8s_3 + 3s_4) \\
e_{i+1} &= |z_{i+1} - w_{i+1}| = \frac{h}{72}|-5s_1 + 6s_2 + 8s_3 - 9s_4|
\end{align}
$$
Runge-Kutta-Fehlberg order 4 / order 5 embedded pair
$$
\begin{align}
s_1 &= f(t_i, w_i) \\
s_2 &= f(t_i + \frac{1}{4}h, w_i + \frac{1}{4}hs_1) \\
s_3 &= f(t_i + \frac{3}{8}h, w_i + \frac{3}{32}hs_1 + \frac{9}{32}hs_2) \\
s_4 &= f(t_i + \frac{12}{13}h, w_i + \frac{1932}{2197}hs_1 - \frac{7200}{2197}hs_2 + \frac{7296}{2197}hs_3) \\
s_5 &= f(t_i + h, w_i + \frac{439}{216}hs_1 - 8hs_2 + \frac{3680}{513}hs_3 - \frac{845}{4104}hs_4) \\
s_6 &= f(t_i + \frac{1}{2}h, w_i - \frac{8}{27}hs_1 + 2hs_2 - \frac{3544}{2565}hs_3 + \frac{1859}{4104}hs_4 - \frac{11}{40}hs_5) \\
w_{i+1} &= w_i + h(\frac{25}{216}s_1 + \frac{1408}{2565}s_3 + \frac{2197}{4104}s_4 - \frac{1}{5}s_5) \\
z_{i+1} &= w_i + h(\frac{16}{135}s_1 + \frac{6656}{12825}s_3 + \frac{28561}{56430}s_4 - \frac{9}{50}s_5 + \frac{2}{55}s_6) \\
e_{i + 1} &= |z_{i+1} - w_{i+1}| = h|\frac{1}{360}s_1 - \frac{128}{4275}s_3 - \frac{2197}{75240}s_4 + \frac{1}{50}s_5 + \frac{2}{55}s_6|
\end{align}
$$
End of explanation
f = lambda t, y : t * y + np.power(t, 3)
ans = lambda t : - pow(t, 2) - 2 + 3 * math.exp(pow(t, 2) / 2)
print('%.15f' %ans(1) )
print('%.15f' %ode_rkf45(f, 0, 1, 1, tol=1e-13) )
f = lambda t, y : t * y + np.power(t, 3)
r = ode(f).set_integrator('dopri5')
r.set_initial_value(1, 0)
terminate = 0.9
dt = 0.1
while r.successful() and r.t <= terminate:
print(r.t + dt, r.integrate(r.t + dt))
Explanation: Dormand-Prince order 4 / order 5 embedded pair
$$
\begin{align}
s_1 &= f(t_i, w_i) \\
s_2 &= f(t_i + \frac{1}{5}h, w_i + \frac{1}{5}hs_1) \\
s_3 &= f(t_i + \frac{3}{10}h, w_i + \frac{3}{40}hs_1 + \frac{9}{40}hs_2) \\
s_4 &= f(t_i + \frac{4}{5}h, w_i + \frac{44}{45}hs_1 - \frac{56}{15}hs_2 + \frac{32}{9}hs_3) \\
s_5 &= f(t_i + \frac{8}{9}h, w_i + h(\frac{19372}{6561}s_1 - \frac{25360}{2187}s_2 + \frac{64448}{6561}s_3 - \frac{212}{729}s_4)) \\
s_6 &= f(t_i + h, w_i + h(\frac{9017}{3168}s_1 - \frac{355}{33}s_2 + \frac{46732}{5247}s_3 + \frac{49}{176}s_4 - \frac{5103}{18656}s_5)) \\
z_{i+1} &= w_i + h(\frac{35}{384}s_1 + \frac{500}{1113}s_3 + \frac{125}{192}s_4 - \frac{2187}{6784}s_5 + \frac{11}{84}s_6) \\
s_7 &= f(t_i + h, z_{i+1}) \\
w_{i+1} &= w_i + h(\frac{5179}{57600}s_1 + \frac{7571}{16695}s_3 + \frac{393}{640}s_4 - \frac{92097}{339200}s_5 + \frac{187}{2100}s_6 + \frac{1}{40}s_7) \\
e_{i+1} &= |z_{i+1} - w_{i+1}| = h|\frac{71}{57600}s_1 - \frac{71}{16695}s_3 + \frac{71}{1920}s_4 - \frac{17253}{339200}s_5 + \frac{22}{525}s_6 - \frac{1}{40}s_7|
\end{align}
$$
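A direct transcription of a single step might look like the sketch below (the function name dp45_step is an assumption; in this notebook scipy's dopri5 integrator is used instead):
```
def dp45_step(f, t, w, h):
    s1 = f(t, w)
    s2 = f(t + h / 5, w + h / 5 * s1)
    s3 = f(t + 3 * h / 10, w + h * (3 / 40 * s1 + 9 / 40 * s2))
    s4 = f(t + 4 * h / 5, w + h * (44 / 45 * s1 - 56 / 15 * s2 + 32 / 9 * s3))
    s5 = f(t + 8 * h / 9, w + h * (19372 / 6561 * s1 - 25360 / 2187 * s2
                                   + 64448 / 6561 * s3 - 212 / 729 * s4))
    s6 = f(t + h, w + h * (9017 / 3168 * s1 - 355 / 33 * s2 + 46732 / 5247 * s3
                           + 49 / 176 * s4 - 5103 / 18656 * s5))
    z = w + h * (35 / 384 * s1 + 500 / 1113 * s3 + 125 / 192 * s4
                 - 2187 / 6784 * s5 + 11 / 84 * s6)   # order-5 value
    s7 = f(t + h, z)
    w4 = w + h * (5179 / 57600 * s1 + 7571 / 16695 * s3 + 393 / 640 * s4
                  - 92097 / 339200 * s5 + 187 / 2100 * s6 + s7 / 40)  # order-4 value
    return z, abs(z - w4)
```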
Example
Use ode45 to solve the initial value problem within a relative tolerance of $10^{-4}$
$
\left\{\begin{matrix}\begin{align}
& y' = ty + t^3 \\
& y(0) = 1 \\
& t\:in\:[0,1]
\end{align}\end{matrix}\right.
$
End of explanation
step = 20
f = lambda z, h, d : 9 * h * np.power(z, 3) - 8 * h * np.power(z, 2) + (1 - h) * z - d
x0 = 0.5
h = 0.15
y0 = 0.5  # w_0 = y(0) = 1/2
for _ in range(step):
z = scipy.optimize.newton(f, x0, args=(h, y0))
x0 = y0 = z
print(z)
Explanation: 6.6 Implicit Methods And Stiff Equations
Backward Euler Method
$$
\begin{align}
w_0 &= y_0 \\
w_{i+1} &= w_{i} + hf(t_{i+1}, w_{i+1})
\end{align}
$$
Example
Apply the Backward Euler Method to the initial value problem
$$
\left\{\begin{matrix}\begin{align}
& y' = y + 8y^2 - 9y^3 \\
& y(0) = 1 / 2 \\
& t\:in\:[0,3]
\end{align}\end{matrix}\right.
$$
$$
\begin{align}
& w_{i+1} = w_i + hf(t_{i+1}, w_{i+1}) = w_i + h(w_{i+1} + 8w_{i+1}^2 - 9w_{i+1}^3) \\
& Let\:z = w_i + h(z + 8z^2 - 9z^3) \\
& \Rightarrow 9hz^3 - 8hz^2 + (1 - h)z - w_i = 0
\end{align}
$$
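A more general sketch (the name backward_euler is an assumption, not part of the original notebook) solves the implicit equation with scipy.optimize.newton at every step, using the previous value as the initial guess:
```
from scipy import optimize

def backward_euler(f, a, b, y0, step=10):
    t, w = a, y0
    h = (b - a) / step
    for _ in range(step):
        g = lambda z: z - w - h * f(t + h, z)  # w_{i+1} - w_i - h f(t_{i+1}, w_{i+1}) = 0
        w = optimize.newton(g, w)
        t += h
    return w
```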
End of explanation
def adams_bashforth(f, a, b, y0, step = 10):
h = (b - a) / step
w = y0
    # bootstrap with one explicit trapezoid step of size h to obtain w_1
    w_n = explicit_trapezoid_method(f, a, a + h, y0, step=1)[0]
t = a
t_n = a + h
for _ in range(step - 1):
tmp_w_n = w_n
w_n += h * (1.5 * f(t_n, w_n) - 0.5 * f(t, w))
w = tmp_w_n
t_n += h
t += h
return w_n
Explanation: 6.7 Multistep Methods
Adams-Bashforth Two-Step Method
$w_{i + 1} = w_i + h [\frac{3}{2}f(t_i, w_i) - \frac{1}{2}f(t_{i - 1}, w_{i - 1})]$
End of explanation
f = lambda t, w : -3 * w
w = adams_bashforth(f, 0, 2, 1, step = 20)
print(w)
Explanation: Example
Apply strongly stable method, weakly stable method, and unstable method to the initial value problem
$$
\left\{\begin{matrix}\begin{align}
& y' = -3y \\
& y(0) = 1 \\
& t\:in\:[0,2]
\end{align}\end{matrix}\right.
$$
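For reference (an illustrative check, not part of the original notebook), the exact solution is $y(t) = e^{-3t}$, so the two-step Adams-Bashforth result above can be compared directly:
```
# Illustrative accuracy check against the exact solution y(t) = exp(-3 t)
exact = math.exp(-3 * 2)  # y(2) = e^{-6}
print('exact y(2) = %.10f' % exact)
print('AB2 error  = %.2e' % abs(w - exact))
```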
End of explanation |
25 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Object Relational Tutorial
cf. https
Step1: The return value of create_engine() is an instance of Engine, and it represents the core interface to the database.
The first time a method like Engine.execute() or Engine.connect() is called, the Engine establishes a real DBAPI connection to the database, which is then used to emit the SQL.
- When using the ORM, we typically don't use the Engine directly once created; instead it's used behind the scenes by the ORM.
- lazy connecting: the Engine, when first returned by create_engine(), hasn't actually tried to connect to the database yet; that happens only the first time it's asked to perform a task against the database.
- See also Database Urls examples of create_engine()
Declare a Mapping (between database tables and our own classes)
When using ORM, the configurational process starts by describing
the database tables we're dealing with and defining our own classes that'll be mapped to those tables.
These 2 tasks are usually performed together in modern SQLAlchemy, using a system known as Declarative, which allows us to create classes that include directives to describe the actual database table they'll be mapped to.
Classes mapped using the Declarative system are defined in terms of a base class which maintains a catalog of classes and tables relative to that base, known as the declarative base class. Our application will usually have just 1 instance of this base in a commonly imported module.
Create this base class using declarative_base()
Step2: cf. https
Step3: Create your own automated conventions using helper functions and mixin classes, described in Mixin and Custom Base Classes
When class constructed, Declarative replaces all Column objects with special Python accessors known as descriptors, process known as instrumentation.
- The "instrumented" mapped class will provide us with the means to refer to our table in a SQL context as well as to persist and load values of columns from the database.
Create a Schema
With User class constructed via Declarative system, we have defined information about our table, known as table metadata.
Object used by SQLAlchemy to represent this information for a specific table is called Table object, and here Declarative has made 1 for us. We see this by inspecting __table__ attribute
Step4: When we declared our class, Declarative also created Table object according to our specifications, and associated it with class by constructing Mapper object.
Classical Mappings, any plain Python class can be mapped to any Table using mapper() function, described in Classical Mappings
Table object is a member of larger collection known as MetaData. When using Declarative, this object is available using .metadata attribute.
Step5: Minimal Table Descriptions vs. Full Descriptions
VARCHAR columns were generated without length on SQLite and PostgreSQL, this is a valid datatype, but not on others.
Length may be provided to String type
Step6: Even though we didn't specify it in ctor, id attribute produces value None when we access it (as opposed to Python's raising AttributeError for undefined attribute). SQLAlchemy's instrumentation normally produces this default value for column-mapped attributes when 1st accessed.
Creating a Session
Start talking to database; ORM's "handle" to db is the Session. When we first set up the application, at same level as our create_engine(), we define a Session class which will serve as a factory for new Session objects
Step7: In the case where your application doesn't have an Engine when you define your module-level objects, just set it up like this
Step8: Adding and Updating Objects
To persist our User object, add() it to our Session
Step9: At this point, instance is pending; no SQL has yet been issued and object isn't represented yet by row in the database.
- The Session will issue SQL to persist Ed Jones as soon as needed, process known as flush
- If we query the database for Ed Jones, all pending information will first be flushed, and the query immediately issued.
e.g. below, we create a new `Query` object which loads instances of `Users`. We "filter by" `name` attribute of `ed`
Step10: ORM concept of identity map ensures that all operations upon a particular row within a Session operate upon same set of data.
Step11: We tell Session we'd like to issue all remaining changes to the database and commit the transaction, which has been in progress throughout.
- do this via commit()
- e.g. Session emits UPDATE statement for the nickname change on "ed", as well as INSERT statements for 3 new User objects added
Step12: Session Object States
transient - User object moved from being outside the Session,
pending - inside the Session without primary key, to
persistent - actually being inserted
read Quickie Intro to Object States
Rolling Back
Step13: Rolling back, we can see that ed_user's name is back to ed, and fake_user has been kicked out of the session
Step14: Querying
Step15: The name given to a full entity such as User, assuming that multiple entities are present in the call to query(), can be controlled using aliased()
Step16: Basic operations with Query (i.e. actual SQL commands) include LIMIT and OFFSET, most conveniently using Python array slices, in conjunction with ORDER BY
Step17: or filter(), which uses more flexible SQL expression language constructs. These allow you to use regular Python operators with the class-level attributes on your mapped class
Step18: Common Filter Operators
Step19: Building a Relationship | Python Code:
# Version Check
import sqlalchemy
from pathlib import Path  # Path is used below to build the SQLite file path
print(sqlalchemy.__version__)
sqlite_engine_prefix_for_relative_paths = 'sqlite://'
sqlite_engine_prefix_for_absolute_paths = 'sqlite:///'
print(Path.cwd())
print(str(Path.cwd() / "example.db"))
# Create subdirectory if it doesn't exist
data_path = Path.cwd() / "data"
if not data_path.exists():
data_path.mkdir(mode=0o777)
print(data_path.exists())
data_path.resolve()
from sqlalchemy import create_engine
create_engine_input = \
sqlite_engine_prefix_for_absolute_paths + \
str((data_path / "example.db").resolve())
print(create_engine_input)
# Works
#create_engine_input = \
# sqlite_engine_prefix_for_relative_paths + \
# "/example.db"
print(create_engine_input)
engine = \
create_engine(create_engine_input,
echo=True)
Explanation: Object Relational Tutorial
cf. https://docs.sqlalchemy.org/en/13/orm/tutorial.html
End of explanation
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
Explanation: The return value of create_engine() is an instance of Engine, and it represents the core interface to the database.
The first time a method like Engine.execute() or Engine.connect() is called, the Engine establishes a real DBAPI connection to the database, which is then used to emit the SQL.
- When using the ORM, we typically don't use the Engine directly once created; instead it's used behind the scenes by the ORM.
- lazy connecting: the Engine, when first returned by create_engine(), hasn't actually tried to connect to the database yet; that happens only the first time it's asked to perform a task against the database.
- See also Database Urls examples of create_engine()
Declare a Mapping (between database tables and our own classes)
When using ORM, the configurational process starts by describing
the database tables we're dealing with and defining our own classes that'll be mapped to those tables.
These 2 tasks are usually performed together in modern SQLAlchemy, using a system known as Declarative, which allows us to create classes that include directives to describe the actual database table they'll be mapped to.
Classes mapped using the Declarative system are defined in terms of a base class which maintains a catalog of classes and tables relative to that base, known as the declarative base class. Our application will usually have just 1 instance of this base in a commonly imported module.
Create this base class using declarative_base():
End of explanation
from sqlalchemy import Column, Integer, String
class User(Base):
    # class using Declarative at minimum needs __tablename__ attribute
__tablename__ = 'users'
# class using Declarative needs at least 1 Column which is part of a
# primary key.
#
# SQLAlchemy never makes any assumptions by itself about table to which
# a class refers, including that it has no built-in conventions for
# names, datatypes, or constraints.
id = Column(Integer, primary_key=True)
name = Column(String)
fullname = Column(String)
nickname = Column(String)
def __repr__(self):
return "<User(name='%s', fullname='%s', nickname='%s')>" % (
self.name, self.fullname, self.nickname)
Explanation: cf. https://docs.sqlalchemy.org/en/13/orm/extensions/declarative/api.html
sqlalchemy.ext.declarative.declarative_base(bind=None, metadata=None, mapper=None, cls=<class 'object'>, name='Base', constructor=<function_declarative_constructor>, class_registry=None, metaclass=<class 'sqlalchemy.ext.declarative.api.DeclarativeMeta'>
new base class will be given a metaclass that produces appropriate Table objects and makes the appropriate mapper() calls based on information provided declaratively in the class and any subclasses of the class.
Parameters:
* bind - optional Connectable, will be assigned the bind attribute on MetaData instance.
* metadata - optional MetaData instance. All Table objects implicitly declared by subclasses of base will share this MetaData.
* class_registry - optional dictionary that'll serve as the registry of class names->mapped classes when string names are used to identify classes inside of relationship(), and others; a short illustrative sketch follows.
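For illustration only (the names custom_metadata and CustomBase are assumptions, not part of the tutorial), the optional parameters can be supplied like so:
```
from sqlalchemy import MetaData
from sqlalchemy.ext.declarative import declarative_base

custom_metadata = MetaData()
CustomBase = declarative_base(metadata=custom_metadata, name='CustomBase')
# every class deriving from CustomBase now registers its Table in custom_metadata
```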
Now that we have a "base", define any number of mapped classes in terms of it.
start with
table called "users" <-> class User map to this table "users"
End of explanation
User.__table__
Explanation: Create your own automated conventions using helper functions and mixin classes, described in Mixin and Custom Base Classes
When class constructed, Declarative replaces all Column objects with special Python accessors known as descriptors, process known as instrumentation.
- The "instrumented" mapped class will provide us with the means to refer to our table in a SQL context as well as to persist and load values of columns from the database.
Create a Schema
With User class constructed via Declarative system, we have defined information about our table, known as table metadata.
Object used by SQLAlchemy to represent this information for a specific table is called Table object, and here Declarative has made 1 for us. We see this by inspecting __table__ attribute:
End of explanation
Base.metadata
User.metadata
# MetaData.create_all() checks first presence of tables, in actual
# CREATE TABLE statement
# Note, if the database hadn't been created yet, this will literally create the database in the file system.
Base.metadata.create_all(engine)
Explanation: When we declared our class, Declarative also created Table object according to our specifications, and associated it with class by constructing Mapper object.
Classical Mappings, any plain Python class can be mapped to any Table using mapper() function, described in Classical Mappings
Table object is a member of larger collection known as MetaData. When using Declarative, this object is available using .metadata attribute.
End of explanation
ed_user = User(name='ed', fullname='Ed Jones', nickname='edsnickname')
print(ed_user.name)
ed_user.nickname
str(ed_user.id)
Explanation: Minimal Table Descriptions vs. Full Descriptions
VARCHAR columns were generated without length on SQLite and PostgreSQL, this is a valid datatype, but not on others.
Length may be provided to String type:
Column(String(50))
The length field on String as well as similar precision/scale fields available on Integer, Numeric, etc. aren't referenced by SQLAlchemy other than when creating tables.
Additionally, Firebird and Oracle require sequences to generate primary key identifiers, and SQLAlchemy doesn't generate or assume these without being instructed.
If needed, use the Sequence construct:
```
from sqlalchemy import Sequence
Column(Integer, Sequence('user_id_seq'), primary_key=True)

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, Sequence('user_id_seq'), primary_key=True)
    name = Column(String(50))
    fullname = Column(String(50))
    nickname = Column(String(50))
    def __repr__(self):
        return "<User(name='%s', fullname='%s', nickname='%s')>" % (
            self.name, self.fullname, self.nickname)
```
Create Instance of the Mapped Class
End of explanation
from sqlalchemy.orm import sessionmaker
Session = sessionmaker(bind=engine)
Explanation: Even though we didn't specify it in ctor, id attribute produces value None when we access it (as opposed to Python's raising AttributeError for undefined attribute). SQLAlchemy's instrumentation normally produces this default value for column-mapped attributes when 1st accessed.
Creating a Session
Start talking to database; ORM's "handle" to db is the Session. When we first set up the application, at same level as our create_engine(), we define a Session class which will serve as a factory for new Session objects:
End of explanation
session = Session()
Explanation: In the case where your application doesn't have an Engine when you define your module-level objects, just set it up like this:
Session = sessionmaker()
Later, when you create your engine with create_engine(), connect it to Session using configure():
Session.configure(bind=engine)
When do I construct a Session when do I commit it, and when do I close it?
When you need to have a conversation with the database, instantiate a Session; Session associated with our SQLite-enabled Engine, but hasn't opened any connections yet.
End of explanation
ed_user = User(name='ed', fullname='Ed Jones', nickname='edsnickname')
session.add(ed_user)
Explanation: Adding and Updating Objects
To persist our User object, add() it to our Session:
End of explanation
our_user = session.query(User).filter_by(name='ed').first()
our_user
# Session identified that row returned is the same row as 1 already
# represented within its internal map of objects
ed_user is our_user
Explanation: At this point, instance is pending; no SQL has yet been issued and object isn't represented yet by row in the database.
- The Session will issue SQL to persist Ed Jones as soon as needed, process known as flush
- If we query the database for Ed Jones, all pending information will first be flushed, and the query immediately issued.
e.g. below, we create a new `Query` object which loads instances of `Users`. We "filter by" `name` attribute of `ed`
End of explanation
# Add more `User` objects at once using `add_all()`
session.add_all([
User(name='wendy', fullname='Wendy Williams', nickname='windy'),
User(name='mary', fullname='Mary Contrary', nickname='mary'),
User(name='fred', fullname='Fred Flintstone', nickname='freddy')
])
# change Ed's nickname
ed_user.nickname = 'eddie'
# Session is paying attention, e.g.
session.dirty
# Session knows 3 new User objects are pending
session.new
Explanation: ORM concept of identity map ensures that all operations upon a particular row within a Session operate upon same set of data.
End of explanation
session.commit()
ed_user.id # Now is 1
ed_user
Explanation: We tell Session we'd like to issue all remaining changes to the database and commit the transaction, which has been in progress throughout.
- do this via commit()
- e.g. Session emits UPDATE statement for the nickname change on "ed", as well as INSERT statements for 3 new User objects added
End of explanation
ed_user.name = 'Edwardo'
fake_user = User(name='fakeuser', fullname='Invalid', nickname='12345')
session.add(fake_user)
# Querying session, they're flushed into current transaction
session.query(User).filter(User.name.in_(['Edwardo', 'fakeuser'])).all()
Explanation: Session Object States
transient - User object moved from being outside the Session,
pending - inside the Session without primary key, to
persistent - actually being inserted
read Quickie Intro to Object States
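As an aside (illustrative, not part of the original notebook), these states can be checked at runtime with sqlalchemy.inspect:
```
from sqlalchemy import inspect

state = inspect(ed_user)
print(state.transient, state.pending, state.persistent, state.detached)
```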
Rolling Back
End of explanation
session.rollback()
ed_user.name
fake_user in session
session.query(User).filter(User.name.in_(['ed', 'fakeuser'])).all()
Explanation: Rolling back, we can see that ed_user's name is back to ed, and fake_user has been kicked out of the session
End of explanation
for instance in session.query(User).order_by(User.id):
print(instance.name, instance.fullname)
for name, fullname in session.query(User.name, User.fullname):
print(name, fullname)
for row in session.query(User, User.name).all():
print(row.User, row.name)
for row in session.query(User.name.label('name_label')).all():
print(row.name_label)
Explanation: Querying
End of explanation
from sqlalchemy.orm import aliased
user_alias = aliased(User, name='user_alias')
for row in session.query(user_alias, user_alias.name).all():
print(row.user_alias)
Explanation: The name given to a full entity such as User, assuming that multiple entities are present in the call to query(), can be controlled using aliased()
End of explanation
for u in session.query(User).order_by(User.id)[1:3]:
print(u)
# filtering results
for name, in session.query(User.name).filter_by(fullname='Ed Jones'):
print(name)
Explanation: Basic operations with Query (i.e. actual SQL commands) include LIMIT and OFFSET, most conveniently using Python array slices, in conjunction with ORDER BY:
End of explanation
for name, in session.query(User.name).filter(User.fullname=='Ed Jones'):
print(name)
# further criteria may be added
for user in session.query(User).filter(User.name=='ed').filter(User.fullname=='Ed Jones'):
print(user)
Explanation: or filter(), which uses more flexible SQL expression language constructs. These allow you to use regular Python operators with the class-level attributes on your mapped class:
End of explanation
# equals
list(session.query(User).filter(User.name == 'ed'))
# not equals
list(session.query(User).filter(User.name != 'ed'))
# LIKE
list(session.query(User).filter(User.name.like('%ed%')))
# ILIKE (case-insensitive LIKE)
list(session.query(User).filter(User.name.ilike('%ed')))
# IN
list(session.query(User).filter(User.name.in_(['ed', 'wendy', 'jack'])))
Explanation: Common Filter Operators
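A few additional operators, shown here as an illustrative supplement (these queries are assumptions, not cells from the original notebook):
```
from sqlalchemy import and_, or_

# NOT IN
list(session.query(User).filter(~User.name.in_(['ed', 'wendy', 'jack'])))
# IS NULL / IS NOT NULL
list(session.query(User).filter(User.nickname == None))
list(session.query(User).filter(User.nickname != None))
# AND / OR
list(session.query(User).filter(and_(User.name == 'ed', User.fullname == 'Ed Jones')))
list(session.query(User).filter(or_(User.name == 'ed', User.name == 'wendy')))
```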
End of explanation
from sqlalchemy import ForeignKey
from sqlalchemy.orm import relationship
class Address(Base):
__tablename__ = 'addresses'
id = Column(Integer, primary_key=True)
email_address = Column(String, nullable=False)
# ForeignKey here expresses that values in the addresses.user_id
# column should be constrained to those values in the users.id column,
# i.e. its primary key.
user_id = Column(Integer, ForeignKey('users.id'))
user = relationship("User", back_populates="addresses")
def __repr__(self):
return "<Address(email_address='%s')>" % self.email_address
# relationship tells the ORM that Address class itself should be linked to
# User class, using the attribute Address.user
User.addresses = relationship(
"Address", order_by=Address.id, back_populates="user")
Explanation: Building a Relationship
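A possible follow-up (assumed for illustration, not part of the original cells) would create the new table and persist a User with related Address objects via the relationship defined above:
```
Base.metadata.create_all(engine)

jack = User(name='jack', fullname='Jack Bean', nickname='gjffdd')
jack.addresses = [Address(email_address='jack@google.com'),
                  Address(email_address='j25@yahoo.com')]
session.add(jack)
session.commit()
print(jack.addresses[0].user)  # back_populates keeps both sides in sync
```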
End of explanation |
26 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
UrbanAccess Demo
Author
Step1: The settings object
The settings object is a global urbanaccess_config object that can be used to set default options in UrbanAccess. In general, these options do not need to be changed.
Step2: For example, you can stop printing in notebooks and only print to console by setting
Step3: turn on printing for now
Step4: The feeds object
The GTFS feeds object is a global urbanaccess_gtfsfeeds object that allows you to save and manage information needed to download multiple GTFS feeds. This object is a dictionary of the names of GTFS feeds or agencies and the URLs to use to download the corresponding feeds.
Step5: Searching for GTFS feeds
You can use the search function to find feeds on the GTFS Data Exchange (Note
Step6: Now that we see what can be found on the GTFS Data Exchange. Let's run this again but this time let's add the feed from your search to the feed download list
Step7: If you know of a GTFS feed located elsewhere or one that is more up to date, you can add additional feeds located at custom URLs by adding a dictionary with the key as the name of the service/agency and the value as the URL.
Let's do this for AC Transit which also operates in Oakland, CA.
The link to their feed is here
Step8: Note the two GTFS feeds now in your feeds object ready to download
Step9: Downloading GTFS data
Use the download function to download all the feeds in your feeds object at once. If no parameters are specified the existing feeds object will be used to acquire the data.
By default, your data will be downloaded into the directory of this notebook in the folder
Step10: Load GTFS data into an UrbanAccess transit data object
Now that we have downloaded our data let's load our individual GTFS feeds (currently a series of text files stored on disk) into a combined network of Pandas DataFrames.
You can specify one feed or multiple feeds that are inside a root folder using the gtfsfeed_path parameter. If you want to aggregate multiple transit networks together, all the GTFS feeds you want to aggregate must be inside of a single root folder.
Turn on validation and set a bounding box with the remove_stops_outsidebbox parameter turned on to ensure all your GTFS feed data are within a specified area.
Let's specify a bounding box of coordinates for the City of Oakland to subset the GTFS data to. You can generate a bounding box by going to http
Step11: The transit data object
The output is a global urbanaccess_gtfs_df object that can be accessed with the specified variable loaded_feeds. This object holds all the individual GTFS feed files aggregated together with each GTFS feed file type in separate Pandas DataFrames to represent all the loaded transit feeds in a metropolitan area.
Step12: Note the two transit services we have aggregated into one regional table
Step13: Quickly view the transit stop locations
Step14: Create a transit network
Now that we have loaded and standardized our GTFS data, let's create a travel time weighted graph from the GTFS feeds we have loaded.
Create a network for weekday monday service between 7 am and 10 am (['07
Step15: The UrbanAccess network object
The output is a global urbanaccess_network object. This object holds the resulting graph comprised of nodes and edges for the processed GTFS network data for services operating at the day and time you specified inside of transit_edges and transit_nodes.
Let's set the global network object to a variable called urbanaccess_net that we can then inspect
Step16: Download OSM data
Now let's download OpenStreetMap (OSM) pedestrian street network data to produce a graph network of nodes and edges for Oakland, CA. We will use the same bounding box as before.
Step17: Create a pedestrian network
Now that we have our pedestrian network data let's create a travel time weighted graph from the pedestrian network we have loaded and add it to our existing UrbanAccess network object. We will assume a pedestrian travels on average at 3 mph.
The resulting weighted network will be added to your UrbanAccess network object inside osm_nodes and osm_edges
Step18: Let's inspect the results which we can access inside of the existing urbanaccess_net variable
Step19: Create an integrated transit and pedestrian network
Now let's integrate the two networks together. The resulting graph will be added to your existing UrbanAccess network object. After running this step, your network will be ready to be used with Pandana.
The resulting integrated network will be added to your UrbanAccess network object inside net_nodes and net_edges
Step20: Let's inspect the results which we can access inside of the existing urbanaccess_net variable
Step21: Save the network to disk
You can save the final processed integrated network net_nodes and net_edges to disk inside of a HDF5 file. By default the file will be saved to the directory of this notebook in the folder data
Step22: Load saved network from disk
You can load an existing processed integrated network HDF5 file from disk into a UrbanAccess network object.
Step23: Visualize the network
You can visualize the network you just created using basic UrbanAccess plot functions
Integrated network
Step24: Integrated network by travel time
Use the col_colors function to color edges by travel time. In this case the darker red the higher the travel times.
Note the ability to see AC Transit's major bus arterial routes (in darker red) and transfer locations and BART rail network (rail stations are visible by the multiple bus connections at certain junctions in the network most visible in downtown Oakland at 19th, 12th Street, and Lake Merritt stations and Fruitvale and Coliseum stations) with the underlying pedestrian network. Downtown Oakland is located near the white cutout in the northeast middle section of the network which represents Lake Merritt.
Step25: Let's zoom in closer to downtown Oakland using a new smaller extent bbox. Note the bus routes on the major arterials and the BART routes from station to station.
Step26: Transit network
You can also slice the network by network type
Step27: Pedestrian network
Step28: Transit network
Step29: Transit network
Step30: Add average headways to network travel time
Calculate route stop level headways
The network we have generated so far only contains pure travel times. UrbanAccess allows for the calculation of and addition of route stop level average headways to the network. This is used as a proxy for passenger wait times at stops and stations. The route stop level average headway are added to the pedestrian to transit connector edges.
Let's calculate headways for the same AM Peak time period. Statistics on route stop level headways will be added to your GTFS transit data object inside of headways
Step31: Add the route stop level average headways to your integrated network
Now that headways have been calculated and added to your GTFS transit feed object, you can use them to generate a new integrated network that incorporates the headways within the pedestrian to transit connector edge travel times.
Step32: Integrated network by travel time with average headways
Step33: Using an UrbanAccess network with Pandana
Pandana (Pandas Network Analysis) is a tool to compute network accessibility metrics.
Now that we have an integrated transit and pedestrian network that has been formatted for use with Pandana, we can now use Pandana right away to compute accessibility metrics.
There are a couple of things to remember about UrbanAccess and Pandana
Step34: Let's subset the Census data to just be the bounding box for Oakland
Step35: Initialize the Pandana network
Let's initialize our Pandana network object using our transit and pedestrian network we created. Note
Step36: Now let's set our blocks on to the network
Step37: Calculate cumulative accessibility
Now let's compute an accessibility metric, in this case a cumulative accessibility metric. See Pandana for other metrics that can be calculated.
Let's set the block variables we want to use as our accessibility metric on the Pandana network. In this case let's use jobs
Step38: Now let's run an cumulative accessibility query using our network and the jobs variable for three different travel time thresholds
Step39: Quickly visualize the accessibility query results. As expected, note that a travel time of 15 minutes results in a lower number of jobs accessible at each network node.
Step40: Jobs accessible within 15 minutes
Note how the radius of the number of jobs accessible expands as the time threshold increases where high accessibility is indicated in dark red. You can easily see downtown Oakland has the highest accessibility due to a convergence of transit routes and because downtown is where the majority of jobs in the area are located. Other high accessibility areas are visible elsewhere directly adjacent to BART metro rail stations of West Oakland, Fruitvale, and Coliseum and AC Transit bus routes on the main arterial road corridors.
Step41: Jobs accessible within 30 minutes
Step42: Jobs accessible within 45 minutes | Python Code:
import matplotlib
matplotlib.use('agg') # allows notebook to be tested in Travis
import pandas as pd
import cartopy.crs as ccrs
import cartopy
import matplotlib.pyplot as plt
import pandana as pdna
import time
import urbanaccess as ua
from urbanaccess.config import settings
from urbanaccess.gtfsfeeds import feeds
from urbanaccess import gtfsfeeds
from urbanaccess.gtfs.gtfsfeeds_dataframe import gtfsfeeds_dfs
from urbanaccess.network import ua_network, load_network
%matplotlib inline
Explanation: UrbanAccess Demo
Author: UrbanSim
This notebook provides a brief overview of the main functionality of UrbanAccess with examples using AC Transit and BART GTFS data and OpenStreetMap (OSM) pedestrian network data to create an integrated transit and pedestrian network for Oakland, CA for use in Pandana network accessibility queries.
UrbanAccess on UDST: https://github.com/UDST/urbanaccess
UrbanAccess documentation: https://udst.github.io/urbanaccess/index.html
UrbanAccess citation:
Samuel D. Blanchard and Paul Waddell, 2017, "UrbanAccess: Generalized Methodology for Measuring Regional Accessibility with an Integrated Pedestrian and Transit Network" Transportation Research Record: Journal of the Transportation Research Board, 2653: 35–44.
Notes:
- GTFS feeds are constantly updated. The feeds in this notebook may change over time which may result in slight differences in results.
- Output cells in this notebook have been cleared to reduce file size.
Installation:
For UrbanAccess installation instructions see: https://udst.github.io/urbanaccess/installation.html
This notebook contains optional Pandana examples which require the installation of Pandana, for instructions see here: http://udst.github.io/pandana/installation.html
Outline:
The settings object
The feeds object and searching for GTFS feeds
Downloading GTFS data
Loading GTFS data into a UrbanAccess transit data object
Creating a transit network
Downloading OSM data
Creating a pedestrian network
Creating an integrated transit and pedestrian network
Saving a network to disk
Loading a network from disk
Visualizing the network
Adding average headways to network travel time
Using an UrbanAccess network with Pandana
End of explanation
settings.to_dict()
Explanation: The settings object
The settings object is a global urbanaccess_config object that can be used to set default options in UrbanAccess. In general, these options do not need to be changed.
End of explanation
settings.log_console = True
Explanation: For example, you can stop printing in notebooks and only print to console by setting:
End of explanation
settings.log_console = False
Explanation: turn on printing for now
End of explanation
feeds.to_dict()
Explanation: The feeds object
The GTFS feeds object is a global urbanaccess_gtfsfeeds object that allows you to save and manage information needed to download multiple GTFS feeds. This object is a dictionary of the names of GTFS feeds or agencies and the URLs to use to download the corresponding feeds.
End of explanation
gtfsfeeds.search(search_text='Bay Area Rapid Transit',
search_field=None,
match='contains')
Explanation: Searching for GTFS feeds
You can use the search function to find feeds on the GTFS Data Exchange (Note: the GTFS Data Exchange is no longer being maintained as of Summer 2016 so feeds here may be out of date)
Let's search for feeds for transit agencies in the GTFS Data Exchange that we know serve Oakland, CA: 1) Bay Area Rapid Transit District (BART) which runs the metro rail service and 2) AC Transit which runs bus services.
Let's start by finding the feed for the Bay Area Rapid Transit District (BART) by using the search term Bay Area Rapid Transit:
End of explanation
gtfsfeeds.search(search_text='Bay Area Rapid Transit',
search_field=None,
match='contains',
add_feed=True)
Explanation: Now that we see what can be found on the GTFS Data Exchange. Let's run this again but this time let's add the feed from your search to the feed download list
End of explanation
feeds.add_feed(add_dict={'ac transit': 'http://www.actransit.org/wp-content/uploads/GTFSJune182017B.zip'})
Explanation: If you know of a GTFS feed located elsewhere or one that is more up to date, you can add additional feeds located at custom URLs by adding a dictionary with the key as the name of the service/agency and the value as the URL.
Let's do this for AC Transit which also operates in Oakland, CA.
The link to their feed is here: http://www.actransit.org/planning-focus/data-resource-center/ and let's get the latest version as of June 18, 2017
End of explanation
feeds.to_dict()
Explanation: Note the two GTFS feeds now in your feeds object ready to download
End of explanation
gtfsfeeds.download()
Explanation: Downloading GTFS data
Use the download function to download all the feeds in your feeds object at once. If no parameters are specified the existing feeds object will be used to acquire the data.
By default, your data will be downloaded into the directory of this notebook in the folder: data
End of explanation
validation = True
verbose = True
# bbox for City of Oakland
bbox = (-122.355881,37.632226,-122.114775,37.884725)
remove_stops_outsidebbox = True
append_definitions = True
loaded_feeds = ua.gtfs.load.gtfsfeed_to_df(gtfsfeed_path=None,
validation=validation,
verbose=verbose,
bbox=bbox,
remove_stops_outsidebbox=remove_stops_outsidebbox,
append_definitions=append_definitions)
Explanation: Load GTFS data into an UrbanAccess transit data object
Now that we have downloaded our data let's load our individual GTFS feeds (currently a series of text files stored on disk) into a combined network of Pandas DataFrames.
You can specify one feed or multiple feeds that are inside a root folder using the gtfsfeed_path parameter. If you want to aggregate multiple transit networks together, all the GTFS feeds you want to aggregate must be inside of a single root folder.
Turn on validation and set a bounding box with the remove_stops_outsidebbox parameter turned on to ensure all your GTFS feed data are within a specified area.
Let's specify a bounding box of coordinates for the City of Oakland to subset the GTFS data to. You can generate a bounding box by going to http://boundingbox.klokantech.com/ and selecting the CSV format.
End of explanation
loaded_feeds.stops.head()
Explanation: The transit data object
The output is a global urbanaccess_gtfs_df object that can be accessed with the specified variable loaded_feeds. This object holds all the individual GTFS feed files aggregated together with each GTFS feed file type in separate Pandas DataFrames to represent all the loaded transit feeds in a metropolitan area.
End of explanation
loaded_feeds.stops.unique_agency_id.unique()
Explanation: Note the two transit services we have aggregated into one regional table
End of explanation
loaded_feeds.stops.plot(kind='scatter', x='stop_lon', y='stop_lat', s=0.1)
loaded_feeds.routes.head()
loaded_feeds.stop_times.head()
loaded_feeds.trips.head()
loaded_feeds.calendar.head()
Explanation: Quickly view the transit stop locations
End of explanation
ua.gtfs.network.create_transit_net(gtfsfeeds_dfs=loaded_feeds,
day='monday',
timerange=['07:00:00', '10:00:00'],
calendar_dates_lookup=None)
Explanation: Create a transit network
Now that we have loaded and standardized our GTFS data, let's create a travel time weighted graph from the GTFS feeds we have loaded.
Create a network for weekday monday service between 7 am and 10 am (['07:00:00', '10:00:00']) to represent travel times during the AM Peak period.
Assumptions: We are using the service ids in the calendar file to subset the day of week, however if your feed uses the calendar_dates file and not the calendar file then you can use the calendar_dates_lookup parameter. This is not required for AC Transit and BART.
End of explanation
urbanaccess_net = ua.network.ua_network
urbanaccess_net.transit_edges.head()
urbanaccess_net.transit_nodes.head()
urbanaccess_net.transit_nodes.plot(kind='scatter', x='x', y='y', s=0.1)
Explanation: The UrbanAccess network object
The output is a global urbanaccess_network object. This object holds the resulting graph comprised of nodes and edges for the processed GTFS network data for services operating at the day and time you specified inside of transit_edges and transit_nodes.
Let's set the global network object to a variable called urbanaccess_net that we can then inspect:
End of explanation
nodes, edges = ua.osm.load.ua_network_from_bbox(bbox=bbox,
remove_lcn=True)
Explanation: Download OSM data
Now let's download OpenStreetMap (OSM) pedestrian street network data to produce a graph network of nodes and edges for Oakland, CA. We will use the same bounding box as before.
End of explanation
ua.osm.network.create_osm_net(osm_edges=edges,
osm_nodes=nodes,
travel_speed_mph=3)
Explanation: Create a pedestrian network
Now that we have our pedestrian network data let's create a travel time weighted graph from the pedestrian network we have loaded and add it to our existing UrbanAccess network object. We will assume a pedestrian travels on average at 3 mph.
The resulting weighted network will be added to your UrbanAccess network object inside osm_nodes and osm_edges
End of explanation
urbanaccess_net.osm_nodes.head()
urbanaccess_net.osm_edges.head()
urbanaccess_net.osm_nodes.plot(kind='scatter', x='x', y='y', s=0.1)
Explanation: Let's inspect the results which we can access inside of the existing urbanaccess_net variable:
End of explanation
ua.network.integrate_network(urbanaccess_network=urbanaccess_net,
headways=False)
Explanation: Create an integrated transit and pedestrian network
Now let's integrate the two networks together. The resulting graph will be added to your existing UrbanAccess network object. After running this step, your network will be ready to be used with Pandana.
The resulting integrated network will be added to your UrbanAccess network object inside net_nodes and net_edges
End of explanation
urbanaccess_net.net_nodes.head()
urbanaccess_net.net_edges.head()
urbanaccess_net.net_edges[urbanaccess_net.net_edges['net_type'] == 'transit'].head()
Explanation: Let's inspect the results which we can access inside of the existing urbanaccess_net variable:
End of explanation
ua.network.save_network(urbanaccess_network=urbanaccess_net,
filename='final_net.h5',
overwrite_key = True)
Explanation: Save the network to disk
You can save the final processed integrated network net_nodes and net_edges to disk inside of a HDF5 file. By default the file will be saved to the directory of this notebook in the folder data
End of explanation
urbanaccess_net = ua.network.load_network(filename='final_net.h5')
Explanation: Load saved network from disk
You can load an existing processed integrated network HDF5 file from disk into a UrbanAccess network object.
End of explanation
ua.plot.plot_net(nodes=urbanaccess_net.net_nodes,
edges=urbanaccess_net.net_edges,
bbox=bbox,
fig_height=30, margin=0.02,
edge_color='#999999', edge_linewidth=1, edge_alpha=1,
node_color='black', node_size=1.1, node_alpha=1, node_edgecolor='none', node_zorder=3, nodes_only=False)
Explanation: Visualize the network
You can visualize the network you just created using basic UrbanAccess plot functions
Integrated network
End of explanation
edgecolor = ua.plot.col_colors(df=urbanaccess_net.net_edges, col='weight', cmap='gist_heat_r', num_bins=5)
ua.plot.plot_net(nodes=urbanaccess_net.net_nodes,
edges=urbanaccess_net.net_edges,
bbox=bbox,
fig_height=30, margin=0.02,
edge_color=edgecolor, edge_linewidth=1, edge_alpha=0.7,
node_color='black', node_size=0, node_alpha=1, node_edgecolor='none', node_zorder=3, nodes_only=False)
Explanation: Integrated network by travel time
Use the col_colors function to color edges by travel time. In this case the darker red the higher the travel times.
Note the ability to see AC Transit's major bus arterial routes (in darker red) and transfer locations and BART rail network (rail stations are visible by the multiple bus connections at certain junctions in the network most visible in downtown Oakland at 19th, 12th Street, and Lake Merritt stations and Fruitvale and Coliseum stations) with the underlying pedestrian network. Downtown Oakland is located near the white cutout in the northeast middle section of the network which represents Lake Merritt.
End of explanation
edgecolor = ua.plot.col_colors(df=urbanaccess_net.net_edges, col='weight', cmap='gist_heat_r', num_bins=5)
ua.plot.plot_net(nodes=urbanaccess_net.net_nodes,
edges=urbanaccess_net.net_edges,
bbox=(-122.282295, 37.795, -122.258434, 37.816022),
fig_height=30, margin=0.02,
edge_color=edgecolor, edge_linewidth=1, edge_alpha=0.7,
node_color='black', node_size=0, node_alpha=1, node_edgecolor='none', node_zorder=3, nodes_only=False)
Explanation: Let's zoom in closer to downtown Oakland using a new smaller extent bbox. Note the bus routes on the major arterials and the BART routes from station to station.
End of explanation
ua.plot.plot_net(nodes=urbanaccess_net.net_nodes,
edges=urbanaccess_net.net_edges[urbanaccess_net.net_edges['net_type']=='transit'],
bbox=None,
fig_height=30, margin=0.02,
edge_color='#999999', edge_linewidth=1, edge_alpha=1,
node_color='black', node_size=0, node_alpha=1, node_edgecolor='none', node_zorder=3, nodes_only=False)
Explanation: Transit network
You can also slice the network by network type
End of explanation
ua.plot.plot_net(nodes=urbanaccess_net.net_nodes,
edges=urbanaccess_net.net_edges[urbanaccess_net.net_edges['net_type']=='walk'],
bbox=None,
fig_height=30, margin=0.02,
edge_color='#999999', edge_linewidth=1, edge_alpha=1,
node_color='black', node_size=0, node_alpha=1, node_edgecolor='none', node_zorder=3, nodes_only=False)
Explanation: Pedestrian network
End of explanation
urbanaccess_net.net_edges['unique_route_id'].unique()
ua.plot.plot_net(nodes=urbanaccess_net.net_nodes,
edges=urbanaccess_net.net_edges[urbanaccess_net.net_edges['unique_route_id']=='51A-141_ac_transit'],
bbox=bbox,
fig_height=30, margin=0.02,
edge_color='#999999', edge_linewidth=1, edge_alpha=1,
node_color='black', node_size=0, node_alpha=1, node_edgecolor='none', node_zorder=3, nodes_only=False)
Explanation: Transit network: AC Transit Route 51A
You can slice the network using any attribute in edges. In this case let's examine one route for AC Transit route 51A.
Looking at what routes are in the network for 51A we see route id: 51A-141_ac_transit
End of explanation
urbanaccess_net.net_edges['unique_agency_id'].unique()
ua.plot.plot_net(nodes=urbanaccess_net.net_nodes,
edges=urbanaccess_net.net_edges[urbanaccess_net.net_edges['unique_agency_id']=='bay_area_rapid_transit'],
bbox=bbox,
fig_height=30, margin=0.02,
edge_color='#999999', edge_linewidth=1, edge_alpha=1,
node_color='black', node_size=0, node_alpha=1, node_edgecolor='none', node_zorder=3, nodes_only=False)
Explanation: Transit network: BART network
We can also slice the data by agency. In this case let's view all BART routes.
Looking at what agencies are in the network for BART we see agency id: bay_area_rapid_transit
End of explanation
ua.gtfs.headways.headways(gtfsfeeds_df=loaded_feeds,
headway_timerange=['07:00:00','10:00:00'])
loaded_feeds.headways.head()
Explanation: Add average headways to network travel time
Calculate route stop level headways
The network we have generated so far only contains pure travel times. UrbanAccess allows for the calculation of and addition of route stop level average headways to the network. This is used as a proxy for passenger wait times at stops and stations. The route stop level average headway are added to the pedestrian to transit connector edges.
Let's calculate headways for the same AM Peak time period. Statistics on route stop level headways will be added to your GTFS transit data object inside of headways
End of explanation
ua.network.integrate_network(urbanaccess_network=urbanaccess_net,
headways=True,
urbanaccess_gtfsfeeds_df=loaded_feeds,
headway_statistic='mean')
Explanation: Add the route stop level average headways to your integrated network
Now that headways have been calculated and added to your GTFS transit feed object, you can use them to generate a new integrated network that incorporates the headways within the pedestrian to transit connector edge travel times.
End of explanation
edgecolor = ua.plot.col_colors(df=urbanaccess_net.net_edges, col='weight', cmap='gist_heat_r', num_bins=5)
ua.plot.plot_net(nodes=urbanaccess_net.net_nodes,
edges=urbanaccess_net.net_edges,
bbox=bbox,
fig_height=30, margin=0.02,
edge_color=edgecolor, edge_linewidth=1, edge_alpha=0.7,
node_color='black', node_size=0, node_alpha=1, node_edgecolor='none', node_zorder=3, nodes_only=False)
Explanation: Integrated network by travel time with average headways
End of explanation
blocks = pd.read_hdf('bay_area_demo_data.h5','blocks')
# remove blocks that contain all water
blocks = blocks[blocks['square_meters_land'] != 0]
print('Total number of blocks: {:,}'.format(len(blocks)))
blocks.head()
Explanation: Using an UrbanAccess network with Pandana
Pandana (Pandas Network Analysis) is a tool to compute network accessibility metrics.
Now that we have an integrated transit and pedestrian network that has been formatted for use with Pandana, we can now use Pandana right away to compute accessibility metrics.
There are a couple of things to remember about UrbanAccess and Pandana:
- UrbanAccess generates by default a one way network. One way means there is an explicit edge for each direction in the edge table. Where applicable, it is important to set any Pandana two_way parameters to False (they are True by default) to indicate that the network is a one way network.
- As of Pandana v0.3.0, node ids and from and to columns in your network must be integer type and not string. UrbanAccess automatically generates both string and integer types so use the from_int and to_int columns in edges and the index in nodes id_int.
- UrbanAccess by default will generate edge weights that represent travel time in units of minutes.
For more on Pandana see the:
Pandana repo: https://github.com/UDST/pandana
Pandana documentation: http://udst.github.io/pandana/
Load Census block data
Let's load 2010 Census block data for the 9 county Bay Area. Note: These data have been processed from original Census and LEHD data.
The data is located in the demo folder on the repo with this notebook.
End of explanation
lng_max, lat_min, lng_min, lat_max = bbox
outside_bbox = blocks.loc[~(((lng_max < blocks["x"]) & (blocks["x"] < lng_min)) & ((lat_min < blocks["y"]) & (blocks["y"] < lat_max)))]
blocks_subset = blocks.drop(outside_bbox.index)
print('Total number of subset blocks: {:,}'.format(len(blocks_subset)))
blocks_subset.plot(kind='scatter', x='x', y='y', s=0.1)
Explanation: Let's subset the Census data to just be the bounding box for Oakland
End of explanation
s_time = time.time()
transit_ped_net = pdna.Network(urbanaccess_net.net_nodes["x"],
urbanaccess_net.net_nodes["y"],
urbanaccess_net.net_edges["from_int"],
urbanaccess_net.net_edges["to_int"],
urbanaccess_net.net_edges[["weight"]],
twoway=False)
print('Took {:,.2f} seconds'.format(time.time() - s_time))
Explanation: Initialize the Pandana network
Let's initialize our Pandana network object using our transit and pedestrian network we created. Note: the from_int and to_int columns as well as twoway=False denoting this is an explicit one way network.
End of explanation
blocks_subset['node_id'] = transit_ped_net.get_node_ids(blocks_subset['x'], blocks_subset['y'])
Explanation: Now let's set our blocks on to the network
End of explanation
transit_ped_net.set(blocks_subset.node_id, variable = blocks_subset.jobs, name='jobs')
Explanation: Calculate cumulative accessibility
Now let's compute an accessibility metric, in this case a cumulative accessibility metric. See Pandana for other metrics that can be calculated.
Let's set the block variable we want to use as our accessibility metric on the Pandana network. In this case, let's use jobs.
End of explanation
s_time = time.time()
jobs_45 = transit_ped_net.aggregate(45, type='sum', decay='linear', name='jobs')
jobs_30 = transit_ped_net.aggregate(30, type='sum', decay='linear', name='jobs')
jobs_15 = transit_ped_net.aggregate(15, type='sum', decay='linear', name='jobs')
print('Took {:,.2f} seconds'.format(time.time() - s_time))
Explanation: Now let's run a cumulative accessibility query using our network and the jobs variable for three different travel time thresholds: 15, 30, and 45 minutes.
Note: Depending on network size, radius threshold, computer processing power, and whether or not you are using multiple cores the compute process may take some time.
End of explanation
print(jobs_45.head())
print(jobs_30.head())
print(jobs_15.head())
Explanation: Quickly visualize the accessibility query results. As expected, note that a travel time of 15 minutes results in a lower number of jobs accessible at each network node.
End of explanation
s_time = time.time()
fig = plt.subplots(figsize=(20,20))
data_crs = ccrs.PlateCarree()
ax = plt.axes(projection=ccrs.epsg(26943))
ax.add_feature(cartopy.feature.GSHHSFeature(scale='full'), edgecolor='grey')
plt.scatter(transit_ped_net.nodes_df.x, transit_ped_net.nodes_df.y,
c=jobs_15, s=4, cmap='gist_heat_r', edgecolor='none', transform=data_crs)
cb = plt.colorbar()
print('Took {:,.2f} seconds'.format(time.time() - s_time))
Explanation: Jobs accessible within 15 minutes
Note how the area within which jobs are accessible expands as the time threshold increases; high accessibility is indicated in dark red. You can easily see that downtown Oakland has the highest accessibility, due to a convergence of transit routes and because downtown holds the majority of jobs in the area. Other high accessibility areas are visible directly adjacent to the BART metro rail stations at West Oakland, Fruitvale, and Coliseum, and along AC Transit bus routes on the main arterial road corridors.
End of explanation
s_time = time.time()
fig = plt.subplots(figsize=(20,20))
data_crs = ccrs.PlateCarree()
ax = plt.axes(projection=ccrs.epsg(26943))
ax.add_feature(cartopy.feature.GSHHSFeature(scale='full'), edgecolor='grey')
plt.scatter(transit_ped_net.nodes_df.x, transit_ped_net.nodes_df.y,
c=jobs_30, s=4, cmap='gist_heat_r', edgecolor='none', transform=data_crs)
cb = plt.colorbar()
print('Took {:,.2f} seconds'.format(time.time() - s_time))
Explanation: Jobs accessible within 30 minutes
End of explanation
s_time = time.time()
fig = plt.subplots(figsize=(20,20))
data_crs = ccrs.PlateCarree()
ax = plt.axes(projection=ccrs.epsg(26943))
ax.add_feature(cartopy.feature.GSHHSFeature(scale='full'), edgecolor='grey')
plt.scatter(transit_ped_net.nodes_df.x, transit_ped_net.nodes_df.y,
c=jobs_45, s=4, cmap='gist_heat_r', edgecolor='none', transform=data_crs)
cb = plt.colorbar()
print('Took {:,.2f} seconds'.format(time.time() - s_time))
Explanation: Jobs accessible within 45 minutes
End of explanation |
27 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-2', 'sandbox-1', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: TEST-INSTITUTE-2
Source ID: SANDBOX-1
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:44
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
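As an illustration only (not part of the generated template), a completed property cell might look like the following, reusing the example model name quoted above; substitute your model's actual code name.
# hypothetical filled-in example, for illustration only
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
DOC.set_value("PISCES 2.0")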
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
28 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
K-means with SDSS data
Machine learning exercise by Group 1 at AstroHackWeek 2017, Day 1.
First, we blatantly copy some of the code from the demo-SDSS notebook...
Step1: Pull color information out of the photoPosPlate data file for u-g, g-r, r-i, and i-z colors
Step2: Let's take a look at how the spectral data was classified by SDSS into galaxies, QSOs, or stars
Step3: Hopefully our K-means clustering will show us that the dataset breaks into somewhat similarly-shaped pieces in color-color space.
Running K-means
To get an idea for how our data for K-means should be structured, we refer to the example at http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html
Step4: At a glance, it looks like clusters 0, 4, and 6 are mostly galaxies, clusters 1 and 2 are weird outliers, cluster 3 is QSOs (plus some stellar contamination?), and clusters 5 and 7 are mostly stars. We could almost certainly refine this better given more time.
Troubleshooting the outliers in clusters 1 and 2
Get the indices corresponding to each K-means label 0, 1, and 2 for comparison | Python Code:
from os import path
from astropy.table import Table
import h5py
import matplotlib.pyplot as plt
#plt.style.use('notebook.mplstyle')
%matplotlib inline
import numpy as np
from sklearn.cluster import KMeans
data_path = '/Users/Meredith/Astronomy/astrohack/ahw2017-ml-data/' # specific to my computer
photoPos = Table.read(path.join(data_path, 'sdss', 'photoPosPlate-merged.hdf5'),
path='photoPosPlate')
len(photoPos)
Explanation: K-means with SDSS data
Machine learning exercise by Group 1 at AstroHackWeek 2017, Day 1.
First, we blatantly copy some of the code from the demo-SDSS notebook...
End of explanation
# 01234 = ugriz filters
u_g = photoPos['PSFMAG'][:,0] - photoPos['PSFMAG'][:,1]
g_r = photoPos['PSFMAG'][:,1] - photoPos['PSFMAG'][:,2]
r_i = photoPos['PSFMAG'][:,2] - photoPos['PSFMAG'][:,3]
i_z = photoPos['PSFMAG'][:,3] - photoPos['PSFMAG'][:,4]
Explanation: Pull color information out of the photoPosPlate data file for u-g, g-r, r-i, and i-z colors
End of explanation
specObj = Table.read(path.join(data_path, 'sdss', 'specObj-merged.hdf5'),
path='specObj')
spec_class = specObj['CLASS'].astype(str)
spec_classes = np.unique(spec_class)
for cls in spec_classes:
print(cls, (spec_class == cls).sum())
fig, axes = plt.subplots(1, len(spec_classes), figsize=(12.5,5),
sharex=True, sharey=True)
for i, cls in enumerate(spec_classes):
axes[i].plot(g_r[spec_class == cls], r_i[spec_class == cls],
marker='.', linestyle='none', alpha=0.1)
axes[i].set_title(cls)
axes[i].set_xlabel('$g-r$ [mag]')
axes[0].set_xlim(-0.5, 2.5)
axes[0].set_ylim(-1, 2)
axes[0].set_ylabel('$r-i$ [mag]')
fig.tight_layout()
Explanation: Let's take a look at how the spectral data was classified by SDSS into galaxies, QSOs, or stars
End of explanation
X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [4, 4], [4, 0]])
X.shape # (number of data points X number of things per data point)
colors = np.array([u_g, g_r, r_i, i_z]).T # put into the same shape as X
colors.shape
n_clusters = 8 # the number of clusters to use
kmeans = KMeans(n_clusters=n_clusters, random_state=0).fit(colors) # run the K-means analysis
print(kmeans.labels_) # the label from 0 to n_clusters assigned to each point
print(kmeans.cluster_centers_)
# make a new plot for each cluster center
for k in range(n_clusters):
plt.figure(figsize=(5,5))
idx = (kmeans.labels_ == k)
plt.scatter(g_r[idx], r_i[idx], alpha=0.1, marker='.')
plt.xlabel('$g - r$ [mag]')
plt.ylabel('$r - i$ [mag]')
plt.xlim([-0.5, 2.5])
plt.ylim([-1, 2])
plt.title('cluster label ' + str(k))
Explanation: Hopefully our K-means clustering will show us that the dataset breaks into somewhat similarly-shaped pieces in color-color space.
Running K-means
To get an idea for how our data for K-means should be structured, we refer to the example at http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html
End of explanation
zeroidx = np.where((kmeans.labels_ == 0))
oneidx = np.where((kmeans.labels_ == 1))
twoidx = np.where((kmeans.labels_ == 2))
print(len(zeroidx[0]))
print(len(oneidx[0]))
print(len(twoidx[0]))
plt.figure(figsize=(5,5))
plt.plot(g_r, r_i, alpha=0.1, ls='None', marker='.') # full dataset
plt.plot(g_r[oneidx], r_i[oneidx], ls='None', marker='o', mec='k') # problem outlier 1
plt.plot(g_r[twoidx], r_i[twoidx], ls='None', marker='o', mec='k') # problem outlier 2
plt.xlabel('$g - r$ [mag]')
plt.ylabel('$r - i$ [mag]')
plt.xlim([-0.5, 2.5])
plt.ylim([-1, 2])
Explanation: At a glance, it looks like clusters 0, 4, and 6 are mostly galaxies, clusters 1 and 2 are weird outliers, cluster 3 is QSOs (plus some stellar contamination?), and clusters 5 and 7 are mostly stars. We could almost certainly refine this better given more time.
Troubleshooting the outliers in clusters 1 and 2
Get the indices corresponding to each K-means label 0, 1, and 2 for comparison
End of explanation |
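One possible refinement hinted at above -- not part of the original exercise -- is to standardize the colors before clustering, so that no single color index dominates the Euclidean distances K-means uses. A minimal sketch, reusing the colors array defined earlier:
# sketch: rescale each color to zero mean and unit variance before clustering
from sklearn.preprocessing import StandardScaler
scaled_colors = StandardScaler().fit_transform(colors)
kmeans_scaled = KMeans(n_clusters=n_clusters, random_state=0).fit(scaled_colors)
print(np.bincount(kmeans_scaled.labels_))  # points per cluster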
29 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finding degrees of freedom for a single mixture
$$
-\psi \bigg(\frac{v}{2} \bigg) + log \bigg(\frac{v}{2} \bigg) + 1 + \psi \bigg(\frac{v^{(k)} + p}{2} \bigg) - log \bigg(\frac{v^{(k)} + p}{2} \bigg) + \frac{1}{n}\sum_{j=1}^n \bigg[ log(u_j^{(k)}) - u_j^{(k)} \bigg] = 0
$$
Source <br />
* G. J. McLachlan, T. Krishnan; The EM Algorithm and Extensions; 5.8.2; pg. 177.
Finding degrees of freedom with n mixtures
$$
-\psi \bigg(\frac{v_i}{2} \bigg) + log \bigg(\frac{v_i}{2} \bigg) + 1 + \psi \bigg(\frac{v_i^{(k)} + p}{2} \bigg) - log \bigg(\frac{v_i^{(k)} + p}{2} \bigg) + \
\frac{1}{n_i^{(k)}}\sum_{j=1}^n \tau_{ij}^{(k)} \bigg[ log(u_{ij}^{(k)}) - u_{ij}^{(k)} \bigg] = 0
$$
Where
$$
n_i^{(k)} = \sum_{j=1}^n \tau_{ij}^{(k)}
$$
Source<br />
* D. Peel, G. J. McLachlan; Robust mixture modelling using the t distribution. Statistics and Computing (2000) 10, 339-348.
7 M-Step; pg. 343.
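The code below solves this equation for v by scanning a small grid of candidate values. An equivalent approach, sketched here only for reference and assuming the expression changes sign inside the bracket, is to hand the same expression to a bracketing root finder such as scipy.optimize.brentq.
# sketch: solve find_df(v, p, u, tau) = 0 for v with a root finder instead of a grid scan
# (find_df, p01, u01 and wp1 are defined in the code below; brentq requires a sign change on [0.5, 200])
from scipy.optimize import brentq
v_new = brentq(lambda v: find_df(v, p01, u01, wp1), 0.5, 200.0)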
Step1: Expectation Maximization with Mixtures
Implementation of a mixture model using the t distribution.
Source
D. Peel, G. J. McLachlan; Robust mixture modelling using the t distribution. Statistics and Computing (2000) 10, 339-348.
Generating a sample
I'll generate two samples with distinct parameters and merge them into one.
Step2: Plotting the sample with actual parameters
Step3: Estimating parameters | Python Code:
# Standard imports (multivariate_t, multivariate_t_rvs and m_step are helper
# functions/classes assumed to be defined in earlier cells of this notebook).
import time
import numpy as np
import matplotlib.pyplot as plt
from numpy import log
from numpy.linalg import inv
from scipy.special import digamma

def find_df(v, p, u, tau):
return -digamma(v/2.) + log(v/2.) + (tau * (log(u) - u)).sum()/tau.sum() + 1 + (digamma((v+p)/2.)-log((v+p)/2.))
u_test = np.array([[1,1], [2,2], [3,3]])
tau_test = np.array([[4,4], [5,5], [6,6]])
find_df(1, 2, u_test, tau_test)
def get_random(X):
size = len(X)
idx = np.random.choice(range(size))
return X[idx]
Explanation: Finding degrees of freedom for a single mixture
$$
-\psi \bigg(\frac{v}{2} \bigg) + log \bigg(\frac{v}{2} \bigg) + 1 + \psi \bigg(\frac{v^{(k)} + p}{2} \bigg) - log \bigg(\frac{v^{(k)} + p}{2} \bigg) + \frac{1}{n}\sum_{j=1}^n \bigg[ log(u_j^{(k)}) - u_j^{(k)} \bigg] = 0
$$
Source <br />
* G. J. McLachlan, T. Krishnan; The EM Algorithm and Extensions; 5.8.2; pg. 177.
Finding degrees of freedom with n mixtures
$$
-\psi \bigg(\frac{v_i}{2} \bigg) + log \bigg(\frac{v_i}{2} \bigg) + 1 + \psi \bigg(\frac{v_i^{(k)} + p}{2} \bigg) - log \bigg(\frac{v_i^{(k)} + p}{2} \bigg) + \
\frac{1}{n_i^{(k)}}\sum_{j=1}^n \tau_{ij}^{(k)} \bigg[ log(u_{ij}^{(k)}) - u_{ij}^{(k)} \bigg] = 0
$$
Where
$$
n_i^{(k)} = \Sigma_{j=1}^n \tau_{ij}^{(k)}
$$
Source<br />
* D. Peel, G. J. McLachlan; Robust mixture modelling using the t distribution. Statistics and Computing (2000) 10, 339-348.
7 M-Step; pg. 343.
End of explanation
actual_mu01 = [-.2, .45]
actual_cov01 = [[.40, 0], [.7, 1.55]]
actual_df01 = 27
actual_mu02 = [.9, -.5]
actual_cov02 = [[1.5, 0.7], [0, 0.5]]
actual_df02 = 47
size = 300
x01 = multivariate_t_rvs(m=actual_mu01, S=actual_cov01, df=actual_df01, n=size)
x02 = multivariate_t_rvs(m=actual_mu02, S=actual_cov02, df=actual_df02, n=size)
X = np.concatenate([x01, x02])
X.shape
Explanation: Expectation Maximization with Mixtures
Implementation of a mixture model using the t distribution.
Source
D. Peel, G. J. McLachlan; Robust mixture modelling using the t distribution. Statistics and Computing (2000) 10, 339-348.
Generating a sample
I'll generate two samples with distinct parameters and merge them into one.
End of explanation
xmin, xmax = min(X.T[0]), max(X.T[0])
ymin, ymax = min(X.T[1]), max(X.T[1])
x, y = np.mgrid[xmin:xmax:.1, ymin:ymax:.1]
xy = np.column_stack([x.ravel(),y.ravel()])
xy.shape
t01 = multivariate_t(actual_mu01, actual_cov01, actual_df01)
t02 = multivariate_t(actual_mu02, actual_cov02, actual_df02)
z01 = []
z02 = []
for _ in xy:
z01.append(t01.pdf(_.reshape(1, -1)))
z02.append(t02.pdf(_.reshape(1, -1)))
z01 = np.reshape(z01, x.shape)
z02 = np.reshape(z02, x.shape)
# Plotting
fig = plt.figure(figsize=(14, 5))
plt.subplot(121)
plt.scatter(X.T[0], X.T[1], s=10, alpha=.5)
plt.contour(x, y, z01, cmap='ocean')
plt.contour(x, y, z02, cmap='hot')
plt.subplot(122)
plt.scatter(X.T[0], X.T[1], s=10, alpha=.5)
plt.contour(x, y, z01+z02)
fig.savefig('draft05 - actual.png')
plt.show()
Explanation: Plotting the sample with actual parameters
End of explanation
n_iter = 50 # number of iterations
# guessing mixture 01
mu01 = get_random(X)
cov01 = np.cov(X.T.copy())
# known variables mix01
df01 = 4
p01 = 2
# guessing mixture 02
mu02 = get_random(X)
cov02 = np.cov(X.T.copy())
# known variables mix 02
df02 = 4
p02 = 2
# guessing the pi parameter
pi = .5
t01 = multivariate_t(mu01, cov01, df01)
t02 = multivariate_t(mu02, cov02, df02)
start = time.time()
for i in range(n_iter):
# E-step: Calculating tau
wp1 = t01.pdf(X) * pi
wp2 = t02.pdf(X) * (1 - pi)
wp_total = wp1 + wp2
wp1 /= wp_total; wp1 = wp1.reshape(-1, 1)
wp2 /= wp_total; wp2 = wp2.reshape(-1, 1)
# E-Step: Calculating u
u01 = []
for delta in X-mu01:
u01.append(delta.dot(inv(cov01)).dot(delta))
u01 = np.array(u01)
u01 = (df01 + p01)/(df01 + u01); u01 = u01.reshape(-1, 1)
u02 = []
for delta in X-mu02:
u02.append(delta.dot(inv(cov02)).dot(delta))
u02 = np.array(u02)
u02 = (df02 + p02)/(df02 + u02); u02 = u02.reshape(-1, 1)
# CM-Step 01
mu01, cov01 = m_step(X, mu01, cov01, u01, wp1)
mu02, cov02 = m_step(X, mu02, cov02, u02, wp2)
# E-Step 02
u01 = []
for delta in X-mu01:
u01.append(delta.dot(inv(cov01)).dot(delta))
u01 = np.array(u01)
u01 = (df01 + p01)/(df01 + u01); u01 = u01.reshape(-1, 1)
u02 = []
for delta in X-mu02:
u02.append(delta.dot(inv(cov02)).dot(delta))
u02 = np.array(u02)
u02 = (df02 + p02)/(df02 + u02); u02 = u02.reshape(-1, 1)
# CM-Step 02
## Finding mix01 degrees of freedom
v01 = 0
my_range = np.arange(df01, df01+3, .01)
for _ in my_range:
solution = find_df(_, p01, u01, wp1)
if solution < 0+1e-4 and solution > 0-1e-4:
v01 = _
break
## Finding mix02 degrees of freedom
v02 = 0
my_range = np.arange(df02, df02+3, .01)
for _ in my_range:
solution = find_df(_, p02, u02, wp2)
if solution < 0+1e-4 and solution > 0-1e-4:
v02 = _
break
# Assigning parameters
t01.mu = mu01; t01.sigma = cov01
t02.mu = mu02; t02.sigma = cov02
df01 = v01; df02 = v02
pi = wp1.sum()/len(wp1)
print('elapsed time: %s' % (time.time() - start))
print('pi: {0:4.06}'.format(pi))
print('mu01: {0}; mu02: {1}'.format(mu01, mu02))
print('cov01\n%s' % cov01)
print('cov02\n%s' % cov02)
print('df01: %.6f; df02: %.6f;' % (df01, df02))
xmin, xmax = min(X.T[0]), max(X.T[0])
ymin, ymax = min(X.T[1]), max(X.T[1])
x, y = np.mgrid[xmin:xmax:.1, ymin:ymax:.1]
xy = np.column_stack([x.ravel(),y.ravel()])
xy.shape
t01 = multivariate_t(mu01, cov01, df01)
t02 = multivariate_t(mu02, cov02, df02)
z01 = []
z02 = []
z03 = []
for _ in xy:
_ = _.reshape(1, -1)
z01.append(t01.pdf(_))
z02.append(t02.pdf(_))
z03.append(pi*t01.pdf(_) + (1-pi)*t02.pdf(_))
z01 = np.reshape(z01, x.shape)
z02 = np.reshape(z02, x.shape)
z03 = np.reshape(z03, x.shape)
fig = plt.figure(figsize=(14, 5))
plt.subplot(121)
plt.scatter(X.T[0], X.T[1], s=10, alpha=.5)
plt.contour(x, y, z01, cmap='ocean')
plt.contour(x, y, z02, cmap='hot')
plt.subplot(122)
plt.scatter(X.T[0], X.T[1], s=10, alpha=.5)
plt.contour(x, y, z03)
fig.savefig('draft05 - estimated.png')
plt.show()
Explanation: Estimating parameters
End of explanation |
30 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explore and create ML datasets
In this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected.
Learning Objectives
Access and explore a public BigQuery dataset on NYC Taxi Cab rides
Visualize your dataset using the Seaborn library
Inspect and clean-up the dataset for future ML model training
Create a benchmark to judge future ML model performance off of
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Let's start off with the Python imports that we need.
Step1: <h3> Extract sample data from BigQuery </h3>
The dataset that we will use is <a href="https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips">a BigQuery public dataset</a>.
Step2: Let's increase the number of records so that we can do some neat graphs. There is no guarantee about the order in which records are returned, and so no guarantee about which records get returned if we simply increase the LIMIT. To properly sample the dataset, let's use the HASH of the pickup time and return 1 in 100,000 records -- because there are 1 billion records in the data, we should get back approximately 10,000 records if we do this.
We will also store the BigQuery result in a Pandas dataframe named "trips"
Step3: <h3> Exploring data </h3>
Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.
Step4: Hmm ... do you see something wrong with the data that needs addressing?
It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50).
Note the extra WHERE clauses.
Step5: What's up with the streaks around 45 dollars and 50 dollars? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable.
Let's also examine whether the toll amount is captured in the total amount.
Step6: Looking at a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool.
Let's also look at the distribution of values within the columns.
Step7: Hmm ... The min, max of longitude look strange.
Finally, let's actually look at the start and end of a few of the trips.
Step8: As you'd expect, rides that involve a toll are longer than the typical ride.
<h3> Quality control and other preprocessing </h3>
We need to do some clean-up of the data
Step9: The quality control has removed about 300 rows (11400 - 11101) or about 3% of the data. This seems reasonable.
Let's move on to creating the ML datasets.
<h3> Create ML datasets </h3>
Let's split the QCed data randomly into training, validation and test sets.
Note that this is not the entire data. We have 1 billion taxicab rides. This is just splitting the 10,000 rides to show you how it's done on smaller datasets. In reality, we'll have to do it on all 1 billion rides and this won't scale.
Step10: Let's write out the three dataframes to appropriately named csv files. We can use these csv files for local training (recall that these files represent only 1/100,000 of the full dataset) just to verify our code works, before we run it on all the data.
Step11: <h3> Verify that datasets exist </h3>
Step12: We have 3 .csv files corresponding to train, valid, test. The ratio of file-sizes correspond to our split of the data.
Step13: Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them.
<h3> Benchmark </h3>
Before we start building complex ML models, it is a good idea to come up with a very simple model and use that as a benchmark.
My model is going to be to simply divide the mean fare_amount by the mean trip_distance to come up with a rate and use that to predict. Let's compute the RMSE of such a model.
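A minimal sketch of that benchmark, assuming the cleaned-up trips dataframe from the earlier steps (and numpy as np) is available:
# sketch: one global rate (dollars per mile) and its RMSE on the sampled rides
rate = trips["fare_amount"].mean() / trips["trip_distance"].mean()
rmse = np.sqrt(((trips["fare_amount"] - rate * trips["trip_distance"]) ** 2).mean())
print("rate = {:.2f} $/mile, RMSE = {:.2f}".format(rate, rmse))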
Step15: <h2>Benchmark on same dataset</h2>
The RMSE depends on the dataset, and for comparison, we have to evaluate on the same dataset each time. We'll use this query in later labs | Python Code:
from google.cloud import bigquery
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
Explanation: Explore and create ML datasets
In this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected.
Learning Objectives
Access and explore a public BigQuery dataset on NYC Taxi Cab rides
Visualize your dataset using the Seaborn library
Inspect and clean-up the dataset for future ML model training
Create a benchmark to judge future ML model performance off of
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Let's start off with the Python imports that we need.
End of explanation
%%bigquery
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude, dropoff_longitude,
dropoff_latitude, passenger_count, trip_distance, tolls_amount,
fare_amount, total_amount
FROM
`nyc-tlc.yellow.trips`
LIMIT 10
Explanation: <h3> Extract sample data from BigQuery </h3>
The dataset that we will use is <a href="https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows.
Let's write a SQL query to pick up interesting fields from the dataset. It's a good idea to get the timestamp in a predictable format.
End of explanation
%%bigquery trips
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
print(len(trips))
# We can slice Pandas dataframes as if they were arrays
trips[:10]
Explanation: Let's increase the number of records so that we can do some neat graphs. There is no guarantee about the order in which records are returned, and so no guarantee about which records get returned if we simply increase the LIMIT. To properly sample the dataset, let's use the HASH of the pickup time and return 1 in 100,000 records -- because there are 1 billion records in the data, we should get back approximately 10,000 records if we do this.
We will also store the BigQuery result in a Pandas dataframe named "trips"
End of explanation
ax = sns.regplot(
x="trip_distance", y="fare_amount",
fit_reg=False, ci=None, truncate=True, data=trips)
ax.figure.set_size_inches(10, 8)
Explanation: <h3> Exploring data </h3>
Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.
End of explanation
%%bigquery trips
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
AND trip_distance > 0
AND fare_amount >= 2.5
print(len(trips))
ax = sns.regplot(
x="trip_distance", y="fare_amount",
fit_reg=False, ci=None, truncate=True, data=trips)
ax.figure.set_size_inches(10, 8)
Explanation: Hmm ... do you see something wrong with the data that needs addressing?
It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50).
Note the extra WHERE clauses.
End of explanation
tollrides = trips[trips["tolls_amount"] > 0]
tollrides[tollrides["pickup_datetime"] == "2012-02-27 09:19:10 UTC"]
notollrides = trips[trips["tolls_amount"] == 0]
notollrides[notollrides["pickup_datetime"] == "2012-02-27 09:19:10 UTC"]
Explanation: What's up with the streaks around 45 dollars and 50 dollars? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable.
Let's also examine whether the toll amount is captured in the total amount.
End of explanation
trips.describe()
Explanation: Looking at a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool.
Let's also look at the distribution of values within the columns.
End of explanation
def showrides(df, numlines):
lats = []
lons = []
for iter, row in df[:numlines].iterrows():
lons.append(row["pickup_longitude"])
lons.append(row["dropoff_longitude"])
lons.append(None)
lats.append(row["pickup_latitude"])
lats.append(row["dropoff_latitude"])
lats.append(None)
sns.set_style("darkgrid")
plt.figure(figsize=(10, 8))
plt.plot(lons, lats)
showrides(notollrides, 10)
showrides(tollrides, 10)
Explanation: Hmm ... The min, max of longitude look strange.
Finally, let's actually look at the start and end of a few of the trips.
End of explanation
def preprocess(trips_in):
trips = trips_in.copy(deep=True)
trips.fare_amount = trips.fare_amount + trips.tolls_amount
del trips["tolls_amount"]
del trips["total_amount"]
del trips["trip_distance"] # we won't know this in advance!
qc = np.all([
trips["pickup_longitude"] > -78,
trips["pickup_longitude"] < -70,
trips["dropoff_longitude"] > -78,
trips["dropoff_longitude"] < -70,
trips["pickup_latitude"] > 37,
trips["pickup_latitude"] < 45,
trips["dropoff_latitude"] > 37,
trips["dropoff_latitude"] < 45,
trips["passenger_count"] > 0
], axis=0)
return trips[qc]
tripsqc = preprocess(trips)
tripsqc.describe()
Explanation: As you'd expect, rides that involve a toll are longer than the typical ride.
<h3> Quality control and other preprocessing </h3>
We need to do some clean-up of the data:
<ol>
<li>New York city longitudes are around -74 and latitudes are around 41.</li>
<li>We shouldn't have zero passengers.</li>
<li>Clean up the total_amount column to reflect only fare_amount and tolls_amount, and then remove those two columns.</li>
<li>Before the ride starts, we'll know the pickup and dropoff locations, but not the trip distance (that depends on the route taken), so remove it from the ML dataset</li>
<li>Discard the timestamp</li>
</ol>
We could do preprocessing in BigQuery, similar to how we removed the zero-distance rides, but just to show you another option, let's do this in Python. In production, we'll have to carry out the same preprocessing on the real-time input data.
This sort of preprocessing of input data is quite common in ML, especially if the quality-control is dynamic.
End of explanation
shuffled = tripsqc.sample(frac=1)
trainsize = int(len(shuffled["fare_amount"]) * 0.70)
validsize = int(len(shuffled["fare_amount"]) * 0.15)
df_train = shuffled.iloc[:trainsize, :]
df_valid = shuffled.iloc[trainsize:(trainsize + validsize), :]
df_test = shuffled.iloc[(trainsize + validsize):, :]
df_train.head(n=1)
df_train.describe()
df_valid.describe()
df_test.describe()
Explanation: The quality control has removed about 300 rows (11400 - 11101) or about 3% of the data. This seems reasonable.
Let's move on to creating the ML datasets.
<h3> Create ML datasets </h3>
Let's split the QCed data randomly into training, validation and test sets.
Note that this is not the entire data. We have 1 billion taxicab rides. This is just splitting the 10,000 rides to show you how it's done on smaller datasets. In reality, we'll have to do it on all 1 billion rides and this won't scale.
End of explanation
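As an aside (a sketch of our own, not necessarily how the later labs do it), a split like this can be keyed on a hash so that it scales to the full dataset: each row's bucket depends only on its own pickup_datetime, so no global shuffle is needed. The 70/15/15 bucket boundaries below are our own choice.
# Sketch only: hash-based train/valid/test buckets that could be appended to a WHERE clause.
def hash_split_where(lo, hi):
    hashed = "ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100))"
    return "{h} >= {lo} AND {h} < {hi}".format(h=hashed, lo=lo, hi=hi)
print("train:", hash_split_where(0, 70))    # ~70% of rows
print("valid:", hash_split_where(70, 85))   # ~15% of rows
print("test: ", hash_split_where(85, 100))  # ~15% of rows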
def to_csv(df, filename):
outdf = df.copy(deep=False)
outdf.loc[:, "key"] = np.arange(0, len(outdf)) # rownumber as key
# Reorder columns so that target is first column
cols = outdf.columns.tolist()
cols.remove("fare_amount")
cols.insert(0, "fare_amount")
print (cols) # new order of columns
outdf = outdf[cols]
outdf.to_csv(filename, header=False, index_label=False, index=False)
to_csv(df_train, "taxi-train.csv")
to_csv(df_valid, "taxi-valid.csv")
to_csv(df_test, "taxi-test.csv")
!head -10 taxi-valid.csv
Explanation: Let's write out the three dataframes to appropriately named csv files. We can use these csv files for local training (recall that these files represent only 1/100,000 of the full dataset) just to verify our code works, before we run it on all the data.
End of explanation
!ls -l *.csv
Explanation: <h3> Verify that datasets exist </h3>
End of explanation
%%bash
head taxi-train.csv
Explanation: We have 3 .csv files corresponding to train, valid, test. The ratio of file sizes corresponds to our split of the data.
End of explanation
def distance_between(lat1, lon1, lat2, lon2):
# Haversine formula to compute distance "as the crow flies".
lat1_r = np.radians(lat1)
lat2_r = np.radians(lat2)
lon_diff_r = np.radians(lon2 - lon1)
sin_prod = np.sin(lat1_r) * np.sin(lat2_r)
cos_prod = np.cos(lat1_r) * np.cos(lat2_r) * np.cos(lon_diff_r)
minimum = np.minimum(1, sin_prod + cos_prod)
dist = np.degrees(np.arccos(minimum)) * 60 * 1.515 * 1.609344
return dist
def estimate_distance(df):
return distance_between(
df["pickuplat"], df["pickuplon"], df["dropofflat"], df["dropofflon"])
def compute_rmse(actual, predicted):
return np.sqrt(np.mean((actual - predicted) ** 2))
def print_rmse(df, rate, name):
print ("{1} RMSE = {0}".format(
compute_rmse(df["fare_amount"], rate * estimate_distance(df)), name))
FEATURES = ["pickuplon", "pickuplat", "dropofflon", "dropofflat", "passengers"]
TARGET = "fare_amount"
columns = list([TARGET])
columns.append("pickup_datetime")
columns.extend(FEATURES)  # in the CSV, the target is the first column, followed by pickup_datetime and then the features
columns.append("key")
df_train = pd.read_csv("taxi-train.csv", header=None, names=columns)
df_valid = pd.read_csv("taxi-valid.csv", header=None, names=columns)
df_test = pd.read_csv("taxi-test.csv", header=None, names=columns)
rate = df_train["fare_amount"].mean() / estimate_distance(df_train).mean()
print ("Rate = ${0}/km".format(rate))
print_rmse(df_train, rate, "Train")
print_rmse(df_valid, rate, "Valid")
print_rmse(df_test, rate, "Test")
Explanation: Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them.
<h3> Benchmark </h3>
Before we start building complex ML models, it is a good idea to come up with a very simple model and use that as a benchmark.
My model will simply divide the mean fare_amount by the mean estimated trip distance to come up with a rate, and use that rate to predict. Let's compute the RMSE of such a model.
End of explanation
validation_query = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
"unused" AS key
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
client = bigquery.Client()
df_valid = client.query(validation_query).to_dataframe()
print_rmse(df_valid, 2.59988, "Final Validation Set")
Explanation: <h2>Benchmark on same dataset</h2>
The RMSE depends on the dataset, and for comparison, we have to evaluate on the same dataset each time. We'll use this query in later labs:
End of explanation |
31 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Permutation explainer
This notebook demonstrates how to use the Permutation explainer on some simple datasets. The Permutation explainer is model-agnostic, so it can compute Shapley values and Owen values for any model. It works by iterating over complete permutations of the features in forward and reverse order. By changing one feature at a time, we can minimize the number of model evaluations that are required, and we always satisfy efficiency no matter how many executions of the original model we choose to use for approximating the feature attribution values. So the SHAP values computed, while approximate, do exactly sum up to the difference between the base value of the model and the output of the model for each explained instance.
Because the Permutation explainer has important performance optimizations, and does not require regularization parameter tuning like Kernel explainer, the Permutation explainer is the default model agnostic explainer used for tabular datasets that have more features than would be appropriate for the Exact explainer.
Below we demonstrate how to use the Permutation explainer on a simple adult income classification dataset and model.
Step1: Tabular data with independent (Shapley value) masking
Step2: Plot a global summary
Step3: Plot a single instance
Step4: Tabular data with partition (Owen value) masking
While Shapley values result from treating each feature independently of the other features, it is often useful to enforce a structure on the model inputs. Enforcing such a structure produces a structure game (i.e. a game with rules about valid input feature coalitions), and when that structure is a nested set of feature groupings we get the Owen values as a recursive application of Shapley values to the groups. In SHAP, we take the partitioning to the limit and build a binary hierarchical clustering tree to represent the structure of the data. This structure could be chosen in many ways, but for tabular data it is often helpful to build the structure from the redundancy of information between the input features about the output label. This is what we do below
Step5: Plot a global summary
Note that only the Relationship and Marital status features share more than 50% of their explanation power (as measured by R2) with each other, so all the other parts of the clustering tree are removed by the default clustering_cutoff=0.5 setting
Step6: Plot a single instance
Note that there is a strong similarity between the explanation from the Independent masker above and the Partition masker here. In general the distinctions between these methods for tabular data are not large, though the Partition masker allows for much faster runtime and potentially more realistic manipulations of the model inputs (since groups of clustered features are masked/unmasked together). | Python Code:
import shap
import xgboost
# get a dataset on income prediction
X,y = shap.datasets.adult()
# train an XGBoost model (but any other model type would also work)
model = xgboost.XGBClassifier()
model.fit(X, y);
Explanation: Permutation explainer
This notebook demonstrates how to use the Permutation explainer on some simple datasets. The Permutation explainer is model-agnostic, so it can compute Shapley values and Owen values for any model. It works by iterating over complete permutations of the features in forward and reverse order. By changing one feature at a time, we can minimize the number of model evaluations that are required, and we always satisfy efficiency no matter how many executions of the original model we choose to use for approximating the feature attribution values. So the SHAP values computed, while approximate, do exactly sum up to the difference between the base value of the model and the output of the model for each explained instance.
Because the Permutation explainer has important performance optimizations, and does not require regularization parameter tuning like Kernel explainer, the Permutation explainer is the default model agnostic explainer used for tabular datasets that have more features than would be appropriate for the Exact explainer.
Below we demonstrate how to use the Permutation explainer on a simple adult income classification dataset and model.
End of explanation
# build a Permutation explainer and explain the model predictions on the given dataset
explainer = shap.explainers.Permutation(model.predict_proba, X)
shap_values = explainer(X[:100])
# get just the explanations for the positive class
shap_values = shap_values[...,1]
Explanation: Tabular data with independent (Shapley value) masking
End of explanation
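As a quick sanity check of the additivity property mentioned above (this check is our addition, not part of the original notebook), the attributions plus the base value should reproduce the model's positive-class probability for each explained row.
# Sketch: verify that base value + sum of attributions matches the model output for one row.
i = 0
reconstructed = shap_values[i].base_values + shap_values[i].values.sum()
print(reconstructed, model.predict_proba(X[:100])[i, 1])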
shap.plots.bar(shap_values)
Explanation: Plot a global summary
End of explanation
shap.plots.waterfall(shap_values[0])
Explanation: Plot a single instance
End of explanation
# build a clustering of the features based on shared information about y
clustering = shap.utils.hclust(X, y)
# above we implicitly used shap.maskers.Independent by passing a raw dataframe as the masker
# now we explicitly use a Partition masker that uses the clustering we just computed
masker = shap.maskers.Partition(X, clustering=clustering)
# build a Permutation explainer and explain the model predictions on the given dataset
explainer = shap.explainers.Permutation(model.predict_proba, masker)
shap_values2 = explainer(X[:100])
# get just the explanations for the positive class
shap_values2 = shap_values2[...,1]
Explanation: Tabular data with partition (Owen value) masking
While Shapley values result from treating each feature independently of the other features, it is often useful to enforce a structure on the model inputs. Enforcing such a structure produces a structure game (i.e. a game with rules about valid input feature coalitions), and when that structure is a nested set of feature groupings we get the Owen values as a recursive application of Shapley values to the groups. In SHAP, we take the partitioning to the limit and build a binary hierarchical clustering tree to represent the structure of the data. This structure could be chosen in many ways, but for tabular data it is often helpful to build the structure from the redundancy of information between the input features about the output label. This is what we do below:
End of explanation
shap.plots.bar(shap_values2)
Explanation: Plot a global summary
Note that only the Relationship and Marital status features share more than 50% of their explanation power (as measured by R2) with each other, so all the other parts of the clustering tree are removed by the default clustering_cutoff=0.5 setting:
End of explanation
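If the installed shap version exposes the clustering_cutoff argument of the bar plot (an assumption on our part), the cutoff can be loosened to keep more of the clustering structure visible:
shap.plots.bar(shap_values2, clustering_cutoff=0.8)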
shap.plots.waterfall(shap_values2[0])
Explanation: Plot a single instance
Note that there is a strong similarity between the explanation from the Independent masker above and the Partition masker here. In general the distinctions between these methods for tabular data are not large, though the Partition masker allows for much faster runtime and potentially more realistic manipulations of the model inputs (since groups of clustered features are masked/unmasked together).
End of explanation |
32 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's scrape the IRE homepage
Our goal
Step1: Target the headlines
View source on the IRE homepage and find the headlines. What's the pattern? | Python Code:
# use the `get()` method to fetch a copy of the IRE home page
# feed the text of the web page to a BeautifulSoup object
Explanation: Let's scrape the IRE homepage
Our goal: Print out the headlines from the IRE home page.
requests is a handy third-party library for making HTTP requests. It does the same thing your browser does when you type in a URL and hit enter -- sends a message to a server and requests a copy of the page -- but it allows us to do this programmatically instead of pointing and clicking. For our purposes today, we're interested in the library's get() method.
Import the libraries
Fetch and parse the HTML
End of explanation
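A minimal sketch of these two steps (the URL and parser choice are assumptions, not part of the original exercise):
import requests
from bs4 import BeautifulSoup

ire_page = requests.get('https://www.ire.org/')
soup = BeautifulSoup(ire_page.text, 'html.parser')
print(soup.title)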
# get a list of headlines we're interested in
Explanation: Target the headlines
View source on the IRE homepage and find the headlines. What's the pattern?
End of explanation |
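A sketch only -- the right tag or CSS class depends on the page's actual markup, which you should confirm by viewing source; here we simply assume the headlines sit in h1/h2 elements.
for heading in soup.find_all(['h1', 'h2']):
    print(heading.get_text(strip=True))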
33 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Linear Programming with Python - Part 2
Introduction to PuLP
PuLP is an open source linear programming package for python. PuLP can be installed using pip, instructions here.
In this notebook, we'll explore how to construct and solve the linear programming problem described in Part 1 using PuLP.
A brief reminder of our linear programming problem
Step1: Then instantiate a problem class, we'll name it "My LP problem" and we're looking for an optimal maximum so we use LpMaximize
Step2: We then model our decision variables using the LpVariable class. In our example, x had a lower bound of 0 and y had a lower bound of 2.
Upper bounds can be assigned using the upBound parameter.
Step3: The objective function and constraints are added using the += operator to our model.
The objective function is added first, then the individual constraints.
Step4: We have now constructed our problem and can have a look at it.
Step5: PuLP supports open source linear programming solvers such as CBC and GLPK, as well as commercial solvers such as Gurobi and IBM's CPLEX.
The default solver is CBC, which comes packaged with PuLP upon installation.
For most applications, the open source CBC from COIN-OR will be enough for most simple linear programming optimisation algorithms.
Step6: We have also checked the status of the solver, there are 5 status codes | Python Code:
import pulp
Explanation: Introduction to Linear Programming with Python - Part 2
Introduction to PuLP
PuLP is an open source linear programming package for python. PuLP can be installed using pip, instructions here.
In this notebook, we'll explore how to construct and solve the linear programming problem described in Part 1 using PuLP.
A brief reminder of our linear programming problem:
We want to find the maximum solution to the objective function:
Z = 4x + 3y
Subject to the following constraints:
x ≥ 0
y ≥ 2
2y ≤ 25 - x
4y ≥ 2x - 8
y ≤ 2x - 5
We'll begin by importing PuLP
End of explanation
my_lp_problem = pulp.LpProblem("My LP Problem", pulp.LpMaximize)
Explanation: Then instantiate a problem class, we'll name it "My LP problem" and we're looking for an optimal maximum so we use LpMaximize
End of explanation
x = pulp.LpVariable('x', lowBound=0, cat='Continuous')
y = pulp.LpVariable('y', lowBound=2, cat='Continuous')
Explanation: We then model our decision variables using the LpVariable class. In our example, x had a lower bound of 0 and y had a lower bound of 2.
Upper bounds can be assigned using the upBound parameter.
End of explanation
# Objective function
my_lp_problem += 4 * x + 3 * y, "Z"
# Constraints
my_lp_problem += 2 * y <= 25 - x
my_lp_problem += 4 * y >= 2 * x - 8
my_lp_problem += y <= 2 * x - 5
Explanation: The objective function and constraints are added using the += operator to our model.
The objective function is added first, then the individual constraints.
End of explanation
my_lp_problem
Explanation: We have now constructed our problem and can have a look at it.
End of explanation
my_lp_problem.solve()
pulp.LpStatus[my_lp_problem.status]
Explanation: PuLP supports open source linear programming solvers such as CBC and GLPK, as well as commercial solvers such as Gurobi and IBM's CPLEX.
The default solver is CBC, which comes packaged with PuLP upon installation.
For most applications, the open source CBC from COIN-OR will be enough for most simple linear programming optimisation algorithms.
End of explanation
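As an aside (not part of the original notebook), the solver can also be chosen explicitly; here is a sketch using the bundled CBC solver with its log output silenced:
my_lp_problem.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[my_lp_problem.status])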
for variable in my_lp_problem.variables():
print "{} = {}".format(variable.name, variable.varValue)
print pulp.value(my_lp_problem.objective)
Explanation: We have also checked the status of the solver, there are 5 status codes:
* Not Solved: Status prior to solving the problem.
* Optimal: An optimal solution has been found.
* Infeasible: There are no feasible solutions (e.g. if you set the constraints x <= 1 and x >=2).
* Unbounded: The constraints are not bounded, maximising the solution will tend towards infinity (e.g. if the only constraint was x >= 3).
* Undefined: The optimal solution may exist but may not have been found.
We can now view our maximal variable values and the maximum value of Z.
We can use the varValue method to retrieve the values of our variables x and y, and the pulp.value function to view the maximum value of the objective function.
End of explanation |
34 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial on how to use S-grids with time-evolving depth dimensions
Some hydrodynamic models (such as SWASH) have time-evolving depth dimensions, for example because they follow the waves on the free surface. Parcels can work with these types of models, but it is a bit involved to set up. That is why we explain here how to run Parcels on FieldSets with time-evolving depth dimensions
Step1: Here, we use sample data from the SWASH model. We first set the filenames and variables
Step2: Now, the first key step when reading time-evolving depth dimensions is that we specify depth as 'not_yet_set' in the dimensions dictionary
Step3: Then, after we create the FieldSet object, we set the depth dimension of the relevant Fields to fieldset.depth_u and fieldset.depth_w, using the Field.set_depth_from_field() method
Step4: Now, we can create a ParticleSet, run those and plot them | Python Code:
%matplotlib inline
from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4, ParticleFile, plotTrajectoriesFile
import numpy as np
from datetime import timedelta as delta
from os import path
Explanation: Tutorial on how to use S-grids with time-evolving depth dimensions
Some hydrodynamic models (such as SWASH) have time-evolving depth dimensions, for example because they follow the waves on the free surface. Parcels can work with these types of models, but it is a bit involved to set up. That is why we explain here how to run Parcels on FieldSets with time-evolving depth dimensions
End of explanation
filenames = path.join('SWASH_data', 'field_*.nc')
variables = {'U': 'cross-shore velocity',
'V': 'along-shore velocity',
'depth_u': 'time varying depth_u'}
Explanation: Here, we use sample data from the SWASH model. We first set the filenames and variables
End of explanation
dimensions = {'U': {'lon': 'x', 'lat': 'y', 'depth': 'not_yet_set', 'time': 't'},
'V': {'lon': 'x', 'lat': 'y', 'depth': 'not_yet_set', 'time': 't'},
'depth_u': {'lon': 'x', 'lat': 'y', 'depth': 'not_yet_set', 'time': 't'}}
Explanation: Now, the first key step when reading time-evolving depth dimensions is that we specify depth as 'not_yet_set' in the dimensions dictionary
End of explanation
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions, mesh='flat', allow_time_extrapolation=True)
fieldset.U.set_depth_from_field(fieldset.depth_u)
fieldset.V.set_depth_from_field(fieldset.depth_u)
Explanation: Then, after we create the FieldSet object, we set the depth dimension of the relevant Fields to fieldset.depth_u and fieldset.depth_w, using the Field.set_depth_from_field() method
End of explanation
pset = ParticleSet(fieldset, JITParticle, lon=9.5, lat=12.5, depth=-0.1)
pfile = pset.ParticleFile("SwashParticles", outputdt=delta(seconds=0.05))
pset.execute(AdvectionRK4, dt=delta(seconds=0.005), output_file=pfile)
pfile.export() # export the trajectory data to a netcdf file
plotTrajectoriesFile('SwashParticles.nc');
Explanation: Now, we can create a ParticleSet, run those and plot them
End of explanation |
35 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: TFP Probabilistic Layers
Step2: Make things Fast!
Before we dive in, let's make sure we're using a GPU for this demo.
To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU".
The following snippet will verify that we have access to a GPU.
Step3: Note
Step4: Well not only is it possible, but this colab shows how! (In context of linear regression problems.)
Step5: Case 1
Step6: Case 2
Step7: Case 3
Step8: Case 4
Step9: Case 5 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
#@title Import { display-mode: "form" }
from pprint import pprint
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
sns.reset_defaults()
#sns.set_style('whitegrid')
#sns.set_context('talk')
sns.set_context(context='talk',font_scale=0.7)
%matplotlib inline
tfd = tfp.distributions
Explanation: TFP Probabilistic Layers: Regression
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/Probabilistic_Layers_Regression"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this example we show how to fit regression models using TFP's "probabilistic layers."
Dependencies & Prerequisites
End of explanation
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
Explanation: Make things Fast!
Before we dive in, let's make sure we're using a GPU for this demo.
To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU".
The following snippet will verify that we have access to a GPU.
End of explanation
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
Explanation: Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.)
Motivation
Wouldn't it be great if we could use TFP to specify a probabilistic model then simply minimize the negative log-likelihood, i.e.,
End of explanation
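A tiny illustration of this loss (our addition): it is simply the negative log-density of an observation under the predicted distribution.
example_dist = tfd.Normal(loc=0., scale=1.)
print(negloglik(np.float32(0.5), example_dist).numpy())  # -log p(0.5 | Normal(0, 1))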
#@title Synthesize dataset.
w0 = 0.125
b0 = 5.
x_range = [-20, 60]
def load_dataset(n=150, n_tst=150):
np.random.seed(43)
def s(x):
g = (x - x_range[0]) / (x_range[1] - x_range[0])
return 3 * (0.25 + g**2.)
x = (x_range[1] - x_range[0]) * np.random.rand(n) + x_range[0]
eps = np.random.randn(n) * s(x)
y = (w0 * x * (1. + np.sin(x)) + b0) + eps
x = x[..., np.newaxis]
x_tst = np.linspace(*x_range, num=n_tst).astype(np.float32)
x_tst = x_tst[..., np.newaxis]
return y, x, x_tst
y, x, x_tst = load_dataset()
Explanation: Well not only is it possible, but this colab shows how! (In context of linear regression problems.)
End of explanation
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 1: No uncertainty.
w = np.squeeze(model.layers[-2].kernel.numpy())
b = np.squeeze(model.layers[-2].bias.numpy())
plt.figure(figsize=[6, 1.5]) # inches
#plt.figure(figsize=[8, 5]) # inches
plt.plot(x, y, 'b.', label='observed');
plt.plot(x_tst, yhat.mean(),'r', label='mean', linewidth=4);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig1.png', bbox_inches='tight', dpi=300)
Explanation: Case 1: No Uncertainty
End of explanation
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1 + 1),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 2: Aleatoric Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
m = yhat.mean()
s = yhat.stddev()
plt.plot(x_tst, m, 'r', linewidth=4, label='mean');
plt.plot(x_tst, m + 2 * s, 'g', linewidth=2, label=r'mean + 2 stddev');
plt.plot(x_tst, m - 2 * s, 'g', linewidth=2, label=r'mean - 2 stddev');
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig2.png', bbox_inches='tight', dpi=300)
Explanation: Case 2: Aleatoric Uncertainty
End of explanation
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.
def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
c = np.log(np.expm1(1.))
return tf.keras.Sequential([
tfp.layers.VariableLayer(2 * n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t[..., :n],
scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
reinterpreted_batch_ndims=1)),
])
# Specify the prior over `keras.layers.Dense` `kernel` and `bias`.
def prior_trainable(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
return tf.keras.Sequential([
tfp.layers.VariableLayer(n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t, scale=1),
reinterpreted_batch_ndims=1)),
])
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 3: Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.clf();
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 25:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=0.5)
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig3.png', bbox_inches='tight', dpi=300)
Explanation: Case 3: Epistemic Uncertainty
End of explanation
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 4: Both Aleatoric & Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 15:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=1.)
plt.plot(x_tst, m + 2 * s, 'g', linewidth=0.5, label='ensemble means + 2 ensemble stdev' if i == 0 else None);
plt.plot(x_tst, m - 2 * s, 'g', linewidth=0.5, label='ensemble means - 2 ensemble stdev' if i == 0 else None);
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig4.png', bbox_inches='tight', dpi=300)
Explanation: Case 4: Aleatoric & Epistemic Uncertainty
End of explanation
#@title Custom PSD Kernel
class RBFKernelFn(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(RBFKernelFn, self).__init__(**kwargs)
dtype = kwargs.get('dtype', None)
self._amplitude = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='amplitude')
self._length_scale = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='length_scale')
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
return tfp.math.psd_kernels.ExponentiatedQuadratic(
amplitude=tf.nn.softplus(0.1 * self._amplitude),
length_scale=tf.nn.softplus(5. * self._length_scale)
)
# For numeric stability, set the default floating-point dtype to float64
tf.keras.backend.set_floatx('float64')
# Build model.
num_inducing_points = 40
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=[1]),
tf.keras.layers.Dense(1, kernel_initializer='ones', use_bias=False),
tfp.layers.VariationalGaussianProcess(
num_inducing_points=num_inducing_points,
kernel_provider=RBFKernelFn(),
event_shape=[1],
inducing_index_points_initializer=tf.constant_initializer(
np.linspace(*x_range, num=num_inducing_points,
dtype=x.dtype)[..., np.newaxis]),
unconstrained_observation_noise_variance_initializer=(
tf.constant_initializer(np.array(0.54).astype(x.dtype))),
),
])
# Do inference.
batch_size = 32
loss = lambda y, rv_y: rv_y.variational_loss(
y, kl_weight=np.array(batch_size, x.dtype) / x.shape[0])
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=loss)
model.fit(x, y, batch_size=batch_size, epochs=1000, verbose=False)
# Profit.
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 5: Functional Uncertainty
y, x, _ = load_dataset()
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
num_samples = 7
for i in range(num_samples):
sample_ = yhat.sample().numpy()
plt.plot(x_tst,
sample_[..., 0].T,
'r',
linewidth=0.9,
label='ensemble means' if i == 0 else None);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig5.png', bbox_inches='tight', dpi=300)
Explanation: Case 5: Functional Uncertainty
End of explanation |
36 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Python Tour of Data Science
Step1: 2 Categories
Categorical data is best represented by bar or pie charts. Reproduce the plots below using the object-oriented API of matplotlib, which is recommended for programming.
Question
Step2: 3 Frequency
A frequency plot is a graph that shows the pattern in a set of data by plotting how often particular values of a measure occur. They often take the form of an histogram or a box plot.
Reproduce the plots with the following three libraries, which provide high-level declarative syntax for statistical visualization as well as a convenient interface to pandas
Step3: 4 Correlation
Scatter plots are very much used to assess the correlation between 2 variables. Pair plots are then a useful way of displaying the pairwise relations between variables in a dataset.
Use the seaborn pairplot() function to analyze how separable the iris dataset is.
Step4: 5 Dimensionality reduction
Humans can only comprehend up to 3 dimensions (in space, then there is e.g. color or size), so dimensionality reduction is often needed to explore high dimensional datasets. Analyze how separable is the iris dataset by visualizing it in a 2D scatter plot after reduction from 4 to 2 dimensions with two popular methods | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Random time series.
n = 1000
rs = np.random.RandomState(42)
data = rs.randn(n, 4).cumsum(axis=0)
# plt.figure(figsize=(15,5))
# plt.plot(data[:, 0])
# df = pd.DataFrame(...)
# df.plot(...)
Explanation: A Python Tour of Data Science: Data Visualization
Michaël Defferrard, PhD student, EPFL LTS2
Exercise
Data visualization is a key aspect of exploratory data analysis.
During this exercise we'll gradually build more and more complex visualizations. We'll do this by replicating plots. Try to reproduce not only the lines but also the axis labels, legends and titles.
Goal of data visualization: clearly and efficiently communicate information through visual representations. While tables are generally used to look up a specific measurement, charts are used to show patterns or relationships.
Means: mainly statistical graphics for exploratory analysis, e.g. scatter plots, histograms, probability plots, box plots, residual plots, but also infographics for communication.
Data visualization is both an art and a science. It should combine both aesthetic form and functionality.
1 Time series
To start slowly, let's make a static line plot from some time series. Reproduce the plots below using:
1. The procedural API of matplotlib, the main data visualization library for Python. Its procedural API is similar to MATLAB's and convenient for interactive work.
2. Pandas, which wraps matplotlib around its DataFrame format and makes many standard plots easy to code. It offers many helpers for data visualization.
Hint: to plot with pandas, you first need to create a DataFrame, pandas' tabular data format.
End of explanation
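One possible solution sketch (the column names and figure sizes are arbitrary choices):
plt.figure(figsize=(15, 5))
plt.plot(data[:, 0])  # procedural matplotlib API, first series only

df = pd.DataFrame(data, columns=list('ABCD'))
df.plot(figsize=(15, 5));  # pandas plots all four series at once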
data = [10, 40, 25, 15, 10]
categories = list('ABCDE')
fig, axes = plt.subplots(1, 2, figsize=(15, 5))
# Right plot.
# axes[1].
# axes[1].
# Left plot.
# axes[0].
# axes[0].
Explanation: 2 Categories
Categorical data is best represented by bar or pie charts. Reproduce the plots below using the object-oriented API of matplotlib, which is recommended for programming.
Question: What are the pros / cons of each plot ?
Tip: the matplotlib gallery is a convenient starting point.
End of explanation
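One possible completion of the cell above (which chart goes on which side is an arbitrary choice here):
axes[0].bar(categories, data)
axes[0].set_title('Bar chart')
axes[1].pie(data, labels=categories)
axes[1].set_title('Pie chart')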
import seaborn as sns
import os
df = sns.load_dataset('iris', data_home=os.path.join('..', 'data'))
fig, axes = plt.subplots(1, 2, figsize=(15, 5))
# Your code for Seaborn: distplot() and boxplot().
import ggplot
# Your code for ggplot.
import altair
# altair.Chart(df).mark_bar(opacity=.75).encode(
# x=...,
# y=...,
# color=...
# )
Explanation: 3 Frequency
A frequency plot is a graph that shows the pattern in a set of data by plotting how often particular values of a measure occur. They often take the form of an histogram or a box plot.
Reproduce the plots with the following three libraries, which provide high-level declarative syntax for statistical visualization as well as a convenient interface to pandas:
* Seaborn is a statistical visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics. Its advantage is that you can modify the produced plots with matplotlib, so you lose nothing.
* ggplot is a (partial) port of the popular ggplot2 for R. It has its roots in the influential book The Grammar of Graphics. Convenient if you know ggplot2 already.
* Vega is a declarative format for statistical visualization based on D3.js, a low-level JavaScript library for interactive visualization. Vincent (discontinued) and altair are Python interfaces to Vega. Altair is quite new and does not provide all the needed functionality yet, but it is promising!
Hints:
* Seaborn, look at distplot() and boxplot().
* ggplot, we are interested in the geom_histogram geometry.
End of explanation
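One possible completion of the Seaborn part (petal_length is an arbitrary column choice):
sns.distplot(df['petal_length'], ax=axes[0])
sns.boxplot(x='species', y='petal_length', data=df, ax=axes[1])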
# One line with Seaborn.
Explanation: 4 Correlation
Scatter plots are very much used to assess the correlation between 2 variables. Pair plots are then a useful way of displaying the pairwise relations between variables in a dataset.
Use the seaborn pairplot() function to analyze how separable the iris dataset is.
End of explanation
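One possible one-liner; colouring by species makes the class separability visible:
sns.pairplot(df, hue='species');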
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
# df['pca1'] =
# df['pca2'] =
# df['tsne1'] =
# df['tsne2'] =
fig, axes = plt.subplots(1, 2, figsize=(15, 5))
sns.swarmplot(x='pca1', y='pca2', data=df, hue='species', ax=axes[0])
sns.swarmplot(x='tsne1', y='tsne2', data=df, hue='species', ax=axes[1]);
Explanation: 5 Dimensionality reduction
Humans can only comprehend up to 3 dimensions (in space, then there is e.g. color or size), so dimensionality reduction is often needed to explore high dimensional datasets. Analyze how separable the iris dataset is by visualizing it in a 2D scatter plot after reduction from 4 to 2 dimensions with two popular methods:
1. The classical principal component analysis (PCA).
2. t-distributed stochastic neighbor embedding (t-SNE).
Hints:
* t-SNE is a stochastic method, so you may want to run it multiple times.
* The easiest way to create the scatter plot is to add columns to the pandas DataFrame, then use the Seaborn swarmplot().
End of explanation |
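One possible way to fill in the commented assignments above (t-SNE is stochastic, so the embedding changes between runs):
features = df[df.columns[:4]]  # the four measurement columns
pca_xy = PCA(n_components=2).fit_transform(features)
df['pca1'], df['pca2'] = pca_xy[:, 0], pca_xy[:, 1]
tsne_xy = TSNE(n_components=2).fit_transform(features)
df['tsne1'], df['tsne2'] = tsne_xy[:, 0], tsne_xy[:, 1]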
37 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute a sparse inverse solution using the Gamma-MAP empirical Bayesian method
See
Step1: Plot dipole activations
Step2: Show the evoked response and the residual for gradiometers
Step3: Generate stc from dipoles
Step4: View in 2D and 3D ("glass" brain like 3D plot)
Show the sources as spheres scaled by their strength | Python Code:
# Author: Martin Luessi <[email protected]>
# Daniel Strohmeier <[email protected]>
#
# License: BSD-3-Clause
import numpy as np
import mne
from mne.datasets import sample
from mne.inverse_sparse import gamma_map, make_stc_from_dipoles
from mne.viz import (plot_sparse_source_estimates,
plot_dipole_locations, plot_dipole_amplitudes)
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
evoked_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-cov.fif'
# Read the evoked response and crop it
condition = 'Left visual'
evoked = mne.read_evokeds(evoked_fname, condition=condition,
baseline=(None, 0))
evoked.crop(tmin=-50e-3, tmax=300e-3)
# Read the forward solution
forward = mne.read_forward_solution(fwd_fname)
# Read noise noise covariance matrix and regularize it
cov = mne.read_cov(cov_fname)
cov = mne.cov.regularize(cov, evoked.info, rank=None)
# Run the Gamma-MAP method with dipole output
alpha = 0.5
dipoles, residual = gamma_map(
evoked, forward, cov, alpha, xyz_same_gamma=True, return_residual=True,
return_as_dipoles=True)
Explanation: Compute a sparse inverse solution using the Gamma-MAP empirical Bayesian method
See :footcite:WipfNagarajan2009 for details.
End of explanation
plot_dipole_amplitudes(dipoles)
# Plot dipole location of the strongest dipole with MRI slices
idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles])
plot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample',
subjects_dir=subjects_dir, mode='orthoview',
idx='amplitude')
# # Plot dipole locations of all dipoles with MRI slices
# for dip in dipoles:
# plot_dipole_locations(dip, forward['mri_head_t'], 'sample',
# subjects_dir=subjects_dir, mode='orthoview',
# idx='amplitude')
Explanation: Plot dipole activations
End of explanation
ylim = dict(grad=[-120, 120])
evoked.pick_types(meg='grad', exclude='bads')
evoked.plot(titles=dict(grad='Evoked Response Gradiometers'), ylim=ylim,
proj=True, time_unit='s')
residual.pick_types(meg='grad', exclude='bads')
residual.plot(titles=dict(grad='Residuals Gradiometers'), ylim=ylim,
proj=True, time_unit='s')
Explanation: Show the evoked response and the residual for gradiometers
End of explanation
stc = make_stc_from_dipoles(dipoles, forward['src'])
Explanation: Generate stc from dipoles
End of explanation
scale_factors = np.max(np.abs(stc.data), axis=1)
scale_factors = 0.5 * (1 + scale_factors / np.max(scale_factors))
plot_sparse_source_estimates(
forward['src'], stc, bgcolor=(1, 1, 1),
modes=['sphere'], opacity=0.1, scale_factors=(scale_factors, None),
fig_name="Gamma-MAP")
Explanation: View in 2D and 3D ("glass" brain like 3D plot)
Show the sources as spheres scaled by their strength
End of explanation |
38 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Relative position and orientation between nucleobases
The relative position of a nucleobase i in the reference frame constructed on the base j carries interesting information, as described in Bottaro, Di Palma and Bussi, Nucleic Acids Research (2014). It is possible to calculate all the position vectors between all pairs in a molecule using the function
rvecs,res = bb.dump_rvec(pdb,cutoff=2.0)
rvecs is a matrix with dimensions (nframes,n,n,3), where nframes is the number of samples (frames) in the PDB/trajectory file, and n the sequence length. The position of base j in the reference frame constructed on base i in sample k is therefore stored in rvecs[k,i,j]. Note that $r_{i,j} \ne r_{j,i}$ and that $r_{j,j}= (0,0,0)$. Additionally, all pairs of bases with ellipsoidal distance larger than cutoff are set to zero. The meaning and rationale for this ellipsoidal distance will be clarified in the example below.
res contains the list of residues. The naming convention is RESNAME_RESNUMBER_CHAININDEX, where RESNAME and RESNUMBER are as in the PDB/topology file and CHAININDEX is the index of the chain starting from zero, in the same order as it appears in the PDB/topology file. It is not possible to get the chain name.
We here analyze the crystal structure of the large ribosomal subunit (PDB 1S72)
Step1: We remove all zero-vectors and scatter plot $\rho = \sqrt{x^2+y^2}$ versus $z$
Step2: We can see that high-density points are observed around $(0,0.6)$ (base-pairing), $(0.3,\pm 0.33)$ (base stacking).
Note also that the ellipsoid with major axis $a=b=0.5$ nm and minor axis $c=0.3$ nm defines a natural metric.
For values of the scaled distance $|\tilde{r}| = \left((x/a)^2 + (y/b)^2 + (z/c)^2\right)^{1/2}$ smaller than 1 (cutoff=1), no points are observed. Base-stacking and base-pairing are observed for cutoff distances smaller than 2.
This is also confirmed by looking at the histogram along the $z$ coordinate
Step3: Another interesting excercise is to consider only the points in the pairing slice and project them on the $(x,y)$ plane.
Step4: The scatterplot above contains all contributions from all types of base-pairs. Still, we can clearly see many points around $(0.4,0.5)$, corresponding to watson-crick base-pairs, wobble GU, hoogsteen and sugar interactions, as labeled.
We can also scatterplot pairs at a fixed "sequence", for example A-U base pairing only
Step5: Note that the distributions shown here are at the core of the eSCORE scoring function.
Another possible application of the dump_rvec function is to analyze trajectories. For example, we can monitor the distance between the center of two six-membered rings during a simulation. To do so, we use a very large cutoff, so that the only zero vectors are on the diagonal. | Python Code:
# import barnaba
import barnaba as bb
pdb = "../test/data/1S72.pdb"
rvecs,res = bb.dump_rvec(pdb,cutoff=3.5)
Explanation: Relative position and orientation between nucleobases
The relative position of a nucleobase i in the reference frame constructed on the base j carries interesting information, as described in Bottaro, Di Palma and Bussi, Nucleic Acids Research (2014). It is possible to calculate all the position vectors between all pairs in a molecule using the function
rvecs,res = bb.dump_rvec(pdb,cutoff=2.0)
rvecs is a matrix with dimensions (nframes,n,n,3), where nframes is the number of samples (frames) in the PDB/trajectory file, and n the sequence length. The position of base j in the reference frame constructed on base i in sample k is therefore stored in rvecs[k,i,j]. Note that $r_{i,j} \ne r_{j,i}$ and that $r_{j,j}= (0,0,0)$. Additionally, all pairs of bases with ellipsoidal distance larger than cutoff are set to zero. The meaning and rationale for this ellipsoidal distance will be clarified in the example below.
res contains the list of residues. The naming convention is RESNAME_RESNUMBER_CHAININDEX, where RESNAME and RESNUMBER are as in the PDB/topology file and CHAININDEX is the index of the chain starting from zero, in the same order as it appears in the PDB/topology file. It is not possible to get the chain name.
We here analyze the crystal structure of the large ribosomal subunit (PDB 1S72)
End of explanation
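A quick check of the shapes described above (illustrative only):
print(rvecs.shape)  # (number of samples/frames, n_residues, n_residues, 3)
print(res[:3])      # residue labels follow RESNAME_RESNUMBER_CHAININDEX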
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from matplotlib.collections import PatchCollection
import seaborn as sns
# find all zero-elements
nonzero = np.where(np.sum(rvecs**2,axis=3)>0.01)
rr = rvecs[nonzero]
# calculate rho and zeta
z = rr[:,2]
rho = np.sqrt(rr[:,0]**2 + rr[:,1]**2)
# make a scatter plot
fig,ax = plt.subplots(figsize=(9,9))
ax.scatter(rho,z,s=0.15)
#ax.set_aspect(1)
patches = []
f1 = 0.5
f2 = 0.3
el = mpatches.Ellipse([0,0], 2*f1,2*f2, fc="none",ec='r',ls="--",lw=1.75)
ax.text(-0.08,f2*1.05,"cutoff=1")
patches.append(el)
el = mpatches.Ellipse([0,0], 4*f1,4*f2, fc="none",ec='orange',ls="--",lw=1.75)
ax.text(-0.08,2*f2*1.05,"cutoff=2")
patches.append(el)
el = mpatches.Ellipse([0,0], 6*f1,6*f2, fc="none",ec='y',ls="--",lw=1.75)
ax.text(-0.08,3*f2*1.005,"cutoff=3")
patches.append(el)
collection = PatchCollection(patches,match_original=True)
ax.add_collection(collection)
ax.set_xlabel(r'$\rho$ (nm)')
ax.set_ylabel('z (nm)')
ax.set_yticks([-1.2,-0.9,-0.6,-0.3,0,0.3,0.6,0.9,1.2])
ax.set_xticks([0,0.5,1.0,1.5])
plt.show()
Explanation: We remove all zero-vectors and scatter plot $\rho = \sqrt{x^2+y^2}$ versus $z$
End of explanation
fig,ax = plt.subplots(figsize=(9,6))
plt.hist(z,bins=100,density=True)
plt.axvline(0.18,ls="--",c='k')
plt.axvline(-0.18,ls="--",c='k')
plt.axvline(0.52,ls="--",c='k')
plt.axvline(-0.52,ls="--",c='k')
plt.text(0,1.6,"Pairing",ha ="center")
plt.text(-0.4,1.6,"Stacking",ha ="center")
plt.text(0.4,1.6,"Stacking",ha ="center")
plt.xlabel("z coordinate (nm)")
plt.show()
Explanation: We can see that high-density points are observed around $(0,0.6)$ (base-pairing), $(0.3,\pm 0.33)$ (base stacking).
Note also that the ellipsoid with major axis $a=b=0.5$ nm and minor axis $c=0.3$ nm defines a natural metric.
For values of the scaled distance $|\tilde{r}| = \left((x/a)^2 + (y/b)^2 + (z/c)^2\right)^{1/2}$ smaller than 1 (cutoff=1), no points are observed. Base-stacking and base-pairing are observed for cutoff distances smaller than 2.
This is also confirmed by looking at the histogram along the $z$ coordinate:
End of explanation
# define an helper function to plot the nucleobase and some distances, as a reference.
def plot_grid():
patches = []
polygon = mpatches.RegularPolygon([0,0], 6, 0.28,fc='none',ec='k',lw=3,orientation=+np.pi/2)
patches.append(polygon)
polygon = mpatches.RegularPolygon([-0.375,-0.225], 5, 0.24,fc='none',ec='k',lw=3,orientation=-0.42)
patches.append(polygon)
circle = mpatches.Circle([0,0], 0.5, fc="none",ec='k',ls="--",lw=0.75)
plt.text(-0.53,0,"r=0.5 nm",rotation=90,ha="center",va='center',fontsize=13)
patches.append(circle)
circle = mpatches.Circle([0,0], 0.75, fc="none",ec='k',ls="--",lw=0.75)
plt.text(-0.78,0,"r=0.75 nm ",rotation=90,ha="center",va='center',fontsize=13)
patches.append(circle)
circle = mpatches.Circle([0,0], 1.0, fc="none",ec='k',ls="--",lw=0.75)
plt.text(-1.03,0,"r=1.0 nm ",rotation=90,ha="center",va='center',fontsize=13)
patches.append(circle)
collection = PatchCollection(patches,match_original=True)
ax.add_collection(collection)
plt.plot([0,1.],[0,0],c='gray',lw=1,ls="--")
plt.text(1.1,0,r"$\theta=0^\circ$",ha="center",va='center',fontsize=13)
plt.plot([0,-np.cos(np.pi/3)],[0,np.sin(np.pi/3)],c='gray',lw=1,ls="--")
plt.text(-np.cos(np.pi/3)*1.1,np.sin(np.pi/3)*1.1,r"$\theta=120^\circ$",ha="center",va='center',fontsize=13)
plt.plot([0,-np.cos(np.pi/3)],[0,-np.sin(np.pi/3)],c='gray',lw=1,ls="--")
plt.text(-np.cos(np.pi/3)*1.1,-np.sin(np.pi/3)*1.1,r"$\theta=240^\circ$",ha="center",va='center',fontsize=13)
ax.set_aspect(1)
ax.set_ylim(-1.1,1.1)
ax.set_xlim(-1.1,1.1)
ax.set_xlabel("x (nm)")
ax.set_ylabel("y (nm)")
# slice and take only where |z| is smaller than 0.18 nm
pairs = rr[np.where(np.abs(rr[:,2])<0.18)]
fig,ax = plt.subplots(figsize=(10,10))
# do a KDE
ax = sns.kdeplot(pairs[:,0],pairs[:,1], shade=True,bw=0.12)
# scatter plot x and y
ax.scatter(pairs[:,0],pairs[:,1],s=0.5,c='r')
# make labels
ax.text(0.35,0.45,"Watson-Crick",fontsize=17,ha='center',va='center',color='k')
ax.text(0.1,0.6,"GU",fontsize=17,ha='center',va='center',color='k')
ax.text(0.5,0.3,"GU",fontsize=17,ha='center',va='center',color='k')
ax.text(-0.6,0.4,"Hoogsteen",fontsize=17,ha='center',va='center',color='k')
ax.text(0.6,-0.4,"Sugar",fontsize=17,ha='center',va='center',color='k')
plot_grid()
plt.show()
Explanation: Another interesting exercise is to consider only the points in the pairing slice and project them onto the $(x,y)$ plane.
End of explanation
# take only A-U pairs; this needs an explicit loop over the nonzero entries.
pp1 = []
for j in range(len(nonzero[0])):
z = rvecs[0,nonzero[1][j],nonzero[2][j]][2]
r1 = res[nonzero[1][j]][0]
r2 = res[nonzero[2][j]][0]
if(np.abs(z) < 0.18):
if((r1=="A" and r2 =="U")):
pp1.append(rvecs[0,nonzero[1][j],nonzero[2][j]])
# plot KDE and scatter
fig,ax = plt.subplots(figsize=(10,10))
pp1 = np.array(pp1)
ax = sns.kdeplot(pairs[:,0],pairs[:,1], shade=True,bw=0.12)
ax.scatter(pp1[:,0],pp1[:,1],s=10,c='orange')
plot_grid()
plt.show()
Explanation: The scatterplot above contains contributions from all types of base-pairs. Still, we can clearly see many points around $(0.4,0.5)$, corresponding to Watson-Crick base-pairs, wobble GU, Hoogsteen and sugar interactions, as labeled.
We can also scatterplot pairs at a fixed "sequence", for example A-U base-pairing only:
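The explicit loop above can be wrapped in a small helper so that other sequence combinations are just as easy to inspect. This is only a sketch (the function name and the zmax argument are ours, not part of barnaba), and it assumes the rvecs, nonzero and res arrays built earlier:
def select_pairs(type1, type2, zmax=0.18):
    # collect pairing-slice vectors whose residue types match (type1, type2)
    selected = []
    for j in range(len(nonzero[0])):
        vec = rvecs[0, nonzero[1][j], nonzero[2][j]]
        r1 = res[nonzero[1][j]][0]
        r2 = res[nonzero[2][j]][0]
        if np.abs(vec[2]) < zmax and r1 == type1 and r2 == type2:
            selected.append(vec)
    return np.array(selected)
# e.g. G-C pairs in the pairing slice
# gc_pairs = select_pairs("G", "C")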
End of explanation
traj = "../test/data/UUCG.xtc"
top = "../test/data/UUCG.pdb"
rvecs_traj,res_traj = bb.dump_rvec(traj,topology=top,cutoff=100.0)
fig,ax = plt.subplots(figsize=(10,10))
dist = np.sqrt(np.sum(rvecs_traj[:,1,6]**2,axis=1))
ax.scatter(np.arange(len(dist)),dist,s=1)
ax.set_xlabel("Frame number")
ax.set_ylabel("%s/%s distance (nm)" % (res[1][:-2],res[6][:-2]))
plt.show()
Explanation: Note that the distributions shown here are at the core of the eSCORE scoring function.
Another possible application of the dump_rvec function is to analyze trajectories. For example, we can monitor the distance between the centers of two six-membered rings during a simulation. To do so, we use a very large cutoff, so that the only zero vectors are on the diagonal.
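As a small follow-up, the same time series can be summarized as a distribution; a sketch, assuming the dist array computed in the cell above:
fig, ax = plt.subplots(figsize=(9, 6))
ax.hist(dist, bins=60, density=True)      # distribution of ring-ring distances
ax.set_xlabel("distance (nm)")
ax.set_ylabel("probability density")
plt.show()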
End of explanation |
39 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SWI2 Example 4. Upconing Below a Pumping Well in a Two-Aquifer Island System
This example problem is the fourth example problem in the SWI2 documentation (http
Step1: Define the name of your model and the location of the MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named ml and specify that this is a MODFLOW-2005 model.
Step2: Define the number of layers, rows and columns. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface will move. There are three stress periods with a length of 200, 12, and 18 years and 1,000, 120, and 180 steps.
Step3: Specify the cell size along the rows (delr) and along the columns (delc) and the top and bottom of the aquifer for the DIS package.
Step4: Define the IBOUND array and starting heads for the BAS package. The corners of the model are defined to be inactive.
Step5: Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the LPF package.
Step6: Define the boundary condition data for the model
Step7: Create output control (OC) data using words
Step8: Create the model with the freshwater well (Simulation 1)
Step9: Write the simulation 1 MODFLOW input files and run the model
Step10: Create the model with the saltwater well (Simulation 2)
Step11: Write the simulation 2 MODFLOW input files and run the model
Step12: Load the simulation 1 ZETA data and ZETA observations.
Step13: Load the simulation 2 ZETA data and ZETA observations.
Step14: Create arrays for the x-coordinates and the output years
Step15: Define figure dimensions and colors used for plotting ZETA surfaces
Step16: Recreate Figure 9 from the SWI2 documentation (http | Python Code:
%matplotlib inline
import os
import platform
import numpy as np
import matplotlib.pyplot as plt
import flopy.modflow as mf
import flopy.utils as fu
Explanation: SWI2 Example 4. Upconing Below a Pumping Well in a Two-Aquifer Island System
This example problem is the fourth example problem in the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/) and simulates transient movement of the freshwater-seawater interface beneath an island in response to recharge and groundwater withdrawals. The island is 2,050$\times$2,050 m and consists of two 20-m thick aquifers that extend below sea level. The aquifers are confined, storage changes are not considered (all MODFLOW stress periods are steady-state), and the top and bottom of each aquifer is horizontal. The top of the upper aquifer and the bottom of the lower aquifer are impermeable.
The domain is discretized into 61 columns, 61 rows, and 2 layers, with respective cell dimensions of 50 m (DELR), 50 m (DELC), and 20 m. A total of 230 years is simulated using three stress periods with lengths of 200, 12, and 18 years, with constant time steps of 0.2, 0.1, and 0.1 years, respectively.
The horizontal and vertical hydraulic conductivity of both aquifers are 10 m/d and 0.2 m/d, respectively. The effective porosity is 0.2 for both aquifers. The model is extended 500 m offshore along all sides and the ocean boundary is represented as a general head boundary condition (GHB) in model layer 1. A freshwater head of 0 m is specified at the ocean bottom in all general head boundaries. The GHB conductance that controls outflow from the aquifer into the ocean is 62.5 m$^{2}$/d and corresponds to a leakance of 0.025 d$^{-1}$ (or a resistance of 40 days).
The groundwater is divided into a freshwater zone and a seawater zone, separated by an active ZETA surface between the zones (NSRF=1) that approximates the 50-percent seawater salinity contour. Fluid density is represented using the stratified density option (ISTRAT=1). The dimensionless density difference ($\nu$) between freshwater and saltwater is 0.025. The tip and toe tracking parameters are a TOESLOPE and TIPSLOPE of 0.005, a default ALPHA of 0.1, and a default BETA of 0.1. Initially, the interface between freshwater and saltwater is 1 m below land surface on the island and at the top of the upper aquifer offshore. The SWI2 ISOURCE parameter is set to -2 in cells having GHBs so that water that infiltrates into the aquifer from the GHB cells is saltwater (zone 2), whereas water that flows out of the model at the GHB cells is identical to water at the top of the aquifer. ISOURCE in layer 2, row 31, column 36 is set to 2 so that a saltwater well may be simulated in the third stress period of simulation 2. In all other cells, the SWI2 ISOURCE parameter is set to 0, indicating boundary conditions have water that is identical to water at the top of the aquifer and can be either freshwater or saltwater, depending on the elevation of the active ZETA surface in the cell.
A constant recharge rate of 0.4 millimeters per day (mm/d) is used in all three stress periods. The development of the freshwater lens is simulated for 200 years, after which a pumping well having a withdrawal rate of 250 m$^3$/d is started in layer 1, row 31, column 36. For the first simulation (simulation 1), the well pumps for 30 years, after which the interface almost reaches the top of the upper aquifer layer. In the second simulation (simulation 2), an additional well withdrawing
saltwater at a rate of 25 m$^3$/d is simulated below the freshwater well in layer 2 , row 31, column 36, 12 years after the freshwater groundwater withdrawal begins in the well in layer 1. The saltwater well is intended to prevent the interface from
upconing into the upper aquifer (model layer).
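As a quick consistency check of the time discretization described above (a sketch using only the numbers quoted in the text):
# 200, 12 and 18 year stress periods with 0.2, 0.1 and 0.1 year time steps
period_years = [200.0, 12.0, 18.0]
step_years = [0.2, 0.1, 0.1]
print([int(p / s) for p, s in zip(period_years, step_years)])  # [1000, 120, 180]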
Import numpy and matplotlib, set all figures to be inline, import flopy.modflow and flopy.utils.
End of explanation
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mf2005'
if platform.system() == 'Windows':
exe_name = 'mf2005.exe'
workspace = os.path.join('data')
#make sure workspace directory exists
if not os.path.exists(workspace):
os.makedirs(workspace)
Explanation: Define the name of your model and the location of the MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named ml and specify that this is a MODFLOW-2005 model.
End of explanation
ncol = 61
nrow = 61
nlay = 2
nper = 3
perlen = [365.25 * 200., 365.25 * 12., 365.25 * 18.]
nstp = [1000, 120, 180]
save_head = [200, 60, 60]
steady = True
Explanation: Define the number of layers, rows and columns. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface will move. There are three stress periods with a length of 200, 12, and 18 years and 1,000, 120, and 180 steps.
End of explanation
#--dis data
delr, delc = 50.0, 50.0
botm = np.array([-10., -30., -50.])
Explanation: Specify the cell size along the rows (delr) and along the columns (delc) and the top and bottom of the aquifer for the DIS package.
End of explanation
#--bas data
#--ibound - active except for the corners
ibound = np.ones((nlay, nrow, ncol), dtype=int)
ibound[:, 0, 0] = 0
ibound[:, 0, -1] = 0
ibound[:, -1, 0] = 0
ibound[:, -1, -1] = 0
#--initial head data
ihead = np.zeros((nlay, nrow, ncol), dtype=float)
Explanation: Define the IBOUND array and starting heads for the BAS package. The corners of the model are defined to be inactive.
End of explanation
#--lpf data
laytyp=0
hk=10.
vka=0.2
Explanation: Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the LPF package.
End of explanation
#--boundary condition data
#--ghb data
colcell, rowcell = np.meshgrid(np.arange(0, ncol), np.arange(0, nrow))
index = np.zeros((nrow, ncol), dtype=int)
index[:, :10] = 1
index[:, -10:] = 1
index[:10, :] = 1
index[-10:, :] = 1
nghb = np.sum(index)
lrchc = np.zeros((nghb, 5))
lrchc[:, 0] = 0
lrchc[:, 1] = rowcell[index == 1]
lrchc[:, 2] = colcell[index == 1]
lrchc[:, 3] = 0.
lrchc[:, 4] = 50.0 * 50.0 / 40.0
#--create ghb dictionary
ghb_data = {0:lrchc}
#--recharge data
rch = np.zeros((nrow, ncol), dtype=float)
rch[index == 0] = 0.0004
#--create recharge dictionary
rch_data = {0: rch}
#--well data
nwells = 2
lrcq = np.zeros((nwells, 4))
lrcq[0, :] = np.array((0, 30, 35, 0))
lrcq[1, :] = np.array([1, 30, 35, 0])
lrcqw = lrcq.copy()
lrcqw[0, 3] = -250
lrcqsw = lrcq.copy()
lrcqsw[0, 3] = -250.
lrcqsw[1, 3] = -25.
#--create well dictionary
base_well_data = {0:lrcq, 1:lrcqw}
swwells_well_data = {0:lrcq, 1:lrcqw, 2:lrcqsw}
#--swi2 data
adaptive = False
nadptmx = 10
nadptmn = 1
nu = [0, 0.025]
numult = 5.0
toeslope = nu[1] / numult #0.005
tipslope = nu[1] / numult #0.005
z1 = -10.0 * np.ones((nrow, ncol))
z1[index == 0] = -11.0
z = np.array([[z1, z1]])
iso = np.zeros((nlay, nrow, ncol), dtype=int)
iso[0, :, :][index == 0] = 1
iso[0, :, :][index == 1] = -2
iso[1, 30, 35] = 2
ssz=0.2
#--swi2 observations
obsnam = ['layer1_', 'layer2_']
obslrc=[[1, 31, 36], [2, 31, 36]]
nobs = len(obsnam)
iswiobs = 1051
Explanation: Define the boundary condition data for the model
End of explanation
#--oc data
spd = {(0,199): ['print budget', 'save head'],
(0,200): [],
(0,399): ['print budget', 'save head'],
(0,400): [],
(0,599): ['print budget', 'save head'],
(0,600): [],
(0,799): ['print budget', 'save head'],
(0,800): [],
(0,999): ['print budget', 'save head'],
(1,0): [],
(1,59): ['print budget', 'save head'],
(1,60): [],
(1,119): ['print budget', 'save head'],
(1,120): [],
(2,0): [],
(2,59): ['print budget', 'save head'],
(2,60): [],
(2,119): ['print budget', 'save head'],
(2,120): [],
(2,179): ['print budget', 'save head']}
Explanation: Create output control (OC) data using words
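The dictionary above is written out by hand for clarity; here is a sketch of a more compact alternative that builds essentially the same schedule from the nstp and save_head lists defined earlier:
spd_alt = {}
for kper, (n, every) in enumerate(zip(nstp, save_head)):
    if kper > 0:
        spd_alt[(kper, 0)] = []                       # turn saving off at period start
    for kstp in range(every - 1, n, every):
        spd_alt[(kper, kstp)] = ['print budget', 'save head']
        if kstp + 1 < n:
            spd_alt[(kper, kstp + 1)] = []            # and off again after each save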
End of explanation
modelname = 'swiex4_s1'
ml = mf.Modflow(modelname, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = mf.ModflowDis(ml, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = mf.ModflowBas(ml, ibound=ibound, strt=ihead)
lpf = mf.ModflowLpf(ml, laytyp=laytyp, hk=hk, vka=vka)
wel = mf.ModflowWel(ml, stress_period_data=base_well_data)
ghb = mf.ModflowGhb(ml, stress_period_data=ghb_data)
rch = mf.ModflowRch(ml, rech=rch_data)
swi = mf.ModflowSwi2(ml, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
adaptive=adaptive, nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc)
oc = mf.ModflowOc(ml, stress_period_data=spd)
pcg = mf.ModflowPcg(ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
Explanation: Create the model with the freshwater well (Simulation 1)
End of explanation
ml.write_input()
ml.run_model(silent=True)
Explanation: Write the simulation 1 MODFLOW input files and run the model
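If you want the notebook to stop when MODFLOW fails, note that flopy's run_model() returns a success flag that can be checked; a minimal sketch:
success, buff = ml.run_model(silent=True)
if not success:
    raise RuntimeError('MODFLOW did not terminate normally for ' + modelname)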
End of explanation
modelname2 = 'swiex4_s2'
ml2 = mf.Modflow(modelname2, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = mf.ModflowDis(ml2, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = mf.ModflowBas(ml2, ibound=ibound, strt=ihead)
lpf = mf.ModflowLpf(ml2, laytyp=laytyp, hk=hk, vka=vka)
wel = mf.ModflowWel(ml2, stress_period_data=swwells_well_data)
ghb = mf.ModflowGhb(ml2, stress_period_data=ghb_data)
rch = mf.ModflowRch(ml2, rech=rch_data)
swi = mf.ModflowSwi2(ml2, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
adaptive=adaptive, nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc)
oc = mf.ModflowOc(ml2, stress_period_data=spd)
pcg = mf.ModflowPcg(ml2, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
Explanation: Create the model with the saltwater well (Simulation 2)
End of explanation
ml2.write_input()
ml2.run_model(silent=True)
Explanation: Write the simulation 2 MODFLOW input files and run the model
End of explanation
#--read base model zeta
zfile = fu.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta'))
kstpkper = zfile.get_kstpkper()
zeta = []
for kk in kstpkper:
zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta = np.array(zeta)
#--read swi obs
zobs = np.genfromtxt(os.path.join(ml.model_ws, modelname+'.zobs'), names=True)
Explanation: Load the simulation 1 ZETA data and ZETA observations.
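Before extracting data it can help to see what the budget-style zeta file actually contains; a short sketch using the zfile object opened above:
print(zfile.get_unique_record_names())          # text records, e.g. 'ZETASRF  1'
print(len(kstpkper), 'saved (time step, stress period) pairs')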
End of explanation
#--read saltwater well model zeta
zfile2 = fu.CellBudgetFile(os.path.join(ml2.model_ws, modelname2+'.zta'))
kstpkper = zfile2.get_kstpkper()
zeta2 = []
for kk in kstpkper:
zeta2.append(zfile2.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta2 = np.array(zeta2)
#--read swi obs
zobs2 = np.genfromtxt(os.path.join(ml2.model_ws, modelname2+'.zobs'), names=True)
Explanation: Load the simulation 2 ZETA data and ZETA observations.
End of explanation
x = np.linspace(-1500, 1500, 61)
xcell = np.linspace(-1500, 1500, 61) + delr / 2.
xedge = np.linspace(-1525, 1525, 62)
years = [40, 80, 120, 160, 200, 6, 12, 18, 24, 30]
Explanation: Create arrays for the x-coordinates and the output years
End of explanation
#--figure dimensions
fwid, fhgt = 8.00, 5.50
flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925
#--line color definition
icolor = 5
colormap = plt.cm.jet #winter
cc = []
cr = np.linspace(0.9, 0.0, icolor)
for idx in cr:
cc.append(colormap(idx))
Explanation: Define figure dimensions and colors used for plotting ZETA surfaces
End of explanation
plt.rcParams.update({'legend.fontsize': 6, 'legend.frameon' : False})
fig = plt.figure(figsize=(fwid, fhgt), facecolor='w')
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
#--first plot
ax = fig.add_subplot(2, 2, 1)
#--axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5):
#--layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='{:2d} years'.format(years[idx]))
#--layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
#--legend
plt.legend(loc='lower left')
#--axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Recharge conditions', transform=ax.transAxes, va='center', ha='right', size='8')
#--second plot
ax = fig.add_subplot(2, 2, 2)
#--axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
#--layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
#--layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
#--legend
plt.legend(loc='lower left')
#--axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater well withdrawal', transform=ax.transAxes, va='center', ha='right', size='8')
#--third plot
ax = fig.add_subplot(2, 2, 3)
#--axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
#--layer 1
ax.plot(xcell, zeta2[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
#--layer 2
ax.plot(xcell, zeta2[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
#--legend
plt.legend(loc='lower left')
#--axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater and saltwater\nwell withdrawals', transform=ax.transAxes,
va='center', ha='right', size='8')
#--fourth plot
ax = fig.add_subplot(2, 2, 4)
#--axes limits
ax.set_xlim(0, 30)
ax.set_ylim(-50, -10)
t = zobs['TOTIM'][999:] / 365 - 200.
tz2 = zobs['layer1_001'][999:]
tz3 = zobs2['layer1_001'][999:]
for i in range(len(t)):
if zobs['layer2_001'][i+999] < -30. - 0.1:
tz2[i] = zobs['layer2_001'][i+999]
if zobs2['layer2_001'][i+999] < 20. - 0.1:
tz3[i] = zobs2['layer2_001'][i+999]
ax.plot(t, tz2, linestyle='solid', color='r', linewidth=0.75, label='Freshwater well')
ax.plot(t, tz3, linestyle='dotted', color='r', linewidth=0.75, label='Freshwater and saltwater well')
ax.plot([0, 30], [-30, -30], 'k', linewidth=1.0, label='_None')
#--legend
leg = plt.legend(loc='lower right', numpoints=1)
#--axes labels and text
ax.set_xlabel('Time, in years')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7');
Explanation: Recreate Figure 9 from the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/).
End of explanation |
40 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reciprocal Best Blast CDS Feature Comparisons
Introduction
We often wish to establish an equivalence between the CDS features on two genomes - by which we mean some assertion that sequence A on genome 1 is the "same thing" (in some sense) as sequence B on genome 2. This equivalence can take many practical forms (same catalytic function, same binding interaction, same role in a pathway, and so on) but, given the volume of sequence data available today, is usually established on the basis of sequence similarity. This similarity is then taken as a proxy for the actual equivalence we're interested in.
When sequencing a new pathogen genome, or obtaining a novel transcriptome, we may want to annotate the coding sequences in that genome by determining orthologs - the equivalent sequences - in some other genome.
In this notebook, we will look at three methods (there are many others, but we are constrained by time!) of identifying equivalent sequence features in genomes, in bulk.
All three methods we will consider involve BLASTP comparisons between the protein complements of a plant pathogen genome and a related non-pathogenic isolate. They can be considered to fall under three categories, and all depend on initial BLASTP comparisons.
one-way pairwise comparison - best BLASTP match
two-way pairwise comparison - reciprocal best BLASTP match
clustering - Markov clustering (MCL) of BLASTP matches
We will also need to run some Python code to process and visualise the clustering output.
Learning outcomes
Conduct BLASTP comparisons between protein complements for prokaryotes
Using Python and Pandas to collect, examine and visualise tabular format data
Identify reciprocal best BLAST matches
Visualise and interpret genome-wide reciprocal best BLAST matches.
Running cells in this notebook
<div class="alert alert-info" role="alert">
This is an interactive notebook, which means you are able to run the code that is written in each of the cells.
<br /><br />
To run the code in a cell, you should
Step1: The first thing we do is load in the BLASTP output we generated, so that we can plot some of the key features. We do that using the rbbh.read_data() function in the cell below. This puts the data into a dataframe called data_fwd.
Step2: <div class="alert alert-warning">
<b>NOTE
Step3: There are 5265 rows in this table, one for each of the query protein sequences in the P. syringae B728a annotation.
We can look at the distribution of values in the dataframe rows using the .hist() method for any column of interest. For example, data_fwd.subject_length.hist() plots a histogram of the values in the subject_length column.
<div class="alert alert-warning">
<b>NOTE
Step4: <div class="alert alert-warning">
<b>QUESTIONS
Step5: <div class="alert alert-warning">
<b>QUESTIONS
Step6: <div class="alert alert-warning">
<b>NOTE
Step7: We can inspect the dataframe of RBBH using the .head() and .describe() methods, by executing the cells below.
Step8: It is inevitable that the RBBH set will have the same or fewer protein pairs in it, than the number of proteins in the smallest of the forward and reverse protein sets. But how many proteins have been filtered in this comparison? We can find out by executing the cell below.
Step9: <div class="alert alert-warning">
<b>Approximately what proportion of best <b>BLAST</b> matches have been discarded?</b>
</div>
Visualising RBBH output
We can get a better idea of what this processing has done by looking at a visual representation of the percentage identity and coverage of RBBH, compared to the (forward) one-way matches. We can do this by executing the cells below.
First, let's look at the percentage identity of best BLAST matches
Step10: <div class="alert alert-warning">
<b>What has been the effect of excluding best matches that do not have an RBBH reverse match?</b>
</div>
Next, we can inspect the query and subject coverage of RBBH results, compared to the one-way forward BLAST matches by executing the cell below.
Step11: <div class="alert alert-warning">
<ul>
<li><b>Which one-way matches have been excluded by carrying out RBBH?</b><br />
<li><b>What is the biological significance of excluding those matches?</b>
<li><b>What would be a reasonable filter to exclude the remaining suspect matches?</b>
</ul>
</div>
Filtering RBBH output
The find_rbbh() function allows us to apply cutoff filters on percentage identity or coverage (or both) for an RBBH match - this, and visualisation of the results is done in the cells below.
<div class="alert alert-warning">
<b>NOTE
Step12: Visualising RBBH with ACT
Finally for this exercise, we will visualise the RBBH between P. syringae B728a and P. fluorescens NCIMB 11764 using ACT (as in exercise 01), comparing the output to that obtained by a BLASTN comparison of the chromosomes.
First, we need to generate an output file describing our (filtered) RBBH that ACT can read. We do this by executing the cell below. This does two things | Python Code:
%pylab inline
# Import helper module
from helpers import rbbh
Explanation: Reciprocal Best Blast CDS Feature Comparisons
Introduction
We often wish to establish an equivalence between the CDS features on two genomes - by which we mean some assertion that sequence A on genome 1 is the "same thing" (in some sense) as sequence B on genome 2. This equivalence can take many practical forms (same catalytic function, same binding interaction, same role in a pathway, and so on) but, given the volume of sequence data available today, is usually established on the basis of sequence similarity. This similarity is then taken as a proxy for the actual equivalence we're interested in.
When sequencing a new pathogen genome, or obtaining a novel transcriptome, we may want to annotate the coding sequences in that genome by determining orthologs - the equivalent sequences - in some other genome.
In this notebook, we will look at three methods (there are many others, but we are constrained by time!) of identifying equivalent sequence features in genomes, in bulk.
All three methods we will consider involve BLASTP comparisons between the protein complements of a plant pathogen genome and a related non-pathogenic isolate. They can be considered to fall under three categories, and all depend on initial BLASTP comparisons.
one-way pairwise comparison - best BLASTP match
two-way pairwise comparison - reciprocal best BLASTP match
clustering - Markov clustering (MCL) of BLASTP matches
We will also need to run some Python code to process and visualise the clustering output.
Learning outcomes
Conduct BLASTP comparisons between protein complements for prokaryotes
Using Python and Pandas to collect, examine and visualise tabular format data
Identify reciprocal best BLAST matches
Visualise and interpret genome-wide reciprocal best BLAST matches.
Running cells in this notebook
<div class="alert alert-info" role="alert">
This is an interactive notebook, which means you are able to run the code that is written in each of the cells.
<br /><br />
To run the code in a cell, you should:
<br /><br />
<ol>
<li>Place your mouse cursor in the cell, and click (this gives the cell *focus*) to make it active
<li>Hold down the <b>Shift</b> key, and press the <b>Return</b> key.
</ol>
</div>
If this is successful, you should see the input marker to the left of the cell change from
In [ ]:
to (for example)
In [1]:
and you may see output appear below the cell.
Requirements
<div class="alert alert-success">
To complete this exercise, you will need:
<ul>
<li>an active internet connection
<li>a local installation of <a href="https://blast.ncbi.nlm.nih.gov/Blast.cgi?PAGE_TYPE=BlastDocs&DOC_TYPE=Download"><b>BLAST+</b></a>
</ul>
</div>
Related online documentation/publications/software
Software
* CRB-BLAST - conditional reciprocal best BLAST
* OrthoMCL - a database of predicted orthologs obtained using MCL.
* OrthoFinder - a program for finding orthologous protein sequence families
Publications
* Aubrey et al. (2014) PLoS Genet. doi:10.1371/journal.pgen.1004365
Blogs
* On Reciprocal Best Blast Hits
One-Way Best BLAST matches (BBH)
It is still common to see one-way matches used - even if only informally, or as a first attempt - as a means of identifying equivalent proteins/features in a genome. In this section, we'll carry out a one-way BLAST search between the protein complements of the plant pathogen P. syringae B728a and its non-pathogenic relative P. fluorescens NCIMB 11764, and inspect the results graphically.
Performing the BLASTP query
We will use the blastp command at the terminal to use every protein sequence in the P. syringae B728a annotation as a query against the predicted proteome of P. fluorescens NCIMB 11764.
The BLAST databases have already been created for you to save time (using the scripts/02-cds_feature_comparisons.sh script), and the results are in the pseudomonas_blastp directory:
$ tree ./data/pseudomonas_blastp
./data/pseudomonas_blastp
├── GCF_000012245.1_ASM1224v1_protein.phr
├── GCF_000012245.1_ASM1224v1_protein.pin
├── GCF_000012245.1_ASM1224v1_protein.psq
├── GCF_000293885.2_ASM29388v3_protein.phr
├── GCF_000293885.2_ASM29388v3_protein.pin
├── GCF_000293885.2_ASM29388v3_protein.psq
├── GCF_000988485.1_ASM98848v1_protein.phr
├── GCF_000988485.1_ASM98848v1_protein.pin
└── GCF_000988485.1_ASM98848v1_protein.psq
We will use some custom settings to make our analysis easier to carry out.
<div class="alert alert-warning">
<ul>
<li> We will want to limit our matches to only the best hit, so we specify <b>-max_target_seqs 1</b>
<li> We want our output in tab-separated tabular particular format so we can import it easily into other tools (like <b>R</b> and <b>Python</b>), so use <b>-outfmt 6</b>.
<li> We want some specific non-standard columns (e.g. query sequence coverage) in that table so we can carry out some useful calculations and visualisation. We therefore specify <b>-outfmt "6 qseqid sseqid qlen slen length nident pident qcovs evalue bitscore"</b>
<li> To make the comparisons quicker, we should create <b>BLAST</b> databases for each of the three proteomes, with the <b>makeblastdb</b> command.
</ul>
</div>
To carry out the one-way BLASTP search of P. syringae B728a against P. fluorescens NCIMB 11764, we would execute the following command in the terminal:
blastp -query data/pseudomonas/GCF_000988485.1_ASM98848v1_protein.faa \
-db data/pseudomonas_blastp/GCF_000293885.2_ASM29388v3_protein \
-max_target_seqs 1 \
-outfmt "6 qseqid sseqid qlen slen length nident pident qcovs evalue bitscore" \
-out data/pseudomonas_blastp/B728a_vs_NCIMB_11764.tab
This will take a few minutes to complete, so to save time the comparison has already been made for you, with the result file being placed in data/pseudomonas_blastp/B728a_vs_NCIMB_11764.tab.
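For reference, the pre-built databases listed earlier were produced with makeblastdb along these lines; this is a sketch of what scripts/02-cds_feature_comparisons.sh does, and the exact options used there may differ:
makeblastdb -in data/pseudomonas/GCF_000988485.1_ASM98848v1_protein.faa \
            -dbtype prot \
            -out data/pseudomonas_blastp/GCF_000988485.1_ASM98848v1_protein
makeblastdb -in data/pseudomonas/GCF_000293885.2_ASM29388v3_protein.faa \
            -dbtype prot \
            -out data/pseudomonas_blastp/GCF_000293885.2_ASM29388v3_protein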
Importing and visualising the results
The Python module helpers is included in this directory, to provide useful helper functions so that we can read and view the BLASTP output generated above. To make the functions available, we import it by running the Python code cell below.
<div class="alert alert-warning">
<b>NOTE:</b> The <b>%pylab inline</b> "magic" below allows us to see plots of the <b>BLAST</b> data we load, <i>inline</i> in this notebook.
</div>
End of explanation
# Load one-way BLAST results into a data frame called data_fwd
data_fwd = rbbh.read_data("data/pseudomonas_blastp/B728a_vs_NCIMB_11764.tab")
Explanation: The first thing we do is load in the BLASTP output we generated, so that we can plot some of the key features. We do that using the rbbh.read_data() function in the cell below. This puts the data into a dataframe called data_fwd.
End of explanation
# Show first few lines of the loaded data
data_fwd.head()
# Show descriptive statistics for the table
data_fwd.describe()
Explanation: <div class="alert alert-warning">
<b>NOTE:</b> In the cell below, the <b>data.head()</b> function shows us the first few lines of the one-way <b>BLASTP</b> results, one per match; the <b>data.describe()</b> function shows us some summary data for the table.
</div>
End of explanation
# Plot a histogram of alignment lengths for the BLAST data
data_fwd.alignment_length.hist(bins=100)
# Plot a histogram of percentage identity for the BLAST data
data_fwd.identity.hist(bins=100)
# Plot a histogram of query_coverage for the BLAST data
data_fwd.query_coverage.hist(bins=100)
# Plot a histogram of percentage coverage for the BLAST data
data_fwd.subject_coverage.hist(bins=100)
Explanation: There are 5265 rows in this table, one for each of the query protein sequences in the P. syringae B728a annotation.
We can look at the distribution of values in the dataframe rows using the .hist() method for any column of interest. For example, data_fwd.subject_length.hist() plots a histogram of the values in the subject_length column.
<div class="alert alert-warning">
<b>NOTE:</b> The <b>bins=100</b> option sets the number of value bins used in the histogram
</div>
End of explanation
# Plot 2D histogram of subject sequence (match) coverage against query
# sequence coverage
rbbh.plot_hist2d(data_fwd.query_coverage, data_fwd.subject_coverage,
"one-way query COV", "one-way subject COV",
"one-way coverage comparison")
rbbh.plot_hist2d(data_fwd.query_coverage, data_fwd.identity,
"one-way query COV", "one-way match PID",
"one-way coverage/identity comparison")
Explanation: <div class="alert alert-warning">
<b>QUESTIONS:</b>
<ul>
<li><b>What size are most one-way best `BLAST` alignments?</b>
<li><b>What is the typical query coverage?</b>
<li><b>What is the typical subject coverage?</b>
<li><b>What is the typical best `BLAST` match identity?</b>
</ul>
</div>
We can view the relationship between query coverage and subject coverage, and query coverage and match identity for these one-way best BLAST hits by plotting a 2D histogram, with the helper function rbbh.plot_hist2d() in the cell below.
End of explanation
# Load one-way BLAST results into a data frame called data_fwd
data_rev = rbbh.read_data("data/pseudomonas_blastp/NCIMB_11764_vs_B728a.tab")
Explanation: <div class="alert alert-warning">
<b>QUESTIONS:</b>
<ul>
<li>**What is the query/subject coverage for most one-way best `BLAST` matches?**
<li>**Why do some one-way `BLAST` matches not have the same coverage for query and subject?**
<li>**What is the typical query coverage of a high percentage identity match?**
<li>**What is the typical query coverage of a low percentage identity match?**
</ul>
</div>
<div class="alert alert-danger" role="alert">
<b>QUESTION:</b><br />
<b>Do one-way best `BLAST` matches always identify equivalent proteins (<i>orthologs</i>)?</b>
</div>
Reciprocal (Two-Way) Best BLAST matches (RBBH)
To perform a reciprocal BLAST search between two sets of proteins S1 and S2 (say), we need to carry out the forward search of S1 vs S2, and the reverse search S2 vs S1.
Reciprocal best BLAST matches are those where the sequence G(S1) (a gene/CDS from sequence set S1) used as a query makes its best BLAST match to sequence G(S2) (a gene/CDS from sequence set S2), and when sequence G(S2) is used as a query it makes its best match to sequence G(S1) (see figure below).
We carried out the forward search above, for P. syringae B728a (our sequence set S1) against P. fluorescens NCIMB 11764 (our sequence set S2), and now we will carry out the corresponding reverse search by executing the command below at the terminal:
blastp -query data/pseudomonas/GCF_000293885.2_ASM29388v3_protein.faa \
-db data/pseudomonas_blastp/GCF_000988485.1_ASM98848v1_protein \
-max_target_seqs 1 \
-outfmt "6 qseqid sseqid qlen slen length nident pident qcovs evalue bitscore" \
-out data/pseudomonas_blastp/NCIMB_11764_vs_B728a.tab
As before, this would take a few minutes to complete, so to save some time the comparison has already been made for you, with the result file being placed in data/pseudomonas_blastp/NCIMB_11764_vs_B728a.tab.
We'll load the results into a dataframe called data_rev using the helper function rbbh.read_data() in the cell below.
End of explanation
# Calculate RBBH for the two Pseudomonas datasets
# This returns three dataframes: df1 and df2 are the forward and reverse BLAST
# results (filtered, if any filters were used), and rbbh is the dataframe of
# reciprocal best BLAST hits
df1, df2, data_rbbh = rbbh.find_rbbh(data_fwd, data_rev)
Explanation: <div class="alert alert-warning">
<b>NOTE:</b> You could inspect <b>data_rev</b> using the <b>.head()</b> and <b>.describe()</b> methods, just as you did for <b>data_fwd</b>
</div>
The rbbh helper module provides a function called find_rbbh() which calculates reciprocal best BLAST hits from forward and reverse BLAST searches. The calculation can be performed by executing the cell below.
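For intuition, the RBBH criterion amounts to an inner join of the two best-hit tables. The sketch below is conceptual only - the column names query_id and subject_id are placeholders, and this is not the actual implementation inside the helper:
import pandas as pd
def rbbh_sketch(fwd, rev):
    # keep only pairs where the reverse best hit points back at the original query;
    # both inputs are assumed to already contain one best hit per query
    return pd.merge(fwd, rev,
                    left_on=['query_id', 'subject_id'],
                    right_on=['subject_id', 'query_id'])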
End of explanation
# Peek at the first few lines of the RBBH results
data_rbbh.head()
# Show summary statistics for RBBH
data_rbbh.describe()
Explanation: We can inspect the dataframe of RBBH using the .head() and .describe() methods, by executing the cells below.
End of explanation
# Report the size of each of the forward and reverse input, and rbbh output dataframes
s = '\n'.join(["Forward BLAST input: {0} proteins",
"Reverse BLAST input: {1} proteins",
"RBBH output: {2} proteins"])
print(s.format(len(data_fwd), len(data_rev), len(data_rbbh)))
print("(min difference = {0})".format(min(len(data_fwd), len(data_rev)) - len(data_rbbh)))
Explanation: It is inevitable that the RBBH set will have the same or fewer protein pairs in it, than the number of proteins in the smallest of the forward and reverse protein sets. But how many proteins have been filtered in this comparison? We can find out by executing the cell below.
End of explanation
# Histogram of forward match percentage identity (one-way)
data_fwd.identity.hist(bins=100)
# Histogram of forward match percentage identity (RBBH)
data_rbbh.identity_x.hist(bins=100)
Explanation: <div class="alert alert-warning">
<b>Approximately what proportion of best <b>BLAST</b> matches have been discarded?</b>
</div>
Visualising RBBH output
We can get a better idea of what this processing has done by looking at a visual representation of the percentage identity and coverage of RBBH, compared to the (forward) one-way matches. We can do this by executing the cells below.
First, let's look at the percentage identity of best BLAST matches:
End of explanation
# Plot 2D histograms of query coverage against subject coverage for the
# one-way forward matches, and those retained after calculating RBBH
rbbh.plot_hist2d(data_fwd.query_coverage, data_fwd.subject_coverage,
"one-way query COV", "one-way subject COV",
"one-way coverage comparison")
rbbh.plot_hist2d(data_rbbh.query_coverage_x, data_rbbh.subject_coverage_x,
"RBBH (fwd) query COV", "RBBH (fwd) subject COV",
"RBBH_comparisons.ipynbH coverage comparison")
Explanation: <div class="alert alert-warning">
<b>What has been the effect of excluding best matches that do not have an RBBH reverse match?</b>
</div>
Next, we can inspect the query and subject coverage of RBBH results, compared to the one-way forward BLAST matches by executing the cell below.
End of explanation
# Calculate ID and coverage-filtered RBBH for the two Pseudomonas datasets
# This returns three dataframes: df1_filtered and df2_filtered are the
# filtered forward and reverse BLAST results , and rbbh_filtered is the
# dataframe of reciprocal best BLAST hits
df1_filtered, df2_filtered, rbbh_filtered = rbbh.find_rbbh(data_fwd, data_rev, pid=40, cov=70)
# Histogram of forward match percentage identity (RBBH, filtered)
rbbh_filtered.identity_x.hist(bins=100)
# Plot 2D histograms of query coverage against subject coverage for the
# one-way forward matches retained after calculating RBBH and
# filtering on percentage identity and coverage
rbbh.plot_hist2d(rbbh_filtered.query_coverage_x, rbbh_filtered.subject_coverage_x,
"filtered RBBH (fwd) query COV", "filtered_RBBH (fwd) subject COV",
"filtered RBBH coverage comparison")
Explanation: <div class="alert alert-warning">
<ul>
<li><b>Which one-way matches have been excluded by carrying out RBBH?</b><br />
<li><b>What is the biological significance of excluding those matches?</b>
<li><b>What would be a reasonable filter to exclude the remaining suspect matches?</b>
</ul>
</div>
Filtering RBBH output
The find_rbbh() function allows us to apply cutoff filters on percentage identity or coverage (or both) for an RBBH match - this, and visualisation of the results is done in the cells below.
<div class="alert alert-warning">
<b>NOTE:</b> There is a software tool (<a href="https://github.com/cboursnell/crb-blast"><b>CRB-BLAST</b></a> - Conditional Reciprocal Best BLAST) available that calculates reciprocal best matches, and statistically evaluates an 'optimal' E-value cutoff, in order to improve accuracy of ortholog assignment.
</div>
End of explanation
# Read feature locations for each Pseudomonas file
features = rbbh.read_genbank("data/pseudomonas/GCF_000988485.1_ASM98848v1_genomic.gbff",
"data/pseudomonas/GCF_000293885.2_ASM29388v3_genomic.gbff")
# Write a .crunch file of filtered RBBH for the Pseudomonas comparisons
rbbh.write_crunch(rbbh_filtered, features,
fwd="GCF_000988485.1_ASM98848v1_genomic",
rev="GCF_000293885.2_ASM29388v3_genomic",
outdir="data/pseudomonas_blastp",
filename="B728a_rbbh_NCIMB_11764.crunch")
Explanation: Visualising RBBH with ACT
Finally for this exercise, we will visualise the RBBH between P. syringae B728a and P. fluorescens NCIMB 11764 using ACT (as in exercise 01), comparing the output to that obtained by a BLASTN comparison of the chromosomes.
First, we need to generate an output file describing our (filtered) RBBH that ACT can read. We do this by executing the cell below. This does two things:
Gets the locations of protein features on the chromosome of each organism from a .gbff file, using the helper function read_genbank(), putting them in a variable called features.
Writes the RBBH to a .crunch format file (pseudomonas_blastp/B728a_rbbh_NCIMB_11764.crunch), which ACT can read.
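A quick peek at the file we just wrote (a sketch, assuming write_crunch placed it at outdir/filename as specified above):
with open("data/pseudomonas_blastp/B728a_rbbh_NCIMB_11764.crunch") as fh:
    for _ in range(5):
        print(fh.readline().rstrip())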
End of explanation |
41 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Poynting-Robertson drag
Here we will examine a simple orbital dynamics problem with Poynting-Robertson drag in order to characterize the slimplectic Galerkin-Gauss-Lobatto integrator. The Lagrangian for a central gravitational potential (with mass $M_\odot$) is given by
$$ L = \frac{1}{2} m \dot{\mathbf q}^2 + (1-\beta)\frac{GM_{\odot}m}{r}$$
with nonconservative potential given by (up to terms linear in $v/c$)
$$ K = -\frac{\beta G M_\odot m }{c r_+^2} \left[\left(\delta_{ij} + \frac{q_{i+} q_{j+}}{r_+^2}\right)\dot{q}_+^i q_-^j\right] = -\frac{\beta G M_\odot m }{c r_+^2} \left[\dot{\mathbf q}_+ \cdot {\mathbf q}_- + \frac{1}{r_+^2}(\dot{\mathbf q}_+ \cdot {\mathbf q}_+)({\mathbf q}_+ \cdot {\mathbf q}_-)\right]$$
where
$$\beta \simeq \frac{3L_\odot}{8\pi c \rho G M_\odot d} \simeq 0.0576906 \left(\frac{\rho}{2\, {\rm g}\,{\rm cm}^{-3}} \right)^{-1} \left(\frac{d}{10^{-3} {\rm cm}} \right)^{-1}.$$
Here, $L_\odot$ is the sun's luminosity, $c$ is the speed of light, $\rho$ is the density of the dust grain, and $d$ is the diameter of the dust grain.
We adopt Cartesian coordinates for the orbital dynamics with
${\mathbf q} = x \hat{\mathbf x} + y \hat{\mathbf y} + z \hat{\mathbf z}$, with $r = ({\mathbf q} \cdot {\mathbf q})^{1/2}$.
Step1: Slimplectic integration
Step2: We don't have an exact solution to compare our numerical results against. In lieu of this, we use a 6th order implicit slimplectic integrator as our "fiducial" solution for comparisons made below.
Step3: Runge-Kutta integration
Generate the 2nd and 4th order Runge-Kutta solutions to compare below with output from the slimplectic integrators.
Step4: Comparison plots
Plot the $x$-component of the orbital vector ${\mathbf q}(t)$ for the 2nd and 4th order slimplectic and RK integrators along with the "fiducial" 6th order slimplectic solution.
Step5: Let's plot the orbital phase of the dust particle for the different integrators.
Step6: Let's see how the dust grain's orbital energy changes with time according to the different orders and integration schemes. The energy is
$$E = \frac{1}{2} m \dot{\mathbf q}^2 - (1-\beta)\frac{GM_{\odot}m}{r}.$$
To quantify the errors incurred by discretization and subsequent numerical integration, we define the fractional or relative energy difference as $\delta E / E = ( E_X - E_6 )/ E_6$, where $E_X$ is the energy as measured by integrator $X$ with $X \in \{ {\rm Slim2,~Slim4,~RK2,~RK4} \}$, relative to the 6th order slimplectic result $E_6$.
Step7: We can also look at how the orbital eccentricity changes depending on the integration scheme and order. | Python Code:
%matplotlib inline
from __future__ import print_function
import numpy as np, matplotlib.pyplot as plt
import slimplectic, orbit_util as orbit
plot_path = './'
# Parameters
G = 39.478758435 #(in AU^3/M_sun/yr^2)
M_Sun = 1.0 #(in solar masses)
rho = 2.0 #(in g/cm^3)
d = 5.0e-3 #(in cm)
beta = 0.0576906*(2.0/rho)*(1.0e-3/d) #(dimensionless)
c = 63241.3 #(in AU/yr)
m = 1.
Explanation: Poynting-Robertson drag
Here we will examine a simple orbital dynamics problem with Poynting-Robertson drag in order to characterize the slimplectic Galerkin-Gauss-Lobatto integrator. The Lagrangian for a central gravitational potential (with mass $M_\odot$) is given by
$$ L = \frac{1}{2} m \dot{\mathbf q}^2 + (1-\beta)\frac{GM_{\odot}m}{r}$$
with nonconservative potential given by (up to terms linear in $v/c$)
$$ K = -\frac{\beta G M_\odot m }{c r_+^2} \left[\left(\delta_{ij} + \frac{q_{i+} q_{j+}}{r_+^2}\right)\dot{q}_+^i q_-^j\right] = -\frac{\beta G M_\odot m }{c r_+^2} \left[\dot{\mathbf q}_+ \cdot {\mathbf q}_- + \frac{1}{r_+^2}(\dot{\mathbf q}_+ \cdot {\mathbf q}_+)({\mathbf q}_+ \cdot {\mathbf q}_-)\right]$$
where
$$\beta \simeq \frac{3L_\odot}{8\pi c \rho G M_\odot d} \simeq 0.0576906 \left(\frac{\rho}{2\, {\rm g}\,{\rm cm}^{-3}} \right)^{-1} \left(\frac{d}{10^{-3} {\rm cm}} \right)^{-1}.$$
Here, $L_\odot$ is the sun's luminosity, $c$ is the speed of light, $\rho$ is the density of the dust grain, and $d$ is the diameter of the dust grain.
We adopt Cartesian coordinates for the orbital dynamics with
${\mathbf q} = x \hat{\mathbf x} + y \hat{\mathbf y} + z \hat{\mathbf z}$, with $r = ({\mathbf q} \cdot {\mathbf q})^{1/2}$.
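For the grain used in this notebook ($\rho = 2\,{\rm g\,cm^{-3}}$, $d = 5\times10^{-3}\,{\rm cm}$), the expression for $\beta$ gives roughly 0.0115; a one-line check mirroring the code below:
print(0.0576906 * (2.0 / 2.0) * (1.0e-3 / 5.0e-3))   # ~0.0115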
End of explanation
# Create an instance of the GalerkinGaussLobatto class and call it `pr` for Poynting-Robinson
# We will focus on motion in the x-y plane since the direction of the orbital angular momentum
# can be shown to be preserved analytically. All integrators considered here preserve this
# except for the 2nd order implicit slimplectic integrator. We shall not consider this further here.
pr = slimplectic.GalerkinGaussLobatto('t', ['x', 'y'], ['vx', 'vy'])
# Define the conservative $L$ and nonconservative $K$ parts of the total Lagrangian $\Lambda$
# We take the dust particle to have unit mass.
L = 0.5*np.dot(pr.v, pr.v) + (1.0 - beta)*G*M_Sun/np.dot(pr.q, pr.q)**0.5
K = np.dot(pr.vp, pr.qm) + np.dot(pr.vp, pr.qp)*np.dot(pr.qp, pr.qm)/np.dot(pr.qp, pr.qp)
K *= -beta*G*M_Sun/c/np.dot(pr.qp, pr.qp)
# Discretize total Lagrangian using a 2nd order (r=0) implicit scheme
pr.discretize(L, K, 0, method='implicit')
# Specify time samples at which the numerical solution is to be given and provide initial data.
# We take the initial orbital parameters to be given by:
# a=1, e=0.2, i=0, omega=0, Omega=0, M=0
q0, v0 = orbit.Calc_Cartesian(1.0, 0.2, 0.0, 0.0, 0.0, 0.0, (1.0-beta)*G*M_Sun)
pi0 = v0 # Dust taken to have unit mass
# Time samples (in years)
t_end = 6000
dt = 0.01
t = np.arange(0, t_end+dt, dt)
# Now integrate the 2nd order slimplectic integrator
q_slim2, pi_slim2 = pr.integrate(q0[:2], pi0[:2], t)
# For a 4th order (r=1) implicit scheme we run
pr.discretize(L, K, 1, method='implicit')
# ...and then integrate to get the corresponding numerical solution
q_slim4, pi_slim4 = pr.integrate(q0[:2], pi0[:2], t)
Explanation: Slimplectic integration
End of explanation
pr.discretize(L, K, 2, method='implicit') # 6th order is r=2
q_slim6, pi_slim6 = pr.integrate(q0[:2], pi0[:2], t)
Explanation: We don't have an exact solution to compare our numerical results against. In lieu of this, we use a 6th order implicit slimplectic integrator as our "fiducial" solution for comparisons made below.
End of explanation
# Instantiate the 2nd and 4th order Runge-Kutta classes
rk2 = slimplectic.RungeKutta2()
rk4 = slimplectic.RungeKutta4()
# Define the derivative operator
def dydt(time, y):
deriv = np.zeros(4)
[q_x, q_y, v_x, v_y] = y
r = (q_x*q_x + q_y*q_y)**0.5
deriv[0] = v_x
deriv[1] = v_y
deriv[2] = -(1. - beta)*G*M_Sun*q_x/(r*r*r)
deriv[2] -=(beta*G*M_Sun/(c*r*r))*(v_x + q_x*(q_x*v_x + q_y*v_y)/(r*r))
deriv[3] = -(1. - beta)*G*M_Sun*q_y/(r*r*r)
deriv[3] -=(beta*G*M_Sun/(c*r*r))*(v_y + q_y*(q_x*v_x + q_y*v_y)/(r*r))
return deriv
# Integrate
q_rk2, v_rk2 = rk2.integrate(q0[:2], v0[:2], t, dydt)
q_rk4, v_rk4 = rk4.integrate(q0[:2], v0[:2], t, dydt)
# Please note that q and pi are outputs of the slimplectic integration,
# while q and v are output from the Runge-Kutta integrators.
Explanation: Runge-Kutta integration
Generate the 2nd and 4th order Runge-Kutta solutions to compare below with output from the slimplectic integrators.
End of explanation
fig1 = plt.figure(figsize=(12,5), dpi=800)
fig1.subplots_adjust(wspace=0.05)
ax1a = fig1.add_subplot(1,4,1)
ax1a.set_ylim(-1.5, 1.5)
ax1a.set_xlim(0,3)
ax1a.set_xticks([0,1,2])
ax1a.plot(t, q_slim2[0], 'r-', linewidth=2.0, rasterized=True)
ax1a.plot(t, q_slim4[0], color='orange', linestyle='-', linewidth=2.0, rasterized=True)
ax1a.plot(t, q_rk2[0], 'g--', linewidth=2.0, rasterized=True)
ax1a.plot(t, q_rk4[0], 'b--', linewidth=2.0, rasterized=True)
ax1a.plot(t, q_slim6[0], 'k:', linewidth=2.0, rasterized=True)
ax1b = fig1.add_subplot(1,4,(2,3))
plt.setp(ax1b.get_yticklabels(), visible=False)
ax1b.set_ylim(-1.5, 1.5)
ax1b.set_xlim(3,5995)
ax1b.set_xticks([1000, 2000, 3000, 4000, 5000])
ax1b.plot(t, q_slim2[0], 'r-', linewidth=2.0, alpha=.5, rasterized=True)
ax1b.plot(t, q_slim4[0], color='orange', linestyle='-', linewidth=2.0, alpha=.5, rasterized=True)
ax1b.plot(t, q_rk2[0], 'g--', linewidth=2.0, alpha=.5, rasterized=True)
ax1b.plot(t, q_rk4[0], 'b--', linewidth=2.0, alpha=.5, rasterized=True)
ax1c = fig1.add_subplot(1,4,4)
plt.setp(ax1c.get_yticklabels(), visible=False)
ax1c.set_ylim(-1.5, 1.5)
ax1c.set_xlim(5997,6000)
ax1c.set_xticks([5998, 5999])
ax1c.get_xaxis().get_major_formatter().set_useOffset(False)
ax1c.plot(t, q_slim2[0], 'r-', linewidth=2.0, rasterized=True)
ax1c.plot(t, q_slim4[0], color='orange', linestyle='-', linewidth=2.0, rasterized=True)
ax1c.plot(t, q_rk2[0], 'g--', linewidth=2.0, rasterized=True)
ax1c.plot(t, q_rk4[0], 'b--', linewidth=2.0, rasterized=True)
ax1c.plot(t, q_slim6[0], 'k:', linewidth=2.0, rasterized=True)
ax1a.tick_params(axis='both', which='major', labelsize=16)
ax1b.tick_params(axis='both', which='major', labelsize=16)
ax1c.tick_params(axis='both', which='major', labelsize=16)
ax1b.set_xlabel('Time, $t$ [yr]', fontsize=18)
ax1a.set_ylabel('$x$-position [AU]', fontsize=18);
#fig1.savefig(plot_path + "PRDrag_xLong.pdf", transparent=True,bbox_inches='tight')
Explanation: Comparison plots
Plot the $x$-component of the orbital vector ${\mathbf q}(t)$ for the 2nd and 4th order slimplectic and RK integrators along with the "fiducial" 6th order slimplectic solution.
End of explanation
phi_slim2 = orbit.phase(q_slim2[0], q_slim2[1])
phi_slim4 = orbit.phase(q_slim4[0], q_slim4[1])
phi_slim6 = orbit.phase(q_slim6[0], q_slim6[1])
phi_rk2 = orbit.phase(q_rk2[0], q_rk2[1])
phi_rk4 = orbit.phase(q_rk4[0], q_rk4[1])
fig1_phase = plt.figure(figsize=(12,5), dpi=300)
ax1d = fig1_phase.add_subplot(111)
ax1d.plot(t, phi_slim2, 'r-', linewidth=2.0, rasterized = True, label='2nd order Slimplectic');
ax1d.plot(t, phi_slim4, '-', color = 'orange', linewidth=2.0, rasterized = True, label='4th order Slimplectic');
ax1d.plot(t, phi_rk2, 'g--', linewidth=2.0, rasterized = True, label='RK2');
ax1d.plot(t, phi_rk4, 'b--', linewidth=2.0, rasterized = True, label='RK4');
ax1d.plot(t, phi_slim6, 'k:', linewidth=2.0, rasterized = True, label='6th order Slimplectic');
ax1d.set_xlabel('Time, $t$ [yr]', fontsize = 18);
ax1d.set_ylabel('Orbital phase [rad]', fontsize=18);
ax1d.legend(loc='upper left', prop={'size':15});
ax1d.tick_params(axis='both', which='major', labelsize=16)
fig1_phase_errs = plt.figure(figsize=(12,5), dpi=300)
ax1e = fig1_phase_errs.add_subplot(111)
ax1e.loglog(t, np.abs(phi_slim2 - phi_slim6), 'r-', linewidth=2.0, rasterized = True);
ax1e.loglog(t, np.abs(phi_slim4 - phi_slim6), '-', color = 'orange', linewidth=2.0, rasterized = True);
ax1e.loglog(t, np.abs(phi_rk2 - phi_slim6), 'g--', linewidth=2.0, rasterized = True);
ax1e.loglog(t, np.abs(phi_rk4 - phi_slim6), 'b--', linewidth=2.0, rasterized = True);
ax1e.text(2, 2e-1, r'Slim2', fontsize = 15, color = 'r', rotation=8)
ax1e.text(2, 1e-5, r'Slim4', fontsize = 15, color = 'orange', rotation=9)
ax1e.text(2e2, 1e3, r'RK2', fontsize = 15, color = 'green', rotation=18)
ax1e.text(2, 3e-4, r'RK4', fontsize = 15, color = 'blue', rotation=17)
ax1e.set_yticks([1e-8, 1e-6, 1e-4, 1e-2, 1, 1e2, 1e4]);
ax1e.tick_params(axis='both', which='major', labelsize=16)
ax1e.set_xlabel('Time, $t$ [yr]', fontsize = 18);
ax1e.set_ylabel('Absolute phase error, $|\delta \phi|$ [rad]', fontsize = 18);
Explanation: Let's plot the orbital phase of the dust particle for the different integrators.
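For intuition, orbit.phase is essentially a continuous (unwrapped) polar angle in the orbital plane; something along these lines would give a very similar curve (a sketch, not the library implementation):
phi_manual = np.unwrap(np.arctan2(q_slim2[1], q_slim2[0]))   # unwrapped polar angle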
End of explanation
# Energy function
def Energy(q, v):
return 0.5*m*(v[0]**2+v[1]**2) - (1.-beta)*G*M_Sun*m/np.sqrt(q[0]**2+q[1]**2)
# Energies from different integrators
E_slim2 = Energy(q_slim2, pi_slim2/m)
E_slim4 = Energy(q_slim4, pi_slim4/m)
E_slim6 = Energy(q_slim6, pi_slim6/m)
E_rk2 = Energy(q_rk2, v_rk2)
E_rk4 = Energy(q_rk4, v_rk4)
fig2 = plt.figure(figsize=(12,5), dpi=500)
ax2 = fig2.add_subplot(1,1,1)
ax2.set_xlim(0.01, 6000)
ax2.loglog(t, np.abs(E_slim2/E_slim6-1.), 'r-', linewidth=2.0, rasterized=True)
ax2.loglog(t, np.abs(E_slim4/E_slim6-1.), '-', color='orange', linewidth=2.0, rasterized=True)
ax2.loglog(t, np.abs(E_rk2/E_slim6-1.), 'g--', linewidth=2.0, rasterized=True)
ax2.loglog(t, np.abs(E_rk4/E_slim6-1.), 'b--', linewidth=2.0, rasterized=True)
ax2.set_xlabel('Time, $t$ [yr]', fontsize=18)
ax2.set_ylabel('Fractional Energy Error, $\delta E/E_6$', fontsize=18)
ax2.text(15, 1.8e-4, r'$2^{nd}$ order Slimplectic', fontsize=15, color='black')
ax2.text(15, 0.5e-8, r'$4^{th}$ order Slimplectic', fontsize=15, color='black')
ax2.text(30, 1e-5, r'$4^{th}$ order RK', fontsize=15, color='blue', rotation = 10)
ax2.text(30, 1.5e-1, r'$2^{nd}$ order RK', fontsize=15, color='g', rotation=10)
ax2.text(0.015, 1e-1, r'$\Delta t = 0.01$ yr', fontsize=18, color='black')
ax2.tick_params(axis='both', which='major', labelsize=16)
ax2.set_yticks([1e-12, 1e-9, 1e-6, 1e-3, 1e0]);
#fig2.savefig(plot_path + "PRDrag_E_errorLong.pdf", transparent=True,bbox_inches='tight')
Explanation: Let's see how the dust grain's orbital energy changes with time according to the different orders and integration schemes. The energy is
$$E = \frac{1}{2} m \dot{\mathbf q}^2 - (1-\beta)\frac{GM_{\odot}m}{r}.$$
To quantify the errors incurred by discretization and subsequent numerical integration, we define the fractional or relative energy difference as $\delta E / E = ( E_X - E_6 )/ E_6$, where $E_X$ is the energy as measured by integrator $X$ with $X \in \{ {\rm Slim2,~Slim4,~RK2,~RK4} \}$, relative to the 6th order slimplectic result $E_6$.
End of explanation
# Compute the eccentricties
def ecc(q, v, t):
Q = np.vstack([q, np.zeros(t.size)])
V = np.vstack([v, np.zeros(t.size)])
qT, vT = np.transpose(Q), np.transpose(V)
return np.array( [orbit.Calc_e(qT[ii], vT[ii], (1.-beta)*G*M_Sun) for ii in range(t.size)])
e_slim2 = ecc(q_slim2, pi_slim2/m, t)
e_slim4 = ecc(q_slim4, pi_slim4/m, t)
e_slim6 = ecc(q_slim6, pi_slim6/m, t)
e_rk2 = ecc(q_rk2, v_rk2, t)
e_rk4 = ecc(q_rk4, v_rk4, t)
fig3 = plt.figure(figsize=(12,5), dpi=500)
ax3 = fig3.add_subplot(1,1,1)
ax3.set_xlim(0.01, 6000)
ax3.loglog(t, np.abs(e_slim2/e_slim6-1.), 'r-', linewidth=2.0, rasterized=True)
ax3.loglog(t, np.abs(e_slim4/e_slim6-1.), '-', color='orange', linewidth=2.0, rasterized=True)
ax3.loglog(t, np.abs(e_rk2/e_slim6-1.), 'g--', linewidth=2.0, rasterized=True)
ax3.loglog(t, np.abs(e_rk4/e_slim6-1.), 'b--', linewidth=2.0, rasterized=True)
ax3.set_xlabel('Time, $t$ [yr]', fontsize=18)
ax3.set_ylabel('Frac\'l Eccentricity Error, $\delta e/e_6$', fontsize=18)
ax3.text(20, 1.8e-3, r'$2^{nd}$ order Slimplectic', fontsize=15, color='black')
ax3.text(20, 0.5e-7, r'$4^{th}$ order Slimplectic', fontsize=15, color='black')
ax3.text(40, 1e-4, r'$4^{th}$ order RK', fontsize=15, color='blue', rotation = 10)
ax3.text(40, 1e0, r'$2^{nd}$ order RK', fontsize=15, color='g', rotation=10)
ax3.tick_params(axis='both', which='major', labelsize=16)
ax3.set_yticks([1e-12, 1e-9, 1e-6, 1e-3, 1e0])
ax3.text(0.015, 1e0, r'$\Delta t = 0.01$ yr', fontsize=18, color='black');
#fig3.savefig(plot_path + "PRDrag_Ecc_errorLong.pdf", transparent=True,bbox_inches='tight')
Explanation: We can also look at how the orbital eccentricity changes depending on the integration scheme and order.
End of explanation |
42 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
General Instructions
For full credit, you must have the following items for each problem
Step1: zM Test because it's a single sample against a normal population with known parameters
Sample is from population distribution
$p = 0.39$ and $\alpha = 0.05$
Do not reject
Step2: zM Test because it's a single sample against a normal population with known parameters
Sample is from population distribution
$p = ~0.0$ and $\alpha = 0.05$
Reject
Step3: t-test because it's a set of samples against a normal population with known mean but unknown standard deviation
Samples are from population distribution
$p = 0.259$ and $\alpha = 0.05$
Do not reject
Step4: Sum of ranks test because these are two unpaired sets of samples
Samples are from the same distribution
$p = 0.95$ and $\alpha = 0.05$
Do not reject
Step5: Signed rank test because it's a set of paired data
Samples are from the same distribution
$p = 0.02$ and $\alpha = 0.05$
Reject | Python Code:
import scipy.stats as ss
import numpy as np
Z = (1070 - 1064) / 7
p = 1 - (ss.norm.cdf(Z) - ss.norm.cdf(-Z))
print(p)
Explanation: General Instructions
For full credit, you must have the following items for each problem:
[1 point] Describe what and why the method you're using is applicable. For example, 'I chose the signed rank test because these are two matched datasets describing one measurement'
[1 point] Write out the null hypothesis. For example, 'The null hypothesis is that the two measurements sets came from the same population (synonymous with probability distribution)'
[1 point] Report the p-value and your alpha value (significance level)
[1 point] if you reject or not reject the null hypothesis and answer the question
Put your work into the python cell and your answer to the questions into the markdown cell
Problem 1
You have a sample of an unknown metal with a melting point of $1,070^\circ{}$ C. You know that gold has a melting point of $1,064^\circ{}$ C and your measurements have a standard deviation of $7^\circ{}$ C. Is the unknown metal likely to be gold?
End of explanation
Z = (3542 - 2341) / 120
p = 1 - (ss.norm.cdf(Z) - ss.norm.cdf(-Z))
print(p)
Explanation: zM Test because it's a single sample against a normal population with known parameters
Sample is from population distribution
$p = 0.39$ and $\alpha = 0.05$
Do not reject: could be gold
Problem 2
Historically your taxes have had a population mean of \$3,452 and a standard deviation of \$120. This year your taxes are \$2341. Should you be concerned you made a mistake or does this appear to be a usual amount?
End of explanation
d = [7.5, 10 + 20/60, 8 + 25 / 60, 7 + 45/60, 9 + 20/60]
T = (np.mean(d) - 8) / (np.std(d, ddof=1) / np.sqrt(len(d)))
p = 1 - (ss.t.cdf(T, df=len(d)) - ss.t.cdf(-T, df=len(d)))
print(p)
Explanation: zM Test because it's a single sample against a normal population with known parameters
Sample is from population distribution
$p = ~0.0$ and $\alpha = 0.05$
Reject: This is an unusual amount
Problem 3
Usually you run an 8 minute mile. After training with a new program for 8 weeks, your latest results are a 7:30 mile, a 10:20 mile, a 8:25 mile, a 7:45 mile and a 9:20 mile. Has your new program made a significant change?
End of explanation
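As a cross-check, SciPy's built-in one-sample t-test gives the statistic and two-sided p-value in one call (a sketch reusing the d list defined above; note it uses n-1 degrees of freedom, so the p-value may differ slightly from the manual calculation):
t_stat, p_val = ss.ttest_1samp(d, 8)
print(p_val)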
prior = [0.96, 0.97, 0.92, 0.88, 0.99]
post = [0.97, 0.96, 0.95, 0.97, 0.95, 0.85, 0.98, 0.77, 0.99, 0.97]
### BEGIN SOLUTION
ss.ranksums(prior, post)
### END SOLUTION
Explanation: t-test because it's a set of samples against a normal population with known mean but unknown standard deviation
Samples are from population distribution
$p = 0.259$ and $\alpha = 0.05$
Do not reject: There is not a significant change
Problem 4
Your manufacturing plant has made significant investments in improving quality control to improve yields. Your job is to determine whether these investments have improved yields. Results on yield are available for the last 10 batches and for 5 batches from before the changes. Did these quality control improvements significantly change yields?
End of explanation
control = [2, 0, 3, 4, 0, 2, 6, 3, 11, 4, 0, 4]
drug = [1, 0, 3, 2, 1, 0, 1, 2, 4, 2, 1, 2]
### BEGIN SOLUTION
import scipy.stats as ss
ss.wilcoxon(control, drug)
### END SOLUTION
Explanation: Sum of ranks test because these are two unpaired sets of samples
Samples are from the same distribution
$p = 0.95$ and $\alpha = 0.05$
Do not reject: There is no significant difference in yields
Problem 5
You are doing the statistical analysis for the efficacy of a new acne treatment. Each patient applies a control solution on half their face and a drug-containing solution on the other half. After 4 weeks, they report the number of pimples on both sides. Is the drug effective?
End of explanation
#11 is put into the interval of "extreme" values
# Or think of 11 as being in the interval that makes our
# estimate more conservative
p = 1 - ss.poisson.cdf(11, 0.1 * 52)
print(p)
Explanation: Signed rank test because it's a set of paired data
Samples are from the same distribution
$p = 0.02$ and $\alpha = 0.05$
Reject: There is a significant change after applying the drug
Problem 6
9 out of 10 professors recommend Colgate. After polling 52 professors at the University of Rochester, 11 do not recommend Colgate. Are the UR faculty significantly different from those at most other universities?
End of explanation |
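For comparison, the same tail probability can also be computed with the exact binomial distribution rather than the Poisson approximation used above (a sketch; under the null, each of the 52 professors independently does not recommend with probability 0.1):
# P(X >= 11) for X ~ Binomial(n=52, p=0.1)
p_exact = 1 - ss.binom.cdf(10, 52, 0.1)
print(p_exact)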
43 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started with the Stackdriver Monitoring API
Cloud Datalab provides an environment for working with your data. This includes data that is being managed within the Stackdriver Monitoring API. This notebook introduces some of the APIs that Cloud Datalab provides for working with the monitoring data, and allows you to try them out on your own project.
The main focus of this API is to allow you to query time series data for your monitored resources. The time series and its metadata are returned as pandas DataFrame objects. pandas is a widely used library for data manipulation, and is well suited to working with time series data.
Note
Step1: First, list supported options on the Stackdriver magic %sd
Step2: Let's see what we can do with the monitoring command
Step3: List names of Compute Engine CPU metrics
Here we use IPython cell magics to list the CPU metrics. The Labels column shows that instance_name is a metric label.
Step4: List monitored resource types related to GCE
Step5: Querying time series data
The Query class allows users to query and access the monitoring time series data.
Many useful methods of the Query class are actually defined by the base class, which is provided by the google-cloud-python library. These methods include
Step6: Initializing the query
During initialization, the metric type and the time interval need to be specified. For interactive use, the metric type has a default value. The simplest way to specify a time interval that ends now is to use the arguments days, hours, and minutes.
In the cell below, we initialize the query to load the time series for CPU Utilization for the last two hours.
Step7: Getting the metadata
The method metadata() returns a QueryMetadata object. It contains the following information about the time series matching the query
Step8: Reading the instance names from the metadata
Next, we read in the instance names from the metadata, and use it in filtering the time series data below. If there are no GCE instances in this project, the cells below will raise errors.
Step9: Filtering by metric label
We first filter query_cpu defined earlier to include only the first instance. Next, calling as_dataframe gets the results from the monitoring API, and converts them into a pandas DataFrame.
Step10: Displaying the time series as a linechart
We can plot the time series data by calling the plot method of the dataframe. The pandas library uses matplotlib for plotting, so you can learn more about it here.
Step11: Aggregating the query
You can aggregate or summarize time series data along various dimensions.
* In the first stage, data in a time series is aligned to a specified period.
* In the second stage, data from multiple time series is combined, or reduced, into one time series.
Not all alignment and reduction options are applicable to all time series, depending on their metric type and value type. Alignment and reduction may change the metric type or value type of a time series.
Aligning the query
For multiple time series, aligning the data is recommended. Aligned data is more compact to read from the Monitoring API, and lends itself better to visualizations.
The alignment period can be specified using the arguments hours, minutes, and seconds. In the cell below, we do the following
Step12: Reducing the query
In order to combine the data across multiple time series, the reduce() method can be used. The fields to be retained after aggregation must be specified in the method.
For example, to aggregate the results by the zone, 'resource.zone' can be specified.
Step13: Displaying the time series as a heatmap
Let us look at the time series at the instance level as a heatmap. A heatmap is a compact representation of the data, and can often highlight patterns.
The diagram below shows the instances along rows, and the timestamps along columns.
Step14: Multi-level headers
If you don't provide any labels to as_dataframe, it returns all the resource and metric labels present in the time series as a multi-level header.
This allows you to filter, and aggregate the data more easily.
Step15: Filter the dataframe
Let us filter the multi-level dataframe based on the common prefix. Applying the filter will look across all column headers.
Step16: Aggregate columns in the dataframe
Here, we aggregate the multi-level dataframe at the zone level. This is similar to applying reduction using 'REDUCE_MEAN' on the field 'resource.zone'. | Python Code:
# set_datalab_project_id('my-project-id')
Explanation: Getting started with the Stackdriver Monitoring API
Cloud Datalab provides an environment for working with your data. This includes data that is being managed within the Stackdriver Monitoring API. This notebook introduces some of the APIs that Cloud Datalab provides for working with the monitoring data, and allows you to try them out on your own project.
The main focus of this API is to allow you to query time series data for your monitored resources. The time series and its metadata are returned as pandas DataFrame objects. pandas is a widely used library for data manipulation, and is well suited to working with time series data.
Note: This notebook will show you how to use this API with your own project. The charts included here are from a sample project that you will not have access to. For all cells to run without errors, the following must hold:
* The default project must be set
* This project must have at least one GCE Instance. You can create an instance at the following link: https://console.cloud.google.com/compute/instances
Importing the API and setting up the default project
The Monitoring functionality is contained within the datalab.stackdriver.monitoring module.
If the default project is not already set via the environment variable $PROJECT_ID, you must do so using 'set_datalab_project_id', or using the %datalab config magic.
End of explanation
%sd -h
Explanation: First, list supported options on the Stackdriver magic %sd:
End of explanation
%sd monitoring -h
Explanation: Let's see what we can do with the monitoring command:
End of explanation
%sd monitoring metrics list --type compute*/cpu/*
Explanation: List names of Compute Engine CPU metrics
Here we use IPython cell magics to list the CPU metrics. The Labels column shows that instance_name is a metric label.
End of explanation
%sd monitoring resource_types list --type gce*
Explanation: List monitored resource types related to GCE
End of explanation
from google.datalab.stackdriver import monitoring as gcm
help(gcm.Query.select_interval)
Explanation: Querying time series data
The Query class allows users to query and access the monitoring time series data.
Many useful methods of the Query class are actually defined by the base class, which is provided by the google-cloud-python library. These methods include:
* select_metrics: filters the query based on metric labels.
* select_resources: filters the query based on resource type and labels.
* align: aligns the query along the specified time intervals.
* reduce: applies aggregation to the query.
* as_dataframe: returns the time series data as a pandas DataFrame object.
Reference documentation for the Query base class is available here. You can also get help from inside the notebook by calling the help function on any class, object or method.
End of explanation
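The cells below use these methods one at a time; since each call returns a Query object, they can also be chained. A hypothetical sketch (the 'gke-' instance-name prefix is a placeholder):
# Hypothetical chained query; the prefix value is a placeholder
chained = (gcm.Query('compute.googleapis.com/instance/cpu/utilization', hours=2)
           .select_metrics(instance_name_prefix='gke-')
           .align(gcm.Aligner.ALIGN_MEAN, minutes=5))
chained_df = chained.as_dataframe(label='instance_name')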
query_cpu = gcm.Query('compute.googleapis.com/instance/cpu/utilization', hours=2)
Explanation: Initializing the query
During initialization, the metric type and the time interval need to be specified. For interactive use, the metric type has a default value. The simplest way to specify a time interval that ends now is to use the arguments days, hours, and minutes.
In the cell below, we initialize the query to load the time series for CPU Utilization for the last two hours.
End of explanation
metadata_cpu = query_cpu.metadata().as_dataframe()
metadata_cpu.head(5)
Explanation: Getting the metadata
The method metadata() returns a QueryMetadata object. It contains the following information about the time series matching the query:
* resource types
* resource labels and their values
* metric labels and their values
This helps you understand the structure of the time series data, and makes it easier to modify the query.
End of explanation
import sys
if metadata_cpu.empty:
sys.stderr.write('This project has no GCE instances. The remaining notebook '
'will raise errors!')
else:
instance_names = sorted(list(metadata_cpu['metric.labels']['instance_name']))
print('First 5 instance names: %s' % ([str(name) for name in instance_names[:5]],))
Explanation: Reading the instance names from the metadata
Next, we read in the instance names from the metadata, and use it in filtering the time series data below. If there are no GCE instances in this project, the cells below will raise errors.
End of explanation
query_cpu_single_instance = query_cpu.select_metrics(instance_name=instance_names[0])
# Get the query results as a pandas DataFrame and look at the last 5 rows.
data_single_instance = query_cpu_single_instance.as_dataframe(label='instance_name')
data_single_instance.tail(5)
Explanation: Filtering by metric label
We first filter query_cpu defined earlier to include only the first instance. Next, calling as_dataframe gets the results from the monitoring API, and converts them into a pandas DataFrame.
End of explanation
# N.B. A useful trick is to assign the return value of plot to _
# so that you don't get text printed before the plot itself.
_ = data_single_instance.plot()
Explanation: Displaying the time series as a linechart
We can plot the time series data by calling the plot method of the dataframe. The pandas library uses matplotlib for plotting, so you can learn more about it here.
End of explanation
# Filter the query by a common instance name prefix.
common_prefix = instance_names[0].split('-')[0]
query_cpu_aligned = query_cpu.select_metrics(instance_name_prefix=common_prefix)
# Align the query to have data every 5 minutes.
query_cpu_aligned = query_cpu_aligned.align(gcm.Aligner.ALIGN_MEAN, minutes=5)
data_multiple_instances = query_cpu_aligned.as_dataframe(label='instance_name')
# Display the data as a linechart, and move the legend to the right of it.
_ = data_multiple_instances.plot().legend(loc="upper left", bbox_to_anchor=(1,1))
Explanation: Aggregating the query
You can aggregate or summarize time series data along various dimensions.
* In the first stage, data in a time series is aligned to a specified period.
* In the second stage, data from multiple time series is combined, or reduced, into one time series.
Not all alignment and reduction options are applicable to all time series, depending on their metric type and value type. Alignment and reduction may change the metric type or value type of a time series.
Aligning the query
For multiple time series, aligning the data is recommended. Aligned data is more compact to read from the Monitoring API, and lends itself better to visualizations.
The alignment period can be specified using the arguments hours, minutes, and seconds. In the cell below, we do the following:
* select a subset of the instances by using a prefix of the first instance name
* align the time series to 5 minute intervals using an 'ALIGN_MEAN' method.
* plot the time series, and adjust the legend to be outside the plot. You can learn more about legend placement here.
End of explanation
query_cpu_reduced = query_cpu_aligned.reduce(gcm.Reducer.REDUCE_MEAN, 'resource.zone')
data_per_zone = query_cpu_reduced.as_dataframe('zone')
data_per_zone.tail(5)
Explanation: Reducing the query
In order to combine the data across multiple time series, the reduce() method can be used. The fields to be retained after aggregation must be specified in the method.
For example, to aggregate the results by the zone, 'resource.zone' can be specified.
End of explanation
import matplotlib
import seaborn
# Set the size of the heatmap to have a better aspect ratio.
div_ratio = 1 if len(data_multiple_instances.columns) == 1 else 2.0
width, height = (size/div_ratio for size in data_multiple_instances.shape)
matplotlib.pyplot.figure(figsize=(width, height))
# Display the data as a heatmap. The timestamps are converted to strings
# for better readbility.
_ = seaborn.heatmap(data_multiple_instances.T,
xticklabels=data_multiple_instances.index.map(str),
cmap='YlGnBu')
Explanation: Displaying the time series as a heatmap
Let us look at the time series at the instance level as a heatmap. A heatmap is a compact representation of the data, and can often highlight patterns.
The diagram below shows the instances along rows, and the timestamps along columns.
End of explanation
data_multi_level = query_cpu_aligned.as_dataframe()
data_multi_level.tail(5)
Explanation: Multi-level headers
If you don't provide any labels to as_dataframe, it returns all the resource and metric labels present in the time series as a multi-level header.
This allows you to filter, and aggregate the data more easily.
End of explanation
print('Finding pattern "%s" in the dataframe headers' % (common_prefix,))
data_multi_level.filter(regex=common_prefix).tail(5)
Explanation: Filter the dataframe
Let us filter the multi-level dataframe based on the common prefix. Applying the filter will look across all column headers.
End of explanation
data_multi_level.groupby(level='zone', axis=1).mean().tail(5)
Explanation: Aggregate columns in the dataframe
Here, we aggregate the multi-level dataframe at the zone level. This is similar to applying reduction using 'REDUCE_MEAN' on the field 'resource.zone'.
End of explanation |
44 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Monte Carlo Integration
Inspired from the following posts
Step1: What is Monte Carlo (MC) Integration?
Let us say that we want to approximate the area between the curve defined by $f(x) = x^2 + 3x + \ln{x}$ for $x\in [1,5]$ and the x-axis.
Step2: Concretely, we are interested in knowing the area of the red-shaded region in the above figure. Furthermore, I have also provided a rectangular bounding box for the range of values of $x$ and $y$. The true value of the area under the curve is $\sim{81.381}$ using its analytic integral formula (see http
Step3: As we can observe, the number of points which fall inside the region of interest is proportional to the area of the region. The area, however, is only marginally close to the true area of $81.38$. Let us also try with a higher value of $N=10^7$
Step4: The above figure shows that for $N=10^7$, the region covered by the sampled points is almost as smooth as the shaded region. Furthermore, the area is closer to the true value of $81.38$.
Now, let us also analyze, how the value of the calculated area changes with the order of number of sampled points.
Step5: Clearly, as the number of points increases, the area becomes closer to the true value.
Let us further examine this change by starting with $10^3$ points and then going all the way till $10^6$ points. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from numba import jit # Use it for speed
from scipy import stats
Explanation: Introduction to Monte Carlo Integration
Inspired from the following posts:
http://nbviewer.jupyter.org/github/cs109/content/blob/master/labs/lab7/GibbsSampler.ipynb
http://twiecki.github.io/blog/2015/11/10/mcmc-sampling/
https://en.wikipedia.org/wiki/Monte_Carlo_integration
End of explanation
def f(x):
return x**2 + 3*x + np.log(x)
step= 0.001
x = np.arange(1,5+step*0.1,step)
y = f(x)
print x.min(), x.max()
print y.min(), y.max()
plt.plot(x, y, lw=2., color="r")
plt.fill_between(x, 0, y, color="r", alpha=0.5)
plt.axhline(y=0, lw=1., color="k", linestyle="--")
plt.axhline(y=y.max(), lw=1., color="k", linestyle="--")
plt.axvline(x=x.min(), lw=1., color="k", linestyle="--")
plt.axvline(x=x.max(), lw=1., color="k", linestyle="--")
plt.xlabel("x")
plt.ylabel("y")
plt.title("$f(x) = x^2 + 3x + \ln{x}, x\in[1,5]$")
Explanation: What is Monte Carlo (MC) Integration?
Let us say that we want to approximate the area between the curve defined by $f(x) = x^2 + 3x + \ln{x}$ for $x\in [1,5]$ and the x-axis.
End of explanation
@jit
def get_MC_area(x, y, f, N=10**5, plot=False):
x_rands = x.min() + np.random.rand(N) * (x.max() - x.min())
y_rands = np.random.rand(N) * y.max()
y_true = f(x_rands)
integral_idx = (y_rands <= y_true)
if plot:
plt.plot(x_rands[integral_idx], y_rands[integral_idx],
alpha=0.3, color="r", linestyle='none',
marker='.', markersize=0.5)
plt.plot(x_rands[~integral_idx], y_rands[~integral_idx],
alpha=0.3, color="0.5", linestyle='none',
marker='.', markersize=0.5)
plt.axhline(y=0, lw=1., color="k", linestyle="--")
plt.axhline(y=y.max(), lw=1., color="k", linestyle="--")
plt.axvline(x=x.min(), lw=1., color="k", linestyle="--")
plt.axvline(x=x.max(), lw=1., color="k", linestyle="--")
plt.xlabel("x")
plt.ylabel("y")
plt.title("$f(x) = x^2 + 3x + \ln{x}, x\in[1,5]; N=%s$" % N)
print "Proportion points in space: %.3f" % (integral_idx).mean()
area = (integral_idx).mean() * (
(x_rands.max() - x_rands.min()) * (y_rands.max() - y_rands.min())
)
return area
area = get_MC_area(x, y, f, N=10**5, plot=True)
print "Area is: %.3f" % area
Explanation: Concretely, we are interested in knowing the area of the red-shaded region in the above figure. Furthermore, I have also provided a rectangular bounding box for the range of values of $x$ and $y$. The true value of the area under the curve is $\sim{81.381}$ using its analytic integral formula (see http://www.wolframalpha.com/input/?i=integrate+x%5E2+%2B+3x+%2B+ln(x),+x+in+%5B1,5%5D).
The most accurate way to get the value of the area is to find the value of the definite integral $\int_{1}^{5} f(x) dx$. However, in many cases analytically finding this integral is very tough, especially if the function is not easily integrable. This is where numerical methods for approximating the integral come handy. Monte Carlo (MC) techniques are one of the most popular form of numerical solution used for definite integral calculation.
A basic intuition of the Monte Carlo Integration is as follows:
* Define the input domain $[a, b]$ of the integral $\int_{a}^{b} f(x) dx$.
* Uniformly, sample $N$ points from rectangular region between $[a, b)$ and $[\min(f(x)), \max(f(x)))$
* Find the proportion of points that lie in the region included in the area of $f(x)$, call it $p$
* Multiply the area of the rectangular region ($A$) by $p$ to get the area under the curve, $A^* = pA$
* As $N \to \infty$, the area of the shaded region $A^* \to \int_{a}^{b} f(x) dx$
* Usually, a much smaller value of $N$ will give approximate value within a reasonable error span.
Below, we will try to approximate the area of the curve using the MC integration method described above. We will use $N = 10^5$, and plot the points which fall in the region of the area in red and the other points in grey.
End of explanation
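Before comparing against the quoted value of ~81.381, it can be verified numerically from the f defined above (a quick check using scipy.integrate.quad):
from scipy import integrate
true_area, quad_err = integrate.quad(f, 1, 5)
print(true_area)  # should be close to 81.381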
area = get_MC_area(x, y, f, N=10**7, plot=True)
print "Area is: %.3f" % area
Explanation: As we can observe, the number of points which fall inside the region of interest is proportional to the area of the region. The area, however, is only marginally close to the true area of $81.38$. Let us also try with a higher value of $N=10^7$:
End of explanation
for i in xrange(2,8):
area = get_MC_area(x, y, f, N=10**i, plot=False)
print i, area
Explanation: The above figure shows that for $N=10^7$, the region covered by the sampled points is almost as smooth as the shaded region. Furthermore, the area is closer to the true value of $81.38$.
Now, let us also analyze, how the value of the calculated area changes with the order of number of sampled points.
End of explanation
%%time
N_vals = 1000 + np.arange(1000)*1000
areas = np.zeros_like(N_vals, dtype="float")
for i, N in enumerate(N_vals):
area = get_MC_area(x, y, f, N=N, plot=False)
areas[i] = area
print "Mean area of last 100 points: %.3f" % np.mean(areas[-100:])
print "Areas of last 10 points: ", areas[-10:]
plt.plot(N_vals, areas, color="0.1", alpha=0.7)
plt.axhline(y=np.mean(areas[100:]), linestyle="--", lw=1., color="k")
plt.ylabel("Area")
plt.xlabel("Number of samples")
#plt.xscale("log")
Explanation: Clearly, as the number of points increases, the area becomes closer to the true value.
Let us further examine this change by starting with $10^3$ points and then going all the way till $10^6$ points.
End of explanation |
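One rough way to examine the convergence is to plot the absolute error against N on log-log axes and compare it with the ~1/sqrt(N) trend expected of Monte Carlo estimates (a sketch assuming the N_vals and areas arrays from the cell above and the analytic value of about 81.381; a single run only follows the trend approximately):
true_area = 81.381
abs_err = np.abs(areas - true_area)
plt.loglog(N_vals, abs_err, color="0.3", alpha=0.7)
plt.loglog(N_vals, abs_err[0] * np.sqrt(N_vals[0] * 1.0 / N_vals), 'k--')  # ~ N^(-1/2) reference line
plt.xlabel("Number of samples")
plt.ylabel("Absolute error")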
45 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Universal Array Functions
Numpy comes with many universal array functions, which are essentially just mathematical operations you can use to perform the operation across the array. Let's show some common ones | Python Code:
import numpy as np
arr = np.arange(0,10)
arr + arr
arr * arr
arr - arr
# Warning on division by zero, but not an error!
# Just replaced with nan
arr/arr
# Also warning, but not an error instead infinity
1/arr
arr**3
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
NumPy Operations
Arithmetic
You can easily perform array with array arithmetic, or scalar with array arithmetic. Let's see some examples:
End of explanation
#Taking Square Roots
np.sqrt(arr)
#Calculating exponential (e^)
np.exp(arr)
np.max(arr) #same as arr.max()
np.sin(arr)
np.log(arr)
Explanation: Universal Array Functions
Numpy comes with many universal array functions, which are essentially just mathematical operations you can use to perform the operation across the array. Let's show some common ones:
End of explanation |
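A few more reductions and ufuncs follow the same pattern (a short illustrative addition using the arr defined above):
arr.sum()    # sum of all elements
arr.mean()   # average
arr.std()    # standard deviation
np.cos(arr)  # element-wise cosine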
46 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 Google LLC
Step1: scikit-learn HP Tuning on AI Platform
This notebook trains a model on Ai Platform using Hyperparameter Tuning to predict a car's Miles Per Gallon. It uses Auto MPG Data Set from UCI Machine Learning Repository.
Citation
Step2: The data
The Auto MPG Data Set that this sample
uses for training is provided by the UC Irvine Machine Learning
Repository. We have hosted the data on a public GCS bucket gs
Step3: Load the hyperparameter values that are passed to the model during training.
In this tutorial, the Lasso regressor is used, because it has several parameters that can be used to help demonstrate how to choose HP tuning values. (The range of values are set below in the configuration file for the HP tuning values.)
Step4: Add code to download the data from GCS
In this case, using the publicly hosted data,AI Platform will then be able to use the data when training your model.
Step5: Use the Hyperparameters
Use the Hyperparameter values passed in those arguments to set the corresponding hyperparameters in your application's scikit-learn code.
Step6: Report the mean accuracy as hyperparameter tuning objective metric.
Step7: Export and save the model to GCS
Step8: Part 2
Step9: Next, we need to set the hp tuning values used to train our model. Check HyperparameterSpec for more info.
In this config file several key things are set
Step10: Lastly, we need to install the dependencies used in our model. Check adding_standard_pypi_dependencies for more info.
To do this, AI Platform uses a setup.py file to install your dependencies.
Step11: Part 3
Step12: Submit the training job.
Step13: [Optional] StackDriver Logging
You can view the logs for your training job | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 Google LLC
End of explanation
%env PROJECT_ID PROJECT_ID
%env BUCKET_ID BUCKET_ID
%env JOB_DIR gs://BUCKET_ID/scikit_learn_job_dir
%env REGION us-central1
%env TRAINER_PACKAGE_PATH ./auto_mpg_hp_tuning
%env MAIN_TRAINER_MODULE auto_mpg_hp_tuning.train
%env RUNTIME_VERSION 1.9
%env PYTHON_VERSION 3.5
%env HPTUNING_CONFIG hptuning_config.yaml
! mkdir auto_mpg_hp_tuning
Explanation: scikit-learn HP Tuning on AI Platform
This notebook trains a model on AI Platform using Hyperparameter Tuning to predict a car's Miles Per Gallon. It uses the Auto MPG Data Set from the UCI Machine Learning Repository.
Citation: Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
How to train your model on AI Platform with HP tuning.
Using HP Tuning for training can be done in a few steps:
1. Create your python model file
1. Add argument parsing for the hyperparameter values. (These values are chosen for you in this notebook)
1. Add code to download your data from Google Cloud Storage so that AI Platform can use it
1. Add code to track the performance of your hyperparameter values.
1. Add code to export and save the model to Google Cloud Storage once AI Platform finishes training the model
1. Prepare a package
1. Submit the training job
Prerequisites
Before you jump in, let’s cover some of the different tools you’ll be using to get HP tuning up and running on AI Platform.
Google Cloud Platform lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure.
AI Platform is a managed service that enables you to easily build machine learning models that work on any type of data, of any size.
Google Cloud Storage (GCS) is a unified object storage for developers and enterprises, from live data serving to data analytics/ML to data archiving.
Cloud SDK is a command line tool which allows you to interact with Google Cloud products. In order to run this notebook, make sure that Cloud SDK is installed in the same environment as your Jupyter kernel.
Overview of Hyperparameter Tuning - Hyperparameter tuning takes advantage of the processing infrastructure of Google Cloud Platform to test different hyperparameter configurations when training your model.
Part 0: Setup
Create a project on GCP
Create a Google Cloud Storage Bucket
Enable AI Platform Training and Prediction and Compute Engine APIs
Install Cloud SDK
Install scikit-learn [Optional: used if running locally]
Install pandas [Optional: used if running locally]
Install cloudml-hypertune [Optional: used if running locally]
These variables will be needed for the following steps.
* TRAINER_PACKAGE_PATH <./auto_mpg_hp_tuning> - A packaged training application that will be staged in a Google Cloud Storage location. The model file created below is placed inside this package path.
* MAIN_TRAINER_MODULE <auto_mpg_hp_tuning.train> - Tells AI Platform which file to execute. This is formatted as follows <folder_name.python_file_name>
* JOB_DIR <gs://$BUCKET_ID/scikit_learn_job_dir> - The path to a Google Cloud Storage location to use for job output.
* RUNTIME_VERSION <1.9> - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information.
* PYTHON_VERSION <3.5> - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7.
* HPTUNING_CONFIG <hptuning_config.yaml> - Path to the job configuration file.
Replace:
* PROJECT_ID <YOUR_PROJECT_ID> - with your project's id. Use the PROJECT_ID that matches your Google Cloud Platform project.
* BUCKET_ID <YOUR_BUCKET_ID> - with the bucket id you created above.
* JOB_DIR <gs://YOUR_BUCKET_ID/scikit_learn_job_dir> - with the bucket id you created above.
* REGION <REGION> - select a region from here or use the default 'us-central1'. The region is where the model will be deployed.
End of explanation
%%writefile ./auto_mpg_hp_tuning/train.py
#!/usr/bin/env python
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import datetime
import os
import pandas as pd
import subprocess
from google.cloud import storage
import hypertune
from sklearn.externals import joblib
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split
Explanation: The data
The Auto MPG Data Set that this sample
uses for training is provided by the UC Irvine Machine Learning
Repository. We have hosted the data on a public GCS bucket gs://cloud-samples-data/ml-engine/auto_mpg/. The data has been pre-processed to remove rows with incomplete data so as not to create additional steps for this notebook.
Training file is auto-mpg.data
Note: Your typical development process with your own data would require you to upload your data to GCS so that AI Platform can access that data. However, in this case, we have put the data on GCS to avoid the steps of having you download the data from UC Irvine and then upload the data to GCS.
Citation: Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
Disclaimer
This dataset is provided by a third party. Google provides no representation,
warranty, or other guarantees about the validity or any other aspects of this dataset.
Part 1: Create your python model file
First, we'll create the python model file (provided below) that we'll upload to AI Platform. This is similar to your normal process for creating a scikit-learn model. However, there are a few key differences:
1. Downloading the data from GCS at the start of your file, so that AI Platform can access the data.
1. Exporting/saving the model to GCS at the end of your file, so that you can use it for predictions.
1. Define a command-line argument in your main training module for each tuned hyperparameter.
1. Use the value passed in those arguments to set the corresponding hyperparameter in your application's scikit-learn code.
1. Use cloudml-hypertune to track your training jobs metrics.
The code in this file first handles the hyperparameters passed to the file from AI Platform. Then it loads the data into a pandas DataFrame that can be used by scikit-learn. Then the model is fit against the training data and the metrics for that data are shared with AI Platform. Lastly, sklearn's built in version of joblib is used to save the model to a file that can be uploaded to AI Platform's prediction service.
Note: In normal practice you would want to test your model locally on a small dataset to ensure that it works, before using it with your larger dataset on AI Platform. This avoids wasted time and costs.
Setup the imports
End of explanation
%%writefile -a ./auto_mpg_hp_tuning/train.py
parser = argparse.ArgumentParser()
parser.add_argument(
'--job-dir', # handled automatically by AI Platform
help='GCS location to write checkpoints and export models',
required=True
)
parser.add_argument(
'--alpha', # Specified in the config file
help='Constant that multiplies the L1 term.',
default=1.0,
type=float
)
parser.add_argument(
'--max_iter', # Specified in the config file
help='The maximum number of iterations.',
default=1000,
type=int
)
parser.add_argument(
'--tol', # Specified in the config file
help='The tolerance for the optimization: if the updates are smaller than tol, '
'the optimization code checks the dual gap for optimality and continues '
'until it is smaller than tol.',
default=0.0001,
type=float
)
parser.add_argument(
'--selection', # Specified in the config file
help='Supported criteria are “cyclic” loop over features sequentially and '
'“random” a random coefficient is updated every iteration ',
default='cyclic'
)
args = parser.parse_args()
Explanation: Load the hyperparameter values that are passed to the model during training.
In this tutorial, the Lasso regressor is used, because it has several parameters that can be used to help demonstrate how to choose HP tuning values. (The range of values are set below in the configuration file for the HP tuning values.)
End of explanation
%%writefile -a ./auto_mpg_hp_tuning/train.py
# Public bucket holding the auto mpg data
bucket = storage.Client().bucket('cloud-samples-data')
# Path to the data inside the public bucket
blob = bucket.blob('ml-engine/auto_mpg/auto-mpg.data')
# Download the data
blob.download_to_filename('auto-mpg.data')
# ---------------------------------------
# This is where your model code would go. Below is an example model using the auto mpg dataset.
# ---------------------------------------
# Define the format of your input data including unused columns
# (These are the columns from the auto-mpg data files)
COLUMNS = (
'mpg',
'cylinders',
'displacement',
'horsepower',
'weight',
'acceleration',
'model-year',
'origin',
'car-name'
)
# Load the training auto mpg dataset
with open('./auto-mpg.data', 'r') as train_data:
raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS, delim_whitespace=True)
# Remove the column we are trying to predict ('mpg') from our features list
# Convert the Dataframe to a lists of lists
features = raw_training_data.drop('mpg', axis=1).drop('car-name', axis=1).values.tolist()
# Create our training labels list, convert the Dataframe to a lists of lists
labels = raw_training_data['mpg'].values.tolist()
train_features, test_features, train_labels, test_labels = train_test_split(features, labels, test_size=0.15)
Explanation: Add code to download the data from GCS
In this case, using the publicly hosted data,AI Platform will then be able to use the data when training your model.
End of explanation
%%writefile -a ./auto_mpg_hp_tuning/train.py
# Create the regressor, here we will use a Lasso Regressor to demonstrate the use of HP Tuning.
# Here is where we set the variables used during HP Tuning from
# the parameters passed into the python script
regressor = Lasso(
alpha=args.alpha,
max_iter=args.max_iter,
tol=args.tol,
selection=args.selection)
# Transform the features and fit them to the regressor
regressor.fit(train_features, train_labels)
Explanation: Use the Hyperparameters
Use the Hyperparameter values passed in those arguments to set the corresponding hyperparameters in your application's scikit-learn code.
End of explanation
%%writefile -a ./auto_mpg_hp_tuning/train.py
# Calculate the mean accuracy on the given test data and labels.
score = regressor.score(test_features, test_labels)
# The default name of the metric is training/hptuning/metric.
# We recommend that you assign a custom name. The only functional difference is that
# if you use a custom name, you must set the hyperparameterMetricTag value in the
# HyperparameterSpec object in your job request to match your chosen name.
# https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#HyperparameterSpec
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='my_metric_tag',
metric_value=score,
global_step=1000)
Explanation: Report the mean accuracy as hyperparameter tuning objective metric.
End of explanation
%%writefile -a ./auto_mpg_hp_tuning/train.py
# Export the model to a file
model_filename = 'model.joblib'
joblib.dump(regressor, model_filename)
# Example: job_dir = 'gs://BUCKET_ID/scikit_learn_job_dir/1'
job_dir = args.job_dir.replace('gs://', '') # Remove the 'gs://'
# Get the Bucket Id
bucket_id = job_dir.split('/')[0]
# Get the path
bucket_path = job_dir.lstrip('{}/'.format(bucket_id)) # Example: 'scikit_learn_job_dir/1'
# Upload the model to GCS
bucket = storage.Client().bucket(bucket_id)
blob = bucket.blob('{}/{}'.format(
bucket_path,
model_filename))
blob.upload_from_filename(model_filename)
Explanation: Export and save the model to GCS
End of explanation
%%writefile ./auto_mpg_hp_tuning/__init__.py
#!/usr/bin/env python
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Note that __init__.py can be an empty file.
Explanation: Part 2: Create Trainer Package with Hyperparameter Tuning
Next we need to build the Trainer Package, which holds all your code and dependencies need to train your model on AI Platform.
First, we create an empty __init__.py file.
End of explanation
%%writefile ./hptuning_config.yaml
#!/usr/bin/env python
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# hyperparam.yaml
trainingInput:
hyperparameters:
goal: MAXIMIZE
maxTrials: 30
maxParallelTrials: 5
hyperparameterMetricTag: my_metric_tag
enableTrialEarlyStopping: TRUE
params:
- parameterName: alpha
type: DOUBLE
minValue: 0.0
maxValue: 10.0
scaleType: UNIT_LINEAR_SCALE
- parameterName: max_iter
type: INTEGER
minValue: 1000
maxValue: 5000
scaleType: UNIT_LINEAR_SCALE
- parameterName: tol
type: DOUBLE
minValue: 0.0001
maxValue: 0.1
scaleType: UNIT_LINEAR_SCALE
- parameterName: selection
type: CATEGORICAL
categoricalValues: [
"cyclic",
"random"
]
Explanation: Next, we need to set the hp tuning values used to train our model. Check HyperparameterSpec for more info.
In this config file several key things are set:
* maxTrials - How many training trials should be attempted to optimize the specified hyperparameters.
* maxParallelTrials: 5 - The number of training trials to run concurrently.
* params - The set of parameters to tune.. These are the different parameters to pass into your model and the specified ranges you wish to try.
* parameterName - The parameter name must be unique amongst all ParameterConfigs
* type - The type of the parameter. [INTEGER, DOUBLE, ...]
* minValue & maxValue - The range of values that this parameter could be.
* scaleType - How the parameter should be scaled to the hypercube. Leave unset for categorical parameters. Some kind of scaling is strongly recommended for real or integral parameters (e.g., UNIT_LINEAR_SCALE).
End of explanation
%%writefile ./setup.py
#!/usr/bin/env python
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from setuptools import find_packages
from setuptools import setup
REQUIRED_PACKAGES = ['cloudml-hypertune']
setup(
name='auto_mpg_hp_tuning',
version='0.1',
install_requires=REQUIRED_PACKAGES,
packages=find_packages(),
include_package_data=True,
description='Auto MPG sklearn HP tuning training application'
)
Explanation: Lastly, we need to install the dependencies used in our model. Check adding_standard_pypi_dependencies for more info.
To do this, AI Platform uses a setup.py file to install your dependencies.
End of explanation
! gcloud config set project $PROJECT_ID
Explanation: Part 3: Submit Training Job
Next we need to submit the job for training on AI Platform. We'll use gcloud to submit the job which has the following flags:
job-name - A name to use for the job (mixed-case letters, numbers, and underscores only, starting with a letter). In this case: auto_mpg_hp_tuning_$(date +"%Y%m%d_%H%M%S")
job-dir - The path to a Google Cloud Storage location to use for job output.
package-path - A packaged training application that is staged in a Google Cloud Storage location. If you are using the gcloud command-line tool, this step is largely automated.
module-name - The name of the main module in your trainer package. The main module is the Python file you call to start the application. If you use the gcloud command to submit your job, specify the main module name in the --module-name argument. Refer to Python Packages to figure out the module name.
region - The Google Cloud Compute region where you want your job to run. You should run your training job in the same region as the Cloud Storage bucket that stores your training data. Select a region from here or use the default 'us-central1'.
runtime-version - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information.
python-version - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7.
scale-tier - A scale tier specifying the type of processing cluster to run your job on. This can be the CUSTOM scale tier, in which case you also explicitly specify the number and type of machines to use.
config - Path to the job configuration file. This file should be a YAML document (JSON also accepted) containing a Job resource as defined in the API
Note: Check to make sure gcloud is set to the current PROJECT_ID
End of explanation
! gcloud ml-engine jobs submit training auto_mpg_hp_tuning_$(date +"%Y%m%d_%H%M%S") \
--job-dir $JOB_DIR \
--package-path $TRAINER_PACKAGE_PATH \
--module-name $MAIN_TRAINER_MODULE \
--region $REGION \
--runtime-version=$RUNTIME_VERSION \
--python-version=$PYTHON_VERSION \
--scale-tier BASIC \
--config $HPTUNING_CONFIG
Explanation: Submit the training job.
End of explanation
! gsutil ls $JOB_DIR/*
Explanation: [Optional] StackDriver Logging
You can view the logs for your training job:
1. Go to https://console.cloud.google.com/
1. Select "Logging" in left-hand pane
1. In left-hand pane, go to "AI Platform" and select Jobs
1. In filter by prefix, use the value of $JOB_NAME to view the logs
On the logging page of your model, you can view the different results for each HP tuning job.
Example:
{
"trialId": "2",
"hyperparameters": {
"selection": "random",
"max_iter": "1892",
"tol": "0.0609819896050862",
"alpha": "4.3704164028167725"
},
"finalMetric": {
"trainingStep": "1000",
"objectiveValue": 0.8658283435394591
}
}
[Optional] Verify Model File in GCS
View the contents of the destination model folder to verify that all 30 model files have indeed been uploaded to GCS.
Note: The model can take a few minutes to train and show up in GCS.
End of explanation |
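To sanity-check one of the exported models locally, it could be copied down and loaded with joblib (a hypothetical snippet; the trial subdirectory '1' is a placeholder for whichever trial you want to inspect):
! gsutil cp $JOB_DIR/1/model.joblib ./model.joblib
from sklearn.externals import joblib
local_model = joblib.load('model.joblib')
print(local_model.get_params())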
47 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: [Py-OO] Aula 02
Herança
O que você vai aprender nesta aula?
Após o término da aula você terá aprendido
Step2: Vamos brincar um pouco com o cão
Step5: Agora vamos criar a classe GoldenRetriever como subclasse de cão. Os cachorros da raça golden retriever são conhecidos por serem muito inteligente e amigáveis. Seu nome retriever (pegador) por sua habilidade de pegar os alvos em jogos de caça sem danificá-los. Por esse motivo vamos implementar um método que permita os cães dessa raça pegar e devolver itens.
Step6: Linha 1
Step7: E temos acesso aos métodos e atributos de GoldenRetriever
Step10: Python não tem suporte "nativo" a sobrecarga de métodos de mesmo nome. Isso se dá pois a linguagem possui outros métodos de emular essa funcionalidade, como
Step11: Vamos testar a nova funcionalidade
Step13: Devemos notar que usamos *args e **kwargs na definição da função (linha 2) para repassar a instanciação dos argumentos à classe Cão, dessa maneira se modificarmos esta classe e adicionarmos outros parâmetros, então GoldenRetriever também aceitará esses parâmetros, como podemos ver no seguinte exemplo que redefinimos a classe Cão
Step16: Precisamos recarregar a classe GoldenRetriever para ela herdar da nova superclasse
Step17: Vamos criar mais uma subclasse de Cão. Dessa vez iremos criar a classe Pinscher, essa raça de cachorro tem a quantidade de raiva inversamente proporcional ao seu tamanho, ou seja, são muito nervosos! E também latem mais cães de outras raças
Step18: Para finalizar vamos criar a classe SãoBernardo que representa cães dessa mesma raça. Esta ficou famosa por diversos filmes de humor que passou dezenas de vezes em certo canal da TV aberta brasileira. Há lendas que dizem que, em países que nevam, o cão São Bernardo leva um pequeno barril de conhaque para resgatar viajantes perdidos na neve
Step19: O Python possui duas funções para verificar instâncias
Step20: Para verificar se o objeto é exatamente o tipo desejado é necessário usar a função embutida type()
Step21: Por fim também existe a função issubclass(class, classinfo) que verifica se uma classe é derivada de outra
Step22: Spoiler
Step23: Porém o tipo float não.
UML - Unified Modeling Language
O UML é um padrão de modelagem de software para diversos campos do desenvolvimento. O padrão possui diversos diagramas para expressar lógica de negócio e estrutura de programas. Para este curso veremos apenas Diagramas de Classe.
Vamos ver como fica o nosso código de cães em diagramas de classe
Step24: E herdam de Exception
Step25: Podemos criar nossas próprias exceções herdando de Exception
Step26: Geralmente não precisamos sobreescreve métodos da superclasse Exception, pois muitas vezes criamos exceções para deixar o programa mais claro. Caso você queira criar um erro com novas funcionalidades consulte a documentação de Exception e a parte tutorial do python que fala sobre exceções.
Lidando com exceções
É possível escrever programas que lidem com exceções
Step27: Também é possível multiplicar múltiplas exceções a serem tratadas
Step28: Porém lidar com exceções dessa maneira não é recomendado, pois isso pode omitir o nome da exceção e mascarar erros reais! Um jeito melhor de trabalhar com exceções é tratar cada uma de uma maneira
Step29: É possível especificar uma variável para ser atribuída a uma exceção levantada e melhorar o tratamento do erro
Step30: Para finalizar segue a lista de exceções padrão. A explicação de cada exceção está na documentação
Step31: Como vemos, o método chamado pela instância da classe D foi B.pong(), p|odemos chamar explicitamente C.pong() passando a instância como argumento explícito
Step32: Essa maneira de chamar métodos só é utilizando para resolver ambiguidade causada pela herança múltipla. O jeito recomendado de delegar chamadas de métodos a superclasses é usar a função embutida super(), pois é mais seguro e resistente a mudanças futuras, principalmente quando chamamos métodos em um framework ou em qualquer hierarquia de classes sobre a qual você não tenha controle.
Vamos ver como a função D.ping() se comporta
Step33: d.ping() chamou a função A.ping() (que imprimiu ping) e imprimiu post-ping
Agora vamos analisar as chamadas do método pingpong
Step34: Primeiro foi chamado D.ping() que chama A.ping() e imprime post-ping
Depois foi chamado super().ping() que chama somente A.ping()
Foi chamado super().pong() que chamou B.pong()
E, por fim, C.pong(self) foi chamado diretamente, passando a instância de D.
Agora que entendemos como esse exemplo do problema do diagrama é rodado, podemos falar sobre como o Python resolve essa ambiguidade
Step35: Aqui comprovamos o que foi dito anteriormente
Step36: Lidando com herança múltipla
Como dito anteriormente a herança múltipla pode tornar o código de um software muito complicado e frágil. A seguir consta algumas dicas para deixar grafos de classes mais simples
Step37: Agora verificaremos o RedirectView.__mro__
Step38: DetailView
A DetailView cria uma página para exibição da informação de um único modelo. Por exemplo podemos utilizá-la em um e-commerce para facilitar a criação de uma página que exibe as informações de apenas um produto.
Step39: ListView
A ListView - favorita dos usuários de CBV - facilita a criação de views que permitem a listagem de modelos. Como por exemplo a listagem de todos os produtos do site (ou de apenas os de algum tipo, categoria etc.).
Step40: DeleteView
Por fim temos a DeleteView que permite a remoção de um modelo. | Python Code:
class Cão:
qtd_patas = 4
carnívoro = True
nervoso = False
def __init__(self, nome):
self.nome = nome
def latir(self, vezes=1):
Latir do cão. Quanto mais nervoso mais late.
vezes += self.nervoso * vezes
latido = 'Au! ' * vezes
print('{}: {}'.format(self.nome, latido))
Explanation: [Py-OO] Lesson 02
Inheritance
What will you learn in this lesson?
By the end of the lesson you will have learned:
Inheritance and multiple inheritance
Handling exceptions
This material draws on explanations and examples from Chapter 12 of Luciano Ramalho's book Fluent Python.
In this lesson we will talk about inheritance, one of the most important and most widely used concepts in object orientation. Inheritance brings several advantages, such as code reuse, which reduces software maintenance effort and makes it possible to build more complex applications.
"We began to insist on the idea of inheritance as a way to let novices build on frameworks that could only be designed by experts" (Alan Kay, The Early History of Smalltalk)
We will also talk about multiple inheritance. Many programmers come to Python from Java without ever having seen multiple inheritance in practice, so this topic will be illustrated with didactic examples from an important Python project: the Django web framework.
Inheritance
In the previous lesson we saw the following example of the Cão class. Today we will use inheritance to model different dog breeds:
End of explanation
rex = Cão('Rex')
rex.qtd_patas
rex.nome
rex.latir()
rex.latir(5)
rex.nervoso
rex.nervoso = True
rex.latir()
rex.latir(10)
Explanation: Let's play with the dog a little:
End of explanation
class GoldenRetriever(Cão):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.itens = []
def pega(self, item):
busca/pega um item quando ordenado
self.itens.append(item)
print('{} pegou {}'.format(self.nome, item))
def devolve(self, item=None):
devolve um item, caso o item não seja especificado retorna o último que pegou
if not self.itens:
print('{} não está segurando item algum!'.format(self.nome))
return
if not item:
item = self.itens.pop()
elif item not in self.itens:
print('{} não está segurando {}!'.format(self.nome, item))
return
else:
self.itens.remove(item)
print('{} devolve {}'.format(self.nome, item))
return item
Explanation: Now let's create the GoldenRetriever class as a subclass of Cão. Golden retrievers are known for being very intelligent and friendly; they are called retrievers because of their ability to retrieve game without damaging it. For that reason we will implement methods that allow dogs of this breed to fetch and return items.
End of explanation
nana = GoldenRetriever('Nana')
nana.nome
nana.nervoso
nana.carnívoro
nana.latir()
nana.latir(5)
Explanation: Line 1: we state that the GoldenRetriever class inherits from Cão.
Line 2: we override the superclass initializer.
Line 3: we forward the arguments to Cão's initializer.
Line 4: we create a new attribute, itens, on GoldenRetriever instances.
Because it inherits from Cão, the GoldenRetriever class receives all of its methods and attributes:
End of explanation
nana.itens
nana.pega('bola')
nana.itens
nana.devolve()
nana.devolve()
Explanation: E temos acesso aos métodos e atributos de GoldenRetriever:
End of explanation
class GoldenRetriever(Cão):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.itens = []
def pega(self, item):
busca/pega um item quando ordenado
self.itens.append(item)
print('{} pegou {}'.format(self.nome, item))
def devolve(self, item):
devolve um item, caso o item não seja especificado retorna o último que pegou
if not self.itens:
raise ValueError('{} não está segurando item algum!'.format(self.nome))
if item not in self.itens:
raise ValueError('{} não está segurando {}!'.format(self.nome, item))
self.itens.remove(item)
print('{} devolve {}'.format(self.nome, item))
return item
def devolve_ultimo(self):
if not self.itens:
raise ValueError('{} não está segurando item algum!'.format(self.nome))
return self.itens.pop()
Explanation: Python has no "native" support for overloading methods with the same name, because the language offers other ways to emulate this feature, such as default arguments and packing of positional and keyword arguments.
In the previous example, GoldenRetriever.devolve() could have been written as two methods: one that takes no item, GoldenRetriever.devolve(self), and another that takes one, GoldenRetriever.devolve(self, item). That was not necessary because we used a default argument.
However, if we look at devolve carefully we soon notice that it does two different things - just as a list has both list.pop() and list.remove() instead of a single list.remove() that handles every case. Following good programming practice, it is better to write two separate methods for this:
End of explanation
toto = GoldenRetriever('Totó')
toto.nome
toto.itens
toto.pega('chinelo')
toto.pega('bola')
toto.itens
toto.devolve_ultimo()
toto.devolve('meia')
toto.devolve('chinelo')
toto.devolve_ultimo()
Explanation: Let's test the new behavior:
End of explanation
class Cão:
qtd_patas = 4
carnívoro = True
nervoso = False
def __init__(self, nome, data_nascimento=None):
self.nome = nome
self.data_nascimento = data_nascimento
def latir(self, vezes=1):
Latir do cão. Quanto mais nervoso mais late.
vezes += self.nervoso * vezes
latido = 'Au! ' * vezes
print('{}: {}'.format(self.nome, latido))
Explanation: Note that we use *args and **kwargs in the method definition (line 2) to forward the constructor arguments to the Cão class. That way, if we modify that class and add new parameters, GoldenRetriever will accept them as well, as we can see in the following example, where we redefine the Cão class:
End of explanation
class GoldenRetriever(Cão):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.itens = []
def pega(self, item):
busca/pega um item quando ordenado
self.itens.append(item)
print('{} pegou {}'.format(self.nome, item))
def devolve(self, item):
devolve um item, caso o item não seja especificado retorna o último que pegou
if not self.itens:
raise ValueError('{} não está segurando item algum!'.format(self.nome))
if item not in self.itens:
raise ValueError('{} não está segurando {}!'.format(self.nome, item))
self.itens.remove(item)
print('{} devolve {}'.format(self.nome, item))
return item
def devolve_ultimo(self):
if not self.itens:
raise ValueError('{} não está segurando item algum!'.format(self.nome))
return self.itens.pop()
from datetime import date
totó = GoldenRetriever('Totó', date(2016, 4, 4))
totó.data_nascimento
print(totó.data_nascimento)
fido = GoldenRetriever('Fido')
print(fido.data_nascimento)
Explanation: We need to reload the GoldenRetriever class so that it inherits from the new superclass:
End of explanation
class Pinscher(Cão):
nervoso = True
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def latir(self, vezes=1):
vezes *= 2
super().latir(vezes)
mimi = Pinscher('Mimi')
mimi.nervoso
mimi.nome
mimi.latir()
mimi.latir(5)
Explanation: Let's create one more subclass of Cão. This time we create the Pinscher class; this breed's amount of anger is inversely proportional to its size - in other words, they are very nervous! They also bark more than dogs of other breeds:
End of explanation
class SãoBernardo(Cão):
def __init__(self, *args):
super().__init__(*args)
self.doses = 10
def servir(self):
if self.doses == 0:
raise ValueError("'Cabou a birita!")
self.doses -= 1
print('{} serve a birita (restam {} doses)'.format(self.nome, self.doses))
sansao = SãoBernardo('Sansão')
sansao.servir()
sansao.doses = 1
sansao.servir()
sansao.servir()
Explanation: Finally, let's create the SãoBernardo class, which represents Saint Bernard dogs. The breed became famous through several comedy movies shown dozens of times on a certain Brazilian broadcast TV channel. Legend has it that, in snowy countries, Saint Bernards carry a small barrel of brandy to rescue travelers lost in the snow:
End of explanation
isinstance(sansao, SãoBernardo)
isinstance(sansao, Cão)
isinstance(totó, SãoBernardo)
isinstance(totó, GoldenRetriever)
isinstance(totó, Cão)
Explanation: Python provides built-in functions for checking instances: isinstance(obj, cls) checks whether an object is an instance of the given class or of one of its superclasses:
End of explanation
type(sansao) is SãoBernardo
type(sansao) is Cão
type(totó) is Cão
Explanation: To check whether an object is exactly the desired type, we need to use the built-in type() function:
End of explanation
issubclass(type(sansao), Cão)
issubclass(bool, int)
Explanation: Finally, there is also issubclass(class, classinfo), which checks whether one class is derived from another:
End of explanation
issubclass(float, int)
Explanation: Spoiler: the bool type is derived from int. (We will see more about this in the lesson on the Python data model.)
End of explanation
ValueError.__name__
Explanation: The float type, however, is not.
UML - Unified Modeling Language
UML is a software modeling standard used in many areas of development. The standard defines several diagrams for expressing business logic and program structure. In this course we will only look at Class Diagrams.
Let's see what our dog code looks like as a class diagram:
A class is represented by a rectangle divided into three parts:
1. Class name
2. Attributes
3. Methods
The symbol before each method and attribute indicates its visibility. The symbols and their meanings are:
```
+ Public
# Protected
~ Package
- Private
```
The hollow arrows (there are 3 of them in the example) indicate generalization relationships and are used to show that one class inherits from another.
In this lesson we will use only this small part of the Class Diagram standard; if you want to learn more, there are simple tutorials available.
Exceptions
Before talking about multiple inheritance, let's first talk about exceptions. Exception is a class:
End of explanation
issubclass(ValueError, Exception)
Explanation: And they inherit from Exception:
End of explanation
class MeuErro(Exception):
pass
raise MeuErro('deu erro')
Explanation: We can create our own exceptions by inheriting from Exception:
End of explanation
a, b = 10, 0
a / b
try:
a / b
except ZeroDivisionError:
print('Olha a divisão por zero aí mano!')
Explanation: We usually do not need to override methods of the Exception superclass, since most of the time we create exceptions just to make the program clearer. If you want to create an error with extra functionality, check the Exception documentation and the part of the Python tutorial that covers exceptions.
Handling exceptions
It is possible to write programs that handle exceptions:
End of explanation
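The same pattern applies to exceptions raised by our own classes. A short usage sketch (the dog's name is made up) that handles the ValueError raised by GoldenRetriever.devolve() when the dog is not holding anything:
```python
bolt = GoldenRetriever('Bolt')
try:
    bolt.devolve('bola')            # Bolt is holding nothing, so devolve() raises ValueError
except ValueError:
    print('Bolt could not return the item')
```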
b = "0"
try:
a / b
except (ZeroDivisionError, TypeError):
print('Erro: tem que ver isso daí')
Explanation: It is also possible to list multiple exceptions to be handled by the same clause:
End of explanation
b = "0"
try:
a / b
except ZeroDivisionError:
print('Não é possível dividir por 0')
except TypeError:
print('Algum tipo está errado')
b = 0
try:
a / b
except ZeroDivisionError:
print('Não é possível dividir por 0')
except TypeError:
print('Algum tipo está errado')
Explanation: However, handling exceptions this way is not recommended, because it can hide the exception's name and mask real errors! A better way to work with exceptions is to handle each one separately:
End of explanation
a = {}
try:
print(a['chave'])
except KeyError as exc:
print(type(exc)) # imprime o tipo da exceção
print(exc.args) # mostra a tupla de argumentos que a exceção recebeu
print(exc) # imprime diretamente os argumento
Explanation: You can bind the raised exception to a variable and improve the error handling:
End of explanation
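Combining the two previous ideas - a custom exception class and except ... as - it is common to give an exception extra attributes so the handler can inspect structured data instead of parsing the message. A minimal sketch (the class and attribute names here are hypothetical, not part of the original lesson):
```python
class ItemNotHeldError(Exception):
    """Hypothetical richer exception that keeps the offending data around."""
    def __init__(self, dog_name, item):
        super().__init__('{} is not holding {}!'.format(dog_name, item))
        self.dog_name = dog_name
        self.item = item

try:
    raise ItemNotHeldError('Rex', 'bola')
except ItemNotHeldError as exc:
    print(exc.dog_name, exc.item)   # handlers can use the attributes, not just the message text
```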
class A:
def ping(self):
print('ping', self)
class B(A):
def pong(self):
print('pong', self)
class C(A):
def pong(self):
print('PONG', self)
class D(B, C):
def ping(self):
super().ping()
print('post-ping:', self)
def pingpong(self):
self.ping()
super().ping()
self.pong()
C.pong(self)
d = D()
d.pong()
Explanation: To wrap up, here is the hierarchy of built-in exceptions. Each one is explained in the documentation:
BaseException
+-- SystemExit
+-- KeyboardInterrupt
+-- GeneratorExit
+-- Exception
+-- StopIteration
+-- StopAsyncIteration
+-- ArithmeticError
| +-- FloatingPointError
| +-- OverflowError
| +-- ZeroDivisionError
+-- AssertionError
+-- AttributeError
+-- BufferError
+-- EOFError
+-- ImportError
+-- LookupError
| +-- IndexError
| +-- KeyError
+-- MemoryError
+-- NameError
| +-- UnboundLocalError
+-- OSError
| +-- BlockingIOError
| +-- ChildProcessError
| +-- ConnectionError
| | +-- BrokenPipeError
| | +-- ConnectionAbortedError
| | +-- ConnectionRefusedError
| | +-- ConnectionResetError
| +-- FileExistsError
| +-- FileNotFoundError
| +-- InterruptedError
| +-- IsADirectoryError
| +-- NotADirectoryError
| +-- PermissionError
| +-- ProcessLookupError
| +-- TimeoutError
+-- ReferenceError
+-- RuntimeError
| +-- NotImplementedError
| +-- RecursionError
+-- SyntaxError
| +-- IndentationError
| +-- TabError
+-- SystemError
+-- TypeError
+-- ValueError
| +-- UnicodeError
| +-- UnicodeDecodeError
| +-- UnicodeEncodeError
| +-- UnicodeTranslateError
+-- Warning
+-- DeprecationWarning
+-- PendingDeprecationWarning
+-- RuntimeWarning
+-- SyntaxWarning
+-- UserWarning
+-- FutureWarning
+-- ImportWarning
+-- UnicodeWarning
+-- BytesWarning
+-- ResourceWarning
Multiple inheritance
Python supports multiple inheritance. Using it looks much like single inheritance, except that the subclass inherits from more than one class, for example:
```py
class Subclasse(Base1, Base2, Base3):
...
...
```
Some well-known languages, such as Java, C# and Ruby, do not implement multiple inheritance (they provide other techniques for the same functionality), because when used incorrectly it can produce ambiguous systems that are hard to understand.
A very common issue with multiple inheritance is the diamond problem: ambiguity arises when class A implements a method that is overridden in subclasses B and C, and D, which does not override it, invokes that method - which one should be called, B's or C's?
Let's build an example that reproduces the diamond problem and show how Python resolves it:
End of explanation
C.pong(d)
Explanation: As we can see, the method called through the instance of class D was B.pong(); we can explicitly call C.pong() by passing the instance as an explicit argument:
End of explanation
d.ping()
Explanation: This way of calling methods is only used to resolve ambiguity caused by multiple inheritance. The recommended way to delegate method calls to superclasses is the built-in super() function, which is safer and more future-proof, especially when we call methods of a framework or of any class hierarchy we do not control.
Let's see how D.ping() behaves:
End of explanation
d.pingpong()
Explanation: d.ping() called A.ping() (which printed ping) and then printed post-ping.
Now let's analyze the calls made by the pingpong method:
End of explanation
D.__mro__
Explanation: First D.ping() was called, which calls A.ping() and prints post-ping.
Then super().ping() was called, which calls only A.ping().
super().pong() was called, which called B.pong().
Finally, C.pong(self) was called directly, passing the instance of D.
Now that we understand how this diamond-problem example runs, we can talk about how Python resolves the ambiguity: by walking the inheritance graph in a specific order, the MRO - Method Resolution Order.
Python looks for the method in the class itself, then in each superclass from left to right (without repetition, in case the hierarchy overlaps), then in the superclasses of those superclasses from left to right, and so on until it reaches object (inherited by every class).
Every class has an attribute called __mro__ holding a tuple of references to its superclasses in MRO order, from the class itself up to object. Let's see the MRO of class D:
End of explanation
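A small extra illustration (not in the original example; it reuses the classes A and C defined above): super() is resolved against the MRO of the instance's class, so the same method body can end up delegating to different superclasses depending on which class inherits it:
```python
class B2(A):
    def pong(self):
        print('pong', self)
        super().pong()      # looked up in type(self).__mro__, starting after B2

class D2(B2, C):
    pass

D2().pong()     # B2's 'pong', then C's 'PONG', because C follows B2 in D2.__mro__
                # (on a plain B2 instance this call would fail: A defines no pong)
```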
bool.__mro__
from numbers import Integral
Integral.__mro__
from io import StringIO
StringIO.__mro__
Explanation: Here we confirm what was said earlier: an attribute is looked up in the following order: D, B, C, A and object.
Let's inspect the __mro__ attribute of some other classes:
End of explanation
from django.views.generic import TemplateView
TemplateView.__mro__
Explanation: Dealing with multiple inheritance
As mentioned before, multiple inheritance can make a code base very complicated and fragile. Here are some tips for keeping class graphs simpler:
Distinguish interface inheritance (creating a subtype, implying an "is a" relationship) from implementation inheritance (avoiding code duplication through reuse)
Make interfaces explicit with ABCs
Use mixins and make them explicit by name
An ABC may be a mixin, but not the reverse
Do not inherit from more than one concrete class
Provide aggregate classes to users
"Favor object composition over class inheritance" (GoF, Design Patterns)
Mixins
"A mixin is a class that provides functionality to be inherited, but isn't meant to be instantiated on its own. [...]
it can be used to enhance the functionality and behavior of classes" (Greenfeld and Greenfeld, Two Scoops of Django 1.8)
Rules for using mixins (a short sketch follows right after this cell):
1. The base class should always be on the right
2. Mixins should be to the left of the base class
3. Classes and mixins should inherit from object (this is already the default in Python 3)
Multiple inheritance in practice: Django
Django is a very popular Python web framework. It makes writing views easier by offering generic Class Based Views (CBVs) that implement common web-development tasks such as listing objects, rendering static pages and creating redirect views.
The Class Based Views were built with multiple inheritance and follow the tips listed above. The class diagram below helps us understand how multiple inheritance can be put to good use in practice:
TemplateView and RedirectView
TemplateView renders the contents of a static page, while RedirectView simply redirects one page to another.
Let's see how Python resolves TemplateView's inheritance by checking its __mro__ attribute:
End of explanation
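A minimal, hypothetical mixin that follows the rules above (JsonMixin and its behavior are invented for illustration; Cão is the class defined earlier in this lesson):
```python
import json

class JsonMixin:
    """Provides behavior to be inherited; not meant to be instantiated on its own."""
    def to_json(self):
        return json.dumps(self.__dict__)

class JsonCão(JsonMixin, Cão):       # mixins on the left, the concrete base class on the right
    pass

JsonCão('Fifi').to_json()            # '{"nome": "Fifi", "data_nascimento": null}'
```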
from django.views.generic import RedirectView
RedirectView.__mro__
Explanation: Now let's check RedirectView.__mro__:
End of explanation
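Before moving on, a quick sketch of how these aggregate classes are meant to be consumed (the template name and URL below are made-up values): instead of writing a view function by hand, you configure the generic class through as_view():
```python
from django.views.generic import TemplateView, RedirectView

about_view = TemplateView.as_view(template_name='about.html')   # renders a static template
go_home = RedirectView.as_view(url='/')                         # redirects to another URL
```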
from django.views.generic import DetailView
DetailView.__mro__
Explanation: DetailView
DetailView builds a page that displays the information of a single model instance. In an e-commerce site, for example, it makes it easy to create a page showing a single product.
End of explanation
from django.views.generic import ListView
ListView.__mro__
Explanation: ListView
ListView - a favorite among CBV users - makes it easy to create views that list model instances, for example listing every product on the site (or only products of a given type, category, etc.).
End of explanation
from django.views.generic import DeleteView
DeleteView.__mro__
Explanation: DeleteView
Finally, DeleteView allows a model instance to be deleted.
End of explanation |
48 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transformer with non-trivial phase shift and tap ratio
This example is a copy of pandapower's minimal example.
Step1: Now play with tap changer on LV side
Step2: Now make sure that the phase shift is also there in the LOPF | Python Code:
import pypsa
import numpy as np
import pandas as pd
network = pypsa.Network()
network.add("Bus", "MV bus", v_nom=20, v_mag_pu_set=1.02)
network.add("Bus", "LV1 bus", v_nom=0.4)
network.add("Bus", "LV2 bus", v_nom=0.4)
network.add(
"Transformer",
"MV-LV trafo",
type="0.4 MVA 20/0.4 kV",
bus0="MV bus",
bus1="LV1 bus",
)
network.add(
"Line", "LV cable", type="NAYY 4x50 SE", bus0="LV1 bus", bus1="LV2 bus", length=0.1
)
network.add("Generator", "External Grid", bus="MV bus", control="Slack")
network.add("Load", "LV load", bus="LV2 bus", p_set=0.1, q_set=0.05)
def run_pf():
network.lpf()
network.pf(use_seed=True)
return pd.DataFrame(
{
"Voltage Angles": network.buses_t.v_ang.loc["now"] * 180.0 / np.pi,
"Volate Magnitude": network.buses_t.v_mag_pu.loc["now"],
}
)
run_pf()
network.transformers.tap_position = 2
run_pf()
network.transformers.tap_position = -2
run_pf()
Explanation: Transformer with non-trivial phase shift and tap ratio
This example is a copy of pandapower's minimal example.
End of explanation
new_trafo_lv_tap = network.transformer_types.loc[["0.4 MVA 20/0.4 kV"]]
new_trafo_lv_tap.index = ["New trafo"]
new_trafo_lv_tap.tap_side = 1
new_trafo_lv_tap.T
network.transformer_types = network.transformer_types.append(new_trafo_lv_tap)
network.transformers.type = "New trafo"
network.transformers.tap_position = 2
run_pf()
network.transformers.T
network.transformers.tap_position = -2
run_pf()
Explanation: Now play with tap changer on LV side
End of explanation
network.generators.p_nom = 1.0
network.lines.s_nom = 1.0
network.lopf()
pd.DataFrame(
{
"Voltage Angles": network.buses_t.v_ang.loc["now"] * 180.0 / np.pi,
"Volate Magnitude": network.buses_t.v_mag_pu.loc["now"],
}
)
Explanation: Now make sure that the phase shift is also there in the LOPF
End of explanation |
49 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
textset = set(text)
vocab_to_int = { word: i for i, word in enumerate(textset)}
int_to_vocab = { i : word for i, word in enumerate(textset)}
return (vocab_to_int, int_to_vocab)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
return { '.' : '||period||',
',' : '||comma||',
'"' : '||quotes||',
';' : '||semicolon||',
'!' : '||exclamation_mark||',
'?' : '||question_mark||',
'(' : '||lparen||',
')' : '||rparen||',
'--' : '||dash||',
'\n' : '||return||'}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
Input = tf.placeholder(tf.int32, shape = [None, None], name="input")
Targets = tf.placeholder(tf.int32, shape = [None, None])
LearningRate = tf.placeholder(tf.float32)
return Input, Targets, LearningRate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
num_layers = 1
keep_prob = 1.0
#cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
Cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicLSTMCell(rnn_size), output_keep_prob=keep_prob) for _ in range(num_layers)]) #[cell] * num_layers)
InitialState = Cell.zero_state(batch_size, tf.float32)
InitialState = tf.identity(InitialState, name="initial_state")
return Cell, InitialState
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim)))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
Outputs, FinalState = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
FinalState = tf.identity(FinalState, name="final_state")
return Outputs, FinalState
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
embed_input = get_embed(input_data, vocab_size, embed_dim)
rnn_output, FinalState = build_rnn(cell, embed_input)
Logits = tf.contrib.layers.fully_connected(inputs = rnn_output, num_outputs = vocab_size, activation_fn = None,
weights_initializer = tf.truncated_normal_initializer(stddev=0.1))
#biases_initializer = tf.zeros_initializer())
# seq_output = tf.concat(rnn_output, axis=1)
# x = tf.reshape(rnn_output, [-1, rnn_size])
# weights = tf.Variable(tf.truncated_normal((rnn_size, vocab_size), stddev=0.1))
# biases = tf.Variable(tf.zeros(vocab_size))
# Logits = tf.matmul(x, weights) + biases
# Logits = tf.reshape(Logits, input_data.get_shape().as_list() + [vocab_size])
return Logits, FinalState
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
words_per_batch = batch_size * seq_length
nbatches = len(int_text) // words_per_batch
int_text = int_text[:nbatches * words_per_batch]
# Yes, I know it's not the most pythonic way to do it like this.
# It was just the simplest way for me to get the batches right the first time,
# and focus on the other parts of the exercise.
batches = np.array(np.zeros((nbatches, 2, batch_size, seq_length), dtype=np.int32))
batchno = 0
posinbatch = 0
for i in range(0, len(int_text), seq_length):
for j in range(seq_length):
batches[batchno][0][posinbatch][j] = int_text[i + j]
batches[batchno][1][posinbatch][j] = int_text[(i + j + 1) % len(int_text)]
batchno += 1
if batchno == nbatches:
batchno = 0
posinbatch += 1
return batches
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
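Not part of the original notebook, but for reference: the same (number of batches, 2, batch size, sequence length) array can be built without explicit Python loops. This sketch assumes the exact layout produced by the implementation above (sequences striped across batches, targets shifted by one word and wrapping around to the first word):
```python
def get_batches_vectorized(int_text, batch_size, seq_length):
    n_batches = len(int_text) // (batch_size * seq_length)
    arr = np.array(int_text[:n_batches * batch_size * seq_length])
    targets = np.roll(arr, -1)                                     # next word, wrapping to the start
    x = np.split(arr.reshape(batch_size, -1), n_batches, axis=1)
    y = np.split(targets.reshape(batch_size, -1), n_batches, axis=1)
    return np.array(list(zip(x, y)))
```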
# Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 1024
# Embedding Dimension Size
embed_dim = 512
# Sequence Length
seq_length = 16
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 1
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
InputTensor = loaded_graph.get_tensor_by_name("input:0")
InitialStateTensor = loaded_graph.get_tensor_by_name("initial_state:0")
FinalStateTensor = loaded_graph.get_tensor_by_name("final_state:0")
ProbsTensor = loaded_graph.get_tensor_by_name("probs:0")
return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
val = np.random.uniform()
for i, p in enumerate(probabilities):
if val < p:
return int_to_vocab[i]
val -= p
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
50 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook demonstrates some basic post-processing tasks that can be performed with the Python API, such as plotting a 2D mesh tally and plotting neutron source sites from an eigenvalue calculation. The problem we will use is a simple reflected pin-cell.
Step1: Generate Input Files
First we need to define materials that will be used in the problem. We'll create three materials for the fuel, water, and cladding of the fuel pin.
Step2: With our three materials, we can now create a materials file object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.
Step4: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
Step5: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step6: We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML.
Step7: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 90 active batches each with 5000 particles.
Step8: Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.
Step9: As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a 2D mesh tally.
Step10: Now we a have a complete set of inputs, so we can go ahead and run our simulation.
Step11: Tally Data Processing
Our simulation ran successfully and created a statepoint file with all the tally data in it. We begin our analysis here loading the statepoint file and 'reading' the results. By default, data from the statepoint file is only read into memory when it is requested. This helps keep the memory use to a minimum even when a statepoint file may be huge.
Step12: Next we need to get the tally, which can be done with the StatePoint.get_tally(...) method.
Step13: The statepoint file actually stores the sum and sum-of-squares for each tally bin from which the mean and variance can be calculated as described here. The sum and sum-of-squares can be accessed using the sum and sum_sq properties
Step14: However, the mean and standard deviation of the mean are usually what you are more interested in. The Tally class also has properties mean and std_dev which automatically calculate these statistics on-the-fly.
Step15: The tally data has three dimensions
Step16: To get the bins into a form that we can plot, we can simply change the shape of the array since it is a numpy array.
Step17: Now let's say we want to look at the distribution of relative errors of our tally bins for flux. First we create a new variable called relative_error and set it to the ratio of the standard deviation and the mean, being careful not to divide by zero in case some bins were never scored to.
Step18: Source Sites
Source sites can be accessed from the source property. As shown below, the source sites are represented as a numpy array with a structured datatype.
Step19: If we want, say, only the energies from the source sites, we can simply index the source array with the name of the field
Step20: Now, we can look at things like the energy distribution of source sites. Note that we don't directly use the matplotlib.pyplot.hist method since our binning is logarithmic.
Step21: Let's also look at the spatial distribution of the sites. To make the plot a little more interesting, we can also include the direction of the particle emitted from the source and color each source by the logarithm of its energy. | Python Code:
%matplotlib inline
from IPython.display import Image
import numpy as np
import matplotlib.pyplot as plt
import openmc
Explanation: This notebook demonstrates some basic post-processing tasks that can be performed with the Python API, such as plotting a 2D mesh tally and plotting neutron source sites from an eigenvalue calculation. The problem we will use is a simple reflected pin-cell.
End of explanation
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
water.add_nuclide('B10', 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
Explanation: Generate Input Files
First we need to define materials that will be used in the problem. We'll create three materials for the fuel, water, and cladding of the fuel pin.
End of explanation
# Instantiate a Materials collection
materials_file = openmc.Materials([fuel, water, zircaloy])
# Export to "materials.xml"
materials_file.export_to_xml()
Explanation: With our three materials, we can now create a materials file object that can be exported to an actual XML file.
End of explanation
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-0.63, boundary_type='reflective')
max_z = openmc.ZPlane(z0=+0.63, boundary_type='reflective')
Explanation: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.
End of explanation
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
pin_cell_universe.add_cell(moderator_cell)
Explanation: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = pin_cell_universe
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
End of explanation
# Create Geometry and set root Universe
geometry = openmc.Geometry(root_universe)
# Export to "geometry.xml"
geometry.export_to_xml()
Explanation: We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML.
End of explanation
# OpenMC simulation parameters
batches = 100
inactive = 10
particles = 5000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 90 active batches each with 5000 particles.
End of explanation
plot = openmc.Plot.from_geometry(geometry)
plot.pixels = (250, 250)
plot.to_ipython_image()
Explanation: Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.
End of explanation
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()
# Create mesh which will be used for tally
mesh = openmc.RegularMesh()
mesh.dimension = [100, 100]
mesh.lower_left = [-0.63, -0.63]
mesh.upper_right = [0.63, 0.63]
# Create mesh filter for tally
mesh_filter = openmc.MeshFilter(mesh)
# Create mesh tally to score flux and fission rate
tally = openmc.Tally(name='flux')
tally.filters = [mesh_filter]
tally.scores = ['flux', 'fission']
tallies_file.append(tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
Explanation: As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a 2D mesh tally.
End of explanation
# Run OpenMC!
openmc.run()
Explanation: Now we have a complete set of inputs, so we can go ahead and run our simulation.
End of explanation
# Load the statepoint file
sp = openmc.StatePoint('statepoint.100.h5')
Explanation: Tally Data Processing
Our simulation ran successfully and created a statepoint file with all the tally data in it. We begin our analysis here by loading the statepoint file and 'reading' the results. By default, data from the statepoint file is only read into memory when it is requested. This helps keep the memory use to a minimum even when a statepoint file may be huge.
End of explanation
tally = sp.get_tally(scores=['flux'])
print(tally)
Explanation: Next we need to get the tally, which can be done with the StatePoint.get_tally(...) method.
End of explanation
tally.sum
Explanation: The statepoint file actually stores the sum and sum-of-squares for each tally bin from which the mean and variance can be calculated as described here. The sum and sum-of-squares can be accessed using the sum and sum_sq properties:
End of explanation
print(tally.mean.shape)
(tally.mean, tally.std_dev)
Explanation: However, the mean and standard deviation of the mean are usually what you are more interested in. The Tally class also has properties mean and std_dev which automatically calculate these statistics on-the-fly.
End of explanation
flux = tally.get_slice(scores=['flux'])
fission = tally.get_slice(scores=['fission'])
print(flux)
Explanation: The tally data has three dimensions: one for filter combinations, one for nuclides, and one for scores. We see that there are 10000 filter combinations (corresponding to the 100 x 100 mesh bins), a single nuclide (since none was specified), and two scores. If we only want to look at a single score, we can use the get_slice(...) method as follows.
End of explanation
flux.std_dev.shape = (100, 100)
flux.mean.shape = (100, 100)
fission.std_dev.shape = (100, 100)
fission.mean.shape = (100, 100)
fig = plt.subplot(121)
fig.imshow(flux.mean)
fig2 = plt.subplot(122)
fig2.imshow(fission.mean)
Explanation: To get the bins into a form that we can plot, we can simply change the shape of the array since it is a numpy array.
End of explanation
# Determine relative error
relative_error = np.zeros_like(flux.std_dev)
nonzero = flux.mean > 0
relative_error[nonzero] = flux.std_dev[nonzero] / flux.mean[nonzero]
# distribution of relative errors
ret = plt.hist(relative_error[nonzero], bins=50)
Explanation: Now let's say we want to look at the distribution of relative errors of our tally bins for flux. First we create a new variable called relative_error and set it to the ratio of the standard deviation and the mean, being careful not to divide by zero in case some bins were never scored to.
End of explanation
sp.source
Explanation: Source Sites
Source sites can be accessed from the source property. As shown below, the source sites are represented as a numpy array with a structured datatype.
End of explanation
sp.source['E']
Explanation: If we want, say, only the energies from the source sites, we can simply index the source array with the name of the field:
End of explanation
# Create log-spaced energy bins from 1 keV to 10 MeV
energy_bins = np.logspace(3,7)
# Calculate pdf for source energies
probability, bin_edges = np.histogram(sp.source['E'], energy_bins, density=True)
# Make sure integrating the PDF gives us unity
print(sum(probability*np.diff(energy_bins)))
# Plot source energy PDF
plt.semilogx(energy_bins[:-1], probability*np.diff(energy_bins), drawstyle='steps')
plt.xlabel('Energy (eV)')
plt.ylabel('Probability/eV')
Explanation: Now, we can look at things like the energy distribution of source sites. Note that we don't directly use the matplotlib.pyplot.hist method since our binning is logarithmic.
End of explanation
plt.quiver(sp.source['r']['x'], sp.source['r']['y'],
sp.source['u']['x'], sp.source['u']['y'],
np.log(sp.source['E']), cmap='jet', scale=20.0)
plt.colorbar()
plt.xlim((-0.5,0.5))
plt.ylim((-0.5,0.5))
Explanation: Let's also look at the spatial distribution of the sites. To make the plot a little more interesting, we can also include the direction of the particle emitted from the source and color each source by the logarithm of its energy.
End of explanation |
51 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 8 - Local Realism
with Alice and Bob
Step2: First define the projection operator for a state at angle $\theta$
Step3: Create the projection operators for each of the angles, two for Alice, two for Bob
Step4: Create the state $\big|\psi\big\rangle = \sqrt{0.2} \big|H,H\big\rangle + \sqrt{0.8} \big|V,V\big\rangle$
Step5: Now, find the joint probability that Alice measures A1 and Bob measures B1. We do this by finding the expectation value of the projection operator for the joint state $\big|\theta_{A1},\theta_{B1}\big\rangle$. This is formed as the tensor product of the two appropriate projection operators. In these tensor products, be sure to put Alice's operator first, then Bob's (just like we did for the signal and idler photons). Each operator acts on the photon corresponding to the order in the tensor() function.
Notice we'll be using a new function expect(). This is equivalent to putting the operator in between the state bra and ket
Step6: Find the conditional probability $P(\theta_{B2}|\theta_{A1}) = \frac{P(\theta_{B2},\theta_{A1})}{P(\theta_{A1})}$
Step7: Find the conditional probability $P(\theta_{A2}|\theta_{B1}) = \frac{P(\theta_{A2},\theta_{B1})}{P(\theta_{B1})}$
Step8: This is what we described in class.
What if the state was just $|H,H\rangle$?
Step9: This is harder to interpret, but we clearly have different probabilities. Finally, check if we had used a mixed state
Step10: We see that $P(\theta_{B2},\theta_{A2}) > P(\theta_{B1},\theta_{A1})$ as we said in class for a state that obeys realism.
Now repeat with the pure state but using density matrix techniques.
This isn't going to tell us anything new, but it shows how to work with the density matrix if you already know the ket state.
Step11: The calculations are actually the same in QuTiP, the expect function takes either a ket state or a density matrix.
Step12: These all agree (as they should).
Explore the angles in more detail
Step13: Make a list of the probability of joint measurements for a pair of angles
Step14: We see that the joint probabilities have a zero at 35˚. Now plug that in to one of the conditional probabilities and see what angle for the conditional probability gives 1
Step15: So only 19 and 35 work. Now, can you derive 19 and 35 given only the state $|\psi\rangle$? Try the first plot, i.e. calculate the joint probability $P(\theta_A,\theta_B)$
Solution
Using the state, write the projection operators for a two photon state with angles $\theta_A$ and $\theta_B$. First, recall $$\big|\theta_i\big\rangle = \cos\theta_i\big|H\big\rangle + \sin\theta_i\big|V\big\rangle.$$ Next, form the two-photon state
Step16: Challenge | Python Code:
from numpy import sin,cos,pi,sqrt,angle,exp,deg2rad,arange,rad2deg
import matplotlib.pyplot as plt
from qutip import *
%matplotlib inline
H = Qobj([[1],[0]])
V = Qobj([[0],[1]])
Explanation: Chapter 8 - Local Realism
with Alice and Bob
End of explanation
def P(theta):
    """The projection operator for a state at angle theta"""
    theta_ket = cos(theta)*H + sin(theta)*V
    return theta_ket*theta_ket.dag()
Explanation: First define the projection operator for a state at angle $\theta$:
End of explanation
Pa1 = P(deg2rad(19))
Pa2 = P(deg2rad(-35))
Pb1 = P(deg2rad(-19))
Pb2 = P(deg2rad(35))
Explanation: Create the projection operators for each of the angles, two for Alice, two for Bob
End of explanation
psi=sqrt(0.2)*tensor(H,H) + sqrt(0.8)*tensor(V,V)
Explanation: Create the state $\big|\psi\big\rangle = \sqrt{0.2} \big|H,H\big\rangle + \sqrt{0.8} \big|V,V\big\rangle$:
End of explanation
P1 = expect(tensor(Pa1,Pb1),psi) # joint for A1, B1 (expect 0.09)
P2 = psi.dag()*tensor(Pa1,Pb1)*psi
P1 == P2.data[0,0] # The only difference is that we have to pull out the value
# from the Qobj using the .data[0,0] method so we can compare it to result from `expect`
P1
Explanation: Now, find the joint probability that Alice measures A1 and Bob measures B1. We do this by finding the expectation value of the projection operator for the joint state $\big|\theta_{A1},\theta_{B1}\big\rangle$. This is formed as the tensor product of the two appropriate projection operators. In these tensor products, be sure to put Alice's operator first, then Bob's (just like we did for the signal and idler photons). Each operator acts on the photon corresponding to the order in the tensor() function.
Notice we'll be using a new function expect(). This is equivalent to putting the operator in between the state bra and ket:
End of explanation
# B2 conditioned on A1 (expect 1)
Prob_b2_a1 = expect(tensor(Pa1,Pb2),psi)
#(psi.dag()*tensor(Pa1,Pb2)*psi).data[0,0] # the joint probability
Prob_a1 = expect(tensor(Pa1,qeye(2)),psi)
#(psi.dag()*tensor(Pa1,qeye(2))*psi).data[0,0] # the singular probability
Prob_b2a1 = Prob_b2_a1 / Prob_a1 # the conditional probability
Prob_b2a1
Explanation: Find the conditional probability $P(\theta_{B2}|\theta_{A1}) = \frac{P(\theta_{B2},\theta_{A1})}{P(\theta_{A1})}$
End of explanation
# A2 conditioned on B1 (expect 1)
# can do it all on one line:
expect(tensor(Pa2,Pb1),psi) / expect(tensor(qeye(2),Pb1),psi)
expect(tensor(Pa2,Pb2),psi) # joint for A2, B2 (classically expect 0.09, QM says 0)
Explanation: Find the conditional probability $P(\theta_{A2}|\theta_{B1}) = \frac{P(\theta_{A2},\theta_{B1})}{P(\theta_{B1})}$
End of explanation
psi2=tensor(H,H)
expect(tensor(Pa1,Pb1),psi2) # joint for A1, B1 (expect 0.09)
# B2 conditioned on A1:
expect(tensor(Pa1,Pb2),psi2) / expect(tensor(Pa1,qeye(2)),psi2)
# A2 conditioned on B1
expect(tensor(Pa2,Pb1),psi2) / expect(tensor(qeye(2),Pb1),psi2)
# joint for A2, B2
expect(tensor(Pa2,Pb2),psi2)
Explanation: This is what we described in class.
What if the state was just $|H,H\rangle$?
End of explanation
rho_mix = 0.2 * ket2dm(tensor(H,H)) + 0.8 * ket2dm(tensor(V,V))
rho_mix
# joint for A1, B1
expect(tensor(Pa1,Pb1),rho_mix)
# B2 conditioned on A1
expect(tensor(Pa1,Pb2),rho_mix) / expect(tensor(Pa1,qeye(2)),rho_mix)
# A2 conditioned on B1
expect(tensor(Pa2,Pb1),rho_mix) / expect(tensor(qeye(2),Pb1),rho_mix)
# joint for A2, B2:
expect(tensor(Pa2,Pb2),rho_mix)
Explanation: This is harder to interpret, but we clearly have different probabilities. Finally, check if we had used a mixed state:
A mixed state instead of the pure (entangled state).
Here we have to use the density matrix (since a ket cannot describe a mixed state). First some background:
QuTiP has a function that gives the density matrix from a ket state: ket2dm.
End of explanation
rho_pure = ket2dm(psi) # convert from a ket to a density matrix (dm)
rho_pure
Explanation: We see that $P(\theta_{B2},\theta_{A2}) > P(\theta_{B1},\theta_{A1})$ as we said in class for a state that obeys realism.
Now repeat with the pure state but using density matrix techniques.
This isn't going to tell us anything new, but it shows how to work with the density matrix if you already know the ket state.
End of explanation
# joint for A1, B1
expect(tensor(Pa1,Pb1),rho_pure)
# B2 conditioned on A1
expect(tensor(Pa1,Pb2),rho_pure) / expect(tensor(Pa1,qeye(2)),rho_pure)
# A2 conditioned on B1
expect(tensor(Pa2,Pb1),rho_pure) / expect(tensor(qeye(2),Pb1),rho_pure)
# joint for A2, B2:
expect(tensor(Pa2,Pb2),rho_pure)
Explanation: The calculations are actually the same in QuTiP, the expect function takes either a ket state or a density matrix.
End of explanation
psi=sqrt(0.2)*tensor(H,H) + sqrt(0.8)*tensor(V,V)
angles = arange(1,90,1)
rads = deg2rad(angles)
Explanation: These all agree (as they should).
Explore the angles in more detail:
Why these angles, 19 and 35?
End of explanation
out = []
for r in rads:
out.append(expect(tensor(P(-r),P(r)),psi))
plt.plot(angles,out,".") # plot versus angle in degrees
Explanation: Make a list of the probability of joint measurements for a pair of angles:
End of explanation
out = []
for r in rads:
out.append(expect(tensor(P(r),P(deg2rad(35))),psi) / expect(tensor(P(r),qeye(2)),psi))
plt.plot(angles,out,".")
Explanation: We see that the joint probabilities have a zero at 35˚. Now plug that in to one of the conditional probabilities and see what angle for the conditional probability gives 1:
End of explanation
# Solution:
# For the first plot, we can show the joint probability for two angles is given by:
plt.plot(rad2deg(rads),(sqrt(0.2)*cos(-rads)*cos(rads) + sqrt(0.8)*sin(-rads)*sin(rads))**2)
Explanation: So only 19 and 35 work. Now, can you derive 19 and 35 given only the state $|\psi\rangle$? Try the first plot, i.e. calculate the joint probability $P(\theta_A,\theta_B)$
Solution
Using the state, write the projection operators for a two photon state with angles $\theta_A$ and $\theta_B$. First, recall $$\big|\theta_i\big\rangle = \cos\theta_i\big|H\big\rangle + \sin\theta_i\big|V\big\rangle.$$ Next, form the two-photon state: $$\big|\theta_A,\theta_B\big\rangle = \big|\theta_A\big\rangle \otimes \big|\theta_B\big\rangle = \left(\cos\theta_A\big|H\big\rangle + \sin\theta_A\big|V\big\rangle\right) \otimes \left(\cos\theta_B\big|H\big\rangle + \sin\theta_B\big|V\big\rangle\right)$$
which we can reduce to:
$$=\cos\theta_A\cos\theta_B\big|H,H\big\rangle + \cos\theta_A\sin\theta_B\big|H,V\big\rangle + \sin\theta_A\cos\theta_B\big|V,H\big\rangle + \sin\theta_A\sin\theta_B\big|V,V\big\rangle.$$
Find the probability of a joint measurement of polarizations $\theta_A$ and $\theta_B$:
$$P(\theta_A,\theta_B) = \big|\big\langle\psi\big|\theta_A,\theta_B\big\rangle\big|^2$$
Since $\big|\psi\big\rangle$ only has $\big|H,H\big\rangle$ and $\big|V,V\big\rangle$ terms, this probability only has two terms:
$$P(\theta_A,\theta_B) = \left|\sqrt{0.2}\cos\theta_A\cos\theta_B + \sqrt{0.8}\sin\theta_A\sin\theta_B\right|^2$$
Plot is shown below for $\theta_A = -\theta_B$ and it agrees perfectly with our model above.
End of explanation
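# A quick sketch of where the two special angles come from, using only the state
# coefficients sqrt(0.2) and sqrt(0.8) (no new QuTiP machinery). Setting the joint
# probability for angles (-theta, theta) to zero gives tan(theta)**2 = sqrt(0.2/0.8), and
# requiring Bob's conditional state to line up with |theta_B> gives
# tan(theta_A) = sqrt(0.2/0.8)*tan(theta_B).
from numpy import arctan, tan
theta_B = arctan((0.2/0.8)**0.25)             # ~35.26 degrees, which rounds to the 35 seen above
theta_A = arctan(sqrt(0.2/0.8)*tan(theta_B))  # ~19.47 degrees, which rounds to the 19 seen above
rad2deg(theta_B), rad2deg(theta_A)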
# Solution
psi3=sqrt(0.8)*tensor(H,H) + sqrt(0.2)*tensor(V,V)
out = []
for r in rads:
out.append(expect(tensor(P(-r),P(r)),psi3))
plt.plot(angles,out,".") # plot versus angle in degrees
# Solution
out = []
for r in rads:
out.append(expect(tensor(P(r),P(deg2rad(55))),psi3) / expect(tensor(P(r),qeye(2)),psi3))
plt.plot(angles,out,".")
Explanation: Challenge:
If we change the state to $\big|\psi\big\rangle = \sqrt{0.8} \big|H,H\big\rangle + \sqrt{0.2} \big|V,V\big\rangle$, find the two angles that work for this state.
End of explanation |
52 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Maps
1. Introduction
Maps are a way to present information on a (roughly) spherical earth on a flat plane, like a page or a screen. Here are two examples of common map projections. The projection is only accurate in the region where the plane touches the sphere, and is less accurate as the distance between the plane and the sphere increases.
Mercator
Lambert conformal conic
You can read more about map projections from Map Projections – a Working Manual, the source of the images above, or, more entertainingly, from XKCD.
We'll use cartopy to plot on maps. Check out the gallery for inspiration.
Step1: Here we have the most basic projection
Step2: cartopy provides a number of projections. Available projections are
Step3: Let's make a map of the Gulf of Mexico using the LambertConformal projection. Projections take in different keywords to specify properties. For this projection, we can specify the central longitude and latitude, which control the center of the projection. Our selection in the example is not far off from the default, so it looks similar to the previous plot.
Step4: Don't forget to center your projections around your area of interest to be as accurate as possible for your purposes.
The map is plotted in a projected coordinate system, with units in meters, but the package deals with the projection behind the scenes. We can see this by looking at the limits of the two axes, which don't look like longitude/latitude at all
Step5: This same call on a plot set up with the PlateCarree projection, which is in geographic coordinates (lon/lat), does give limits in longitude and latitude, because in that case we told the plot to be in those coordinates; in this case, however, we said to use a LambertConformal.
We can use whatever type of coordinates we want, including latitude and longitude, as long as we tell cartopy which type we are using.
As you saw above, we set the limits of the plot not with xlim and ylim, but with set_extent and the appropriate projection object.
Exercise
Create a map of the Gulf of Mexico using a different projection. How does it compare to the map above?
This is pretty good, but there are some limitations in this package currently. One is that we can't add labels to the lat/lon lines for the Lambert Conformal Conic projection. We can do this using Mercator, though
Step6: When we want to add something to the plot, we just need to tell it what projection the information is given in using the transform keyword argument.
If the information is in latitude/longitude – typical for the way people tend to think about information (instead of projected locations) – then we give the Plate Carree projection with the transform keyword argument to the plot call
Step7: Exercise
Convert the Mercator coordinates given for Austin to latitude and longitude and confirm that they are correct.
Other features you can add
The code we used earlier, like
Step8: There are also some built-in colors, but you can use any matplotlib color available to color the land or water.
Step9: Using a higher resolution can make a pretty significant difference.
Here we will prepare the land and ocean information at the highest resolution available, then use it in the plot.
Step10: Here is a list (with reference names in some cases appended) of the many features that are available through Natural Earth | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import cartopy
import cartopy.crs as ccrs # commonly used shorthand
import cartopy.feature as cfeature
Explanation: Maps
1. Introduction
Maps are a way to present information on a (roughly) spherical earth on a flat plane, like a page or a screen. Here are two examples of common map projections. The projection is only accurate in the region where the plane touches the sphere, and is less accurate as the distance between the plane and the sphere increases.
Mercator
Lambert conformal conic
You can read more about map projections from Map Projections – a Working Manual, the source of the images above, or, more entertainingly, from XKCD.
We'll use cartopy to plot on maps. Check out the gallery for inspiration.
End of explanation
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
ax.coastlines(resolution='110m') # coastline resolution options are '110m', '50m', '10m'
ax.gridlines()
Explanation: Here we have the most basic projection: plate carrée, which is an equirectangular projection, and is essentially equivalent to just plotting the longitude and latitude values without a projection. I will refer to longitude and latitude as "geographic coordinates".
We can make an axes that is plotted in geographic coordinates (or, indeed, any projection we choose) by using the projection keyword argument to fig.add_subplot(). Here we also plot the coastline and add gridlines.
End of explanation
plt.figure()
ax = plt.axes(projection=ccrs.LambertConformal()) ## The map is in the Lambert Conformal projection
ax.coastlines(resolution='110m')
ax.gridlines()
Explanation: cartopy provides a number of projections. Available projections are:
PlateCarree
AlbersEqualArea
AzimuthalEquidistant
LambertConformal
LambertCylindrical
Mercator
Miller
Mollweide
Orthographic
Robinson
Sinusoidal
Stereographic
TransverseMercator
UTM
InterruptedGoodeHomolosine
RotatedPole
OSGB
EuroPP
Geostationary
Gnomonic
LambertAzimuthalEqualArea
NorthPolarStereo
OSNI
SouthPolarStereo
Lambert Conformal Conic is a useful projection in numerical modeling because it preserves right angles. Here we use the projection without any keyword specifications, but with the coastline plotted so that we have something to look at.
The projection that we choose in the axes line with projection= is the projection that the plot is in. Data from any projection can be plotted on this map, but we will have to tell it which projection it is in.
End of explanation
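# Any of the projection names listed above can be instantiated from ccrs in the same way.
# For example (a quick sketch -- the choice of Robinson here is arbitrary):
plt.figure()
ax = plt.axes(projection=ccrs.Robinson())
ax.coastlines(resolution='110m')
ax.gridlines()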
# the central_longitude and central_latitude parameters tell the projection where to be centered for the calculation
# The map is in Lambert Conformal
ax = plt.axes(projection=ccrs.LambertConformal(central_longitude=-85.0, central_latitude=25.0))
gl = ax.gridlines(linewidth=0.2, color='gray', alpha=0.5, linestyle='-')
# we control what we actually see in the plot with this:
# We can set the extent using latitude and longitude, but then we need to tell it the projection, which is
# PlateCarree since that is equivalent
# We are choosing the bounds of the map using geographic coordinates,
# then identifying as being in PlateCarree
ax.set_extent([-100, -70, 15, 35], ccrs.PlateCarree())
# add geographic information
ax.add_feature(cartopy.feature.LAND)
ax.add_feature(cartopy.feature.OCEAN)
ax.coastlines(resolution='110m') # looks better with resolution='10m'
ax.add_feature(cartopy.feature.BORDERS, linestyle='-', lw=.5)
ax.add_feature(cartopy.feature.RIVERS)
Explanation: Let's make a map of the Gulf of Mexico using the LambertConformal projection. Projections take in different keywords to specify properties. For this projection, we can specify the central longitude and latitude, which control the center of the projection. Our selection in the example is not far off from the default, so it looks similar to the previous plot.
End of explanation
ax.get_xlim(), ax.get_ylim()
Explanation: Don't forget to center your projections around your area of interest to be as accurate as possible for your purposes.
The map is plotted in a projected coordinate system, with units in meters, but the package deals with the projection behind the scenes. We can see this by looking at the limits of the two axes, which don't look like longitude/latitude at all:
End of explanation
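# As a quick check, we can convert one corner of the map from projected meters back to
# longitude/latitude. This re-creates the same Lambert Conformal projection used for the
# axes above (a sketch -- the exact numbers depend on the extent chosen).
lcc = ccrs.LambertConformal(central_longitude=-85.0, central_latitude=25.0)
x0, y0 = ax.get_xlim()[0], ax.get_ylim()[0]
print(ccrs.PlateCarree().transform_point(x0, y0, lcc))  # roughly the lower-left corner in lon/lat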
plt.figure(figsize=(10, 6))
# the central_longitude parameter tells the projection where to be centered for this axes
ax = plt.axes(projection=ccrs.Mercator(central_longitude=-85.0))
gl = ax.gridlines(linewidth=0.2, color='gray', alpha=0.5, linestyle='-', draw_labels=True)
# we control what we actually see in the plot with this:
# We can set the extent using latitude and longitude, but then we need to tell it the projection, which is
# PlateCarree since that is equivalent
ax.set_extent([-100, -70, 15, 35], ccrs.PlateCarree())
# add geographic information
ax.add_feature(cartopy.feature.LAND)
ax.add_feature(cartopy.feature.OCEAN)
ax.coastlines(resolution='110m') # looks better with resolution='10m'
ax.add_feature(cartopy.feature.BORDERS, linestyle='-', lw=.1)
ax.add_feature(cartopy.feature.RIVERS)
# Now we can add on lat/lon labels:
# more info: http://scitools.org.uk/cartopy/docs/v0.13/matplotlib/gridliner.html
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import matplotlib.ticker as mticker
# the following two make the labels look like lat/lon format
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlocator = mticker.FixedLocator([-105, -95, -85, -75, -65]) # control where the ticks are
gl.xlabel_style = {'size': 15, 'color': 'gray'} # control how the tick labels look
gl.ylabel_style = {'color': 'red', 'weight': 'bold'}
gl.xlabels_top = False # turn off labels where you don't want them
gl.ylabels_right = False
Explanation: This same call on a plot set up with the PlateCarree projection, which is in geographic coordinates (lon/lat), does give limits in longitude and latitude, because in that case we told the plot to be in those coordinates; in this case, however, we said to use a LambertConformal.
We can use whatever type of coordinates we want, including latitude and longitude, as long as we tell cartopy which type we are using.
As you saw above, we set the limits of the plot not with xlim and ylim, but with set_extent and the appropriate projection object.
Exercise
Create a map of the Gulf of Mexico using a different projection. How does it compare to the map above?
This is pretty good, but there are some limitations in this package currently. One is that we can't add labels to the lat/lon lines for the Lambert Conformal Conic projection. We can do this using Mercator, though:
End of explanation
projection = ccrs.Mercator()
x, y = projection.transform_point(-93.0-45.0/60.0, 27.0+55.0/60.0, ccrs.PlateCarree())
print(x, y)
Explanation: When we want to add something to the plot, we just need to tell it what projection the information is given in using the transform keyword argument.
If the information is in latitude/longitude – typical for the way people tend to think about information (instead of projected locations) – then we give the Plate Carree projection with the transform keyword argument to the plot call:
transform=ccrs.PlateCarree()
For example, to plot some points with a particular projection, you can type:
plt.plot(xpts, ypts, transform=ccrs.projection_that_xpts_and_ypts_are_given_in)
A nice thing about the cartopy package is that you can plot directly data from any projection — you just tell it the projection through the transform keyword argument when you add to the plot.
Exercise
The latitude and longitude of College Station are given below. Plot the location of College Station on the map above with a red dot.
lat_cll = 30.0 + 36.0/60.0 + 5.0/3600.0
lon_cll = -(96.0 + 18.0/60.0 + 52.0/3600.0)
What happens if you put in the wrong projection or no projection?
Exercise
Data from any projection can be added to a map, the data must just be input with its projection using the transform keyword.
The x, y location of Austin, TX, is given below in the Mercator projection. Plot the location of Austin in Mercator coordinates on the map above with a blue 'x'.
x, y = -10880707.173023093, 3516376.324225941
Point conversion
While cartopy removes the need to convert points on your own between projections (instead doing it behind the scenes), you can always convert between projections if you want using the following. Or, if you want to transform more than one point, use projection.transform_points(projection, x, y).
End of explanation
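# A minimal sketch of the pattern described above (one way to approach the first exercise):
# the College Station coordinates are in longitude/latitude, so we pass
# transform=ccrs.PlateCarree() even though the map itself uses the Mercator projection.
lat_cll = 30.0 + 36.0/60.0 + 5.0/3600.0
lon_cll = -(96.0 + 18.0/60.0 + 52.0/3600.0)
ax.plot(lon_cll, lat_cll, 'ro', transform=ccrs.PlateCarree())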
# this is another way to do `ax.add_feature(cartopy.feature.LAND)` but to have more control over it
# 50m: moderate resolution data
# set up for plotting land
land_50m = cfeature.NaturalEarthFeature('physical', 'land', '50m',
edgecolor='face', facecolor=cfeature.COLORS['land'])
# set up for plotting water at higher resolution
ocean_50m = cfeature.NaturalEarthFeature('physical', 'ocean', '50m',
edgecolor='face', facecolor=cfeature.COLORS['water'])
Explanation: Exercise
Convert the Mercator coordinates given for Austin to latitude and longitude and confirm that they are correct.
Other features you can add
The code we used earlier, like:
ax.add_feature(cartopy.feature.LAND)
was a convenience function wrapping more complex and capable code options. Here we explore a little more the capabilities. Note that this requires downloading data which you will see a warning about the first time you run the code.
We can set up the ability to plot with high resolution land data:
End of explanation
sorted(cfeature.COLORS.keys())
Explanation: There are also some built-in colors, but you can use any matplotlib color available to color the land or water.
End of explanation
land_10m = cfeature.NaturalEarthFeature('physical', 'land', '10m',
edgecolor='face',
facecolor=cfeature.COLORS['land'])
ocean_10m = cfeature.NaturalEarthFeature('physical', 'ocean', '10m',
edgecolor='face',
facecolor=cfeature.COLORS['water'])
projection=ccrs.LambertConformal(central_longitude=-95.0, central_latitude=29.0)
# Galveston Bay
fig = plt.figure(figsize=(15, 15))
# lower resolution
ax1 = fig.add_subplot(1,2,1, projection=projection)
ax1.set_extent([-96, -94, 28.5, 30], ccrs.PlateCarree())
ax1.add_feature(cartopy.feature.LAND)
ax1.add_feature(cartopy.feature.OCEAN)
# now higher resolution
ax2 = fig.add_subplot(1,2,2, projection=projection)
ax2.set_extent([-96, -94, 28.5, 30], ccrs.PlateCarree())
ax2.add_feature(ocean_10m)
ax2.add_feature(land_10m)
Explanation: Using a higher resolution can make a pretty significant difference.
Here we will prepare the land and ocean information at the highest resolution available, then use it in the plot.
End of explanation
projection=ccrs.PlateCarree()
fig = plt.figure()
ax = fig.add_subplot(111, projection=projection)
ax.set_extent([-125, -70, 24, 50], ccrs.PlateCarree())
ax.add_feature(cartopy.feature.LAND)
ax.add_feature(cartopy.feature.OCEAN)
states = cfeature.NaturalEarthFeature(category='cultural', scale='50m', facecolor='none',
name='admin_1_states_provinces_shp')
ax.add_feature(states, edgecolor='gray')
Explanation: Here is a list (with reference names in some cases appended) of the many features that are available through Natural Earth:
(10, 50, 110 for high, medium, low resolution)
Physical Vector Data Themes:
(physical)
Coastline (10, 50, 110): coastline
Land (10, 50, 110): land
Ocean (10, 50, 110): ocean
Minor Islands (10): minor_islands, minor_islands_coastline
Reefs (10): reefs
Physical region features (10): geography_regions_polys, geography_regions_points, geography_regions_elevation_points, geography_marine_polys
Rivers and Lake Centerlines (10, 50, 110): rivers_lake_centerlines
Lakes (10, 50, 110): lakes
Glaciated areas (10, 50, 110): glaciated_areas
Antarctic ice shelves (10, 50): antarctic_ice_shelves_polys, antarctic_ice_shelves_lines
Bathymetry (10): bathymetry_all or choose which depth(s)
Geographic lines (10, 50): geographic_lines
Graticules (10, 50, 110): (grid lines) graticules_all or choose degree interval
Raster Data Themes:
(raster: land coloring)
Cross Blended Hypsometric Tints (10, 50)
Natural Earth 1 (10, 50)
Natural Earth 2 (10, 50)
Ocean Bottom (10, 50)
Bathymetry (50)
Shaded Relief (10, 50)
Gray Earth (10, 50)
Manual Shaded Relief (10, 50)
Cultural Vector Data Themes:
(cultural)
Countries (10, 50, 110): admin_0_countries, admin_0_countries_lakes, admin_0_boundary_lines
Disputed areas and breakaway regions (10, 50)
First order admin (provinces, departments, states, etc.) (10, 50): e.g. admin_1_states_provinces_lines
Populated places (10, 50, 110)
Urban polygons (10, 50)
Parks and protected areas (10): parks_and_protected_lands
Pacific nation groupings (10, 50, 110)
Water boundary indicators (10)
Here is an example showing state boundaries:
End of explanation |
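# The same pattern works for any of the reference names listed above. For example, the 50m
# river and lake centerlines (a sketch -- the styling here is an arbitrary choice):
rivers_50m = cfeature.NaturalEarthFeature('physical', 'rivers_lake_centerlines', '50m',
                                          facecolor='none', edgecolor='blue')
ax.add_feature(rivers_50m)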
53 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Feature Matrix
Step2: Normalize Observations
Normalizer rescales the values on individual observations to have unit norm (each observation's feature vector is scaled so that its length is one).
# Load libraries
from sklearn.preprocessing import Normalizer
import numpy as np
Explanation: Title: Normalizing Observations
Slug: normalizing_observations
Summary: How to normalize observations for machine learning in Python.
Date: 2016-09-06 12:00
Category: Machine Learning
Tags: Preprocessing Structured Data
Authors: Chris Albon
<a alt="Normalizing Observations" href="https://machinelearningflashcards.com">
<img src="normalizing_observations/Normalizing_Observations_print.png" class="flashcard center-block">
</a>
Preliminaries
End of explanation
# Create feature matrix
X = np.array([[0.5, 0.5],
[1.1, 3.4],
[1.5, 20.2],
[1.63, 34.4],
[10.9, 3.3]])
Explanation: Create Feature Matrix
End of explanation
# Create normalizer
normalizer = Normalizer(norm='l2')
# Transform feature matrix
normalizer.transform(X)
Explanation: Normalize Observations
Normalizer rescales the values on individual observations to have unit norm (each observation's feature vector is scaled so that its length is one).
End of explanation |
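# A quick check (using numpy) that each transformed observation really has unit L2 norm:
X_l2 = normalizer.transform(X)
np.linalg.norm(X_l2, axis=1)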
54 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python
by Maxwell Margenot
Part of the Quantopian Lecture Series
Step2: You may hear text enclosed in triple quotes (
Step4: Make sure you read the comments within each code cell (if they are there). They will provide more real-time explanations of what is going on as you look at each line of code.
Variables
Variables provide names for values in programming. If you want to save a value for later or repeated use, you give the value a name, storing the contents in a variable. Variables in programming work in a fundamentally similar way to variables in algebra, but in Python they can take on various different data types.
The basic variable types that we will cover in this section are integers, floating point numbers, booleans, and strings.
An integer in programming is the same as in mathematics, a round number with no values after the decimal point. We use the built-in print function here to display the values of our variables as well as their types!
Step5: Variables, regardless of type, are assigned by using a single equals sign (=). Variables are case-sensitive, so any variation in the capitalization of a variable name will reference a different variable entirely.
Step6: A floating point number, or a float is a fancy name for a real number (again as in mathematics). To define a float, we need to either include a decimal point or specify that the value is a float.
Step7: A variable of type float will not round the number that you store in it, while a variable of type integer will. This makes floats more suitable for mathematical calculations where you want more than just integers.
Note that as we used the float() function to force a number to be considered a float, we can use the int() function to force a number to be considered an int.
Step8: The int() function will also truncate any digits that a number may have after the decimal point!
Strings allow you to include text as a variable to operate on. They are defined using either single quotes ('') or double quotes ("").
Step9: Both are allowed so that we can include apostrophes or quotation marks in a string if we so choose.
Step10: Booleans, or bools are binary variable types. A bool can only take on one of two values, these being True or False. There is much more to this idea of truth values when it comes to programming, which we cover later in the Logical Operators of this notebook.
Step11: There are many more data types that you can assign as variables in Python, but these are the basic ones! We will cover a few more later as we move through this tutorial.
Basic Math
Python has a number of built-in math functions. These can be extended even further by importing the math package or by including any number of other calculation-based packages.
All of the basic arithmetic operations are supported
Step12: If you are not familiar with the mod operator, it operates like a remainder function. If we type $15 \ \% \ 4$, it will return the remainder after dividing $15$ by $4$.
Step13: Mathematical functions also work on variables!
Step14: Make sure that your variables are floats if you want to have decimal points in your answer. If you perform math exclusively with integers, you get an integer. Including any float in the calculation will make the result a float.
Step15: Python has a few built-in math functions. The most notable of these are
Step16: The math library adds a long list of new mathematical functions to Python. Feel free to check out the documentation for the full list and details. It includes some mathematical constants
Step17: As well as some commonly used math functions
Step18: Collections
Lists
A list in Python is an ordered collection of objects that can contain any data type. We define a list using brackets ([]).
Step19: We can access and index the list by using brackets as well. In order to select an individual element, simply type the list name followed by the index of the item you are looking for in brackets.
Step20: Indexing in Python starts from $0$. If you have a list of length $n$, the first element of the list is at index $0$, the second element is at index $1$, and so on and so forth. The final element of the list will be at index $n-1$. Be careful! Trying to access a non-existent index will cause an error.
Step21: We can see the number of elements in a list by calling the len() function.
Step22: We can update and change a list by accessing an index and assigning a new value.
Step23: This is fundamentally different from how strings are handled. A list is mutable, meaning that you can change a list's elements without changing the list itself. Some data types, like strings, are immutable, meaning you cannot change them at all. Once a string or other immutable data type has been created, it cannot be directly modified without creating an entirely new object.
Step24: As we stated before, a list can contain any data type. Thus, lists can also contain strings.
Step25: Lists can also contain multiple different data types at once!
Step26: If you want to put two lists together, they can be combined with a + symbol.
Step27: In addition to accessing individual elements of a list, we can access groups of elements through slicing.
Step28: Slicing
We use the colon (
Step29: Using
Step30: And everything before a certain point
Step31: Using negative numbers will count from the end of the indices instead of from the beginning. For example, an index of -1 indicates the last element of the list.
Step32: You can also add a third component to slicing. Instead of simply indicating the first and final parts of your slice, you can specify the step size that you want to take. So instead of taking every single element, you can take every other element.
Step33: Here we have selected the entire list (because 0
Step34: Lists implictly select the beginning and end of the list when not otherwise specified.
Step35: With a negative step size we can even reverse the list!
Step36: Python does not have native matrices, but with lists we can produce a working facsimile. Other packages, such as numpy, add matrices as a separate data type, but in base Python the best way to create a matrix is to use a list of lists.
We can also use built-in functions to generate lists. In particular we will look at range() (because we will be using it later!). Range can take several different inputs and will return a list.
Step37: Similar to our list-slicing methods from before, we can define both a start and an end for our range. This will return a list that includes the start and excludes the end, just like a slice.
Step38: We can also specify a step size. This again has the same behavior as a slice.
Step39: Tuples
A tuple is a data type similar to a list in that it can hold different kinds of data types. The key difference here is that a tuple is immutable. We define a tuple by separating the elements we want to include by commas. It is conventional to surround a tuple with parentheses.
Step40: As mentioned before, tuples are immutable. You can't change any part of them without defining a new tuple.
Step41: You can slice tuples the same way that you slice lists!
Step42: And concatenate them the way that you would with strings!
Step43: We can 'pack' values together, creating a tuple (as above), or we can 'unpack' values from a tuple, taking them out.
Step44: Unpacking assigns each value of the tuple in order to each variable on the left hand side of the equals sign. Some functions, including user-defined functions, may return tuples, so we can use this to directly unpack them and access the values that we want.
Sets
A set is a collection of unordered, unique elements. It works almost exactly as you would expect a normal set of things in mathematics to work and is defined using braces ({}).
Step45: Note how any extra instances of the same item are removed in the final set. We can also create a set from a list, using the set() function.
Step46: Calling len() on a set will tell you how many elements are in it.
Step47: Because a set is unordered, we can't access individual elements using an index. We can, however, easily check for membership (to see if something is contained in a set) and take the unions and intersections of sets by using the built-in set functions.
Step48: Here we checked to see whether the string 'cats' was contained within our animal_set and it returned True, telling us that it is indeed in our set.
We can connect sets by using typical mathematical set operators, namely |, for union, and &, for intersection. Using | or & will return exactly what you would expect if you are familiar with sets in mathematics.
Step49: Pairing two sets together with | combines the sets, removing any repetitions to make every set element unique.
Step50: Pairing two sets together with & will calculate the intersection of both sets, returning a set that only contains what they have in common.
If you are interested in learning more about the built-in functions for sets, feel free to check out the documentation.
Dictionaries
Another essential data structure in Python is the dictionary. Dictionaries are defined with a combination of curly braces ({}) and colons (
Step51: After defining a dictionary, we can access any individual value by indicating its key in brackets.
Step52: We can also change the value associated with a given key
Step53: Adding a new key-value pair is as simple as defining it.
Step54: String Shenanigans
We already know that strings are generally used for text. We can use built-in operations to combine, split, and format strings easily, depending on our needs.
The + symbol indicates concatenation in string language. It will combine two strings into a longer string.
Step55: Strings are also indexed much in the same way that lists are.
Step56: Built-in objects and classes often have special functions associated with them that are called methods. We access these methods by using a period ('.'). We will cover objects and their associated methods more in another lecture!
Using string methods we can count instances of a character or group of characters.
Step57: We can also find the first instance of a character or group of characters in a string.
Step58: As well as replace characters in a string.
Step59: There are also some methods that are unique to strings. The function upper() will convert all characters in a string to uppercase, while lower() will convert all characters in a string to lowercase!
Step60: String Formatting
Using the format() method we can add in variable values and generally format our strings.
Step61: We use braces ({}) to indicate parts of the string that will be filled in later and we use the arguments of the format() function to provide the values to substitute. The numbers within the braces indicate the index of the value in the format() arguments.
See the format() documentation for additional examples.
If you need some quick and dirty formatting, you can instead use the % symbol, called the string formatting operator.
Step62: The % symbol basically cues Python to create a placeholder. Whatever character follows the % (in the string) indicates what sort of type the value put into the placeholder will have. This character is called a conversion type. Once the string has been closed, we need another % that will be followed by the values to insert. In the case of one value, you can just put it there. If you are inserting more than one value, they must be enclosed in a tuple.
Step63: In these examples, the %s indicates that Python should convert the values into strings. There are multiple conversion types that you can use to get more specific with the the formatting. See the string formatting documentation for additional examples and more complete details on use.
Logical Operators
Basic Logic
Logical operators deal with boolean values, as we briefly covered before. If you recall, a bool takes on one of two values, True or False (or $1$ or $0$). The basic logical statements that we can make are defined using the built-in comparators. These are == (equal), != (not equal), < (less than), > (greater than), <= (less than or equal to), and >= (greater than or equal to).
Step64: These comparators also work in conjunction with variables.
Step65: We can string these comparators together to make more complex logical statements using the logical operators or, and, and not.
Step66: The or operator performs a logical or calculation. This is an inclusive or, so if either component paired together by or is True, the whole statement will be True. The and statement only outputs True if all components that are anded together are True. Otherwise it will output False. The not statement simply inverts the truth value of whichever statement follows it. So a True statement will be evaluated as False when a not is placed in front of it. Similarly, a False statement will become True when a not is in front of it.
Say that we have two logical statements, or assertions, $P$ and $Q$. The truth table for the basic logical operators is as follows
Step67: Logical statements can be as simple or complex as we like, depending on what we need to express. Evaluating the above logical statement step by step we see that we are evaluating (True and True) or (False and not False). This becomes True or (False and True), subsequently becoming True or False, ultimately being evaluated as True.
Truthiness
Data types in Python have a fun characteristic called truthiness. What this means is that most built-in types will evaluate as either True or False when a boolean value is needed (such as with an if-statement). As a general rule, containers like strings, tuples, dictionaries, lists, and sets, will return True if they contain anything at all and False if they contain nothing.
Step68: And so on, for the other collections and containers. None also evaluates as False. The number 1 is equivalent to True and the number 0 is equivalent to False as well, in a boolean context.
If-statements
We can create segments of code that only execute if a set of conditions is met. We use if-statements in conjunction with logical statements in order to create branches in our code.
An if block gets entered when the condition is considered to be True. If condition is evaluated as False, the if block will simply be skipped unless there is an else block to accompany it. Conditions are made using either logical operators or by using the truthiness of values in Python. An if-statement is defined with a colon and a block of indented text.
Step69: Because in this example i = 4 and the if-statement is only looking for whether i is equal to 5, the print statement will never be executed. We can add in an else statement to create a contingency block of code in case the condition in the if-statement is not evaluated as True.
Step70: We can implement other branches off of the same if-statement by using elif, an abbreviation of "else if". We can include as many elifs as we like until we have exhausted all the logical branches of a condition.
Step71: You can also nest if-statements within if-statements to check for further conditions.
Step72: Remember that we can group multiple conditions together by using the logical operators!
Step73: You can use the logical comparators to compare strings!
Step74: As with other data types, == will check for whether the two things on either side of it have the same value. In this case, we compare whether the value of the strings are the same. Using > or < or any of the other comparators is not quite so intuitive, however, so we will stay from using comparators with strings in this lecture. Comparators will examine the lexicographical order of the strings, which might be a bit more in-depth than you might like.
Some built-in functions return a boolean value, so they can be used as conditions in an if-statement. User-defined functions can also be constructed so that they return a boolean value. This will be covered later with function definition!
The in keyword is generally used to check membership of a value within another value. We can check membership in the context of an if-statement and use it to output a truth value.
Step75: Here we use in to check whether the variable my_string contains any particular letters. We will later use in to iterate through lists!
Loop Structures
Loop structures are one of the most important parts of programming. The for loop and the while loop provide a way to run a block of code repeatedly. A while loop will keep iterating as long as a certain condition is satisfied. If at any point after an iteration that condition is no longer satisfied, the loop terminates. A for loop will iterate over a sequence of values and terminate when the sequence has ended. You can instead include conditions within the for loop to decide whether it should terminate early or you could simply let it run its course.
Step76: With while loops we need to make sure that something actually changes from iteration to iteration so that that the loop actually terminates. In this case, we use the shorthand i -= 1 (short for i = i - 1) so that the value of i gets smaller with each iteration. Eventually i will be reduced to 0, rendering the condition False and exiting the loop.
A for loop iterates a set number of times, determined when you state the entry into the loop. In this case we are iterating over the list returned from range(). The for loop selects a value from the list, in order, and temporarily assigns the value of i to it so that operations can be performed with the value.
Step77: Note that in this for loop we use the in keyword. Use of the in keyword is not limited to checking for membership as in the if-statement example. You can iterate over any collection with a for loop by using the in keyword.
In this next example, we will iterate over a set because we want to check for containment and add to a new set.
Step78: There are two statements that are very helpful in dealing with both for and while loops. These are break and continue. If break is encountered at any point while a loop is executing, the loop will immediately end.
Step79: The continue statement will tell the loop to immediately end this iteration and continue onto the next iteration of the loop.
Step80: This loop skips printing the number $3$ because of the continue statement that executes when we enter the if-statement. The code never sees the command to print the number $3$ because it has already moved to the next iteration. The break and continue statements are further tools to help you control the flow of your loops and, as a result, your code.
The variable that we use to iterate over a loop will retain its value when the loop exits. Similarly, any variables defined within the context of the loop will continue to exist outside of it.
Step81: We can also iterate over a dictionary!
Step82: If we just iterate over a dictionary without doing anything else, we will only get the keys. We can either use the keys to get the values, like so
Step83: Or we can use the iteritems() function to get both key and value at the same time.
Step85: The iteritems() function creates a tuple of each key-value pair and the for loop stores unpacks that tuple into key, value on each separate execution of the loop!
Functions
A function is a reusable block of code that you can call repeatedly to make calculations, output data, or really do anything that you want. This is one of the key aspects of using a programming language. To add to the built-in functions in Python, you can define your own!
Step86: Functions are defined with def, a function name, a list of parameters, and a colon. Everything indented below the colon will be included in the definition of the function.
We can have our functions do anything that you can do with a normal block of code. For example, our hello_world() function prints a string every time it is called. If we want to keep a value that a function calculates, we can define the function so that it will return the value we want. This is a very important feature of functions, as any variable defined purely within a function will not exist outside of it.
Step87: The scope of a variable is the part of a block of code where that variable is tied to a particular value. Functions in Python have an enclosed scope, making it so that variables defined within them can only be accessed directly within them. If we pass those values to a return statement we can get them out of the function. This makes it so that the function call returns values so that you can store them in variables that have a greater scope.
In this case specifically, including a return statement allows us to keep the string value that we define in the function.
Step89: Just as we can get values out of a function, we can also put values into a function. We do this by defining our function with parameters.
Step92: In this example we only had one parameter for our function, x. We can easily add more parameters, separating everything with a comma.
Step93: If we want to, we can define a function so that it takes an arbitrary number of parameters. We tell Python that we want this by using an asterisk (*).
Step94: The time to use *args as a parameter for your function is when you do not know how many values may be passed to it, as in the case of our sum function. The asterisk in this case is the syntax that tells Python that you are going to pass an arbitrary number of parameters into your function. These parameters are stored in the form of a tuple.
Step97: We can put as many elements into the args tuple as we want to when we call the function. However, because args is a tuple, we cannot modify it after it has been created.
The args name of the variable is purely by convention. You could just as easily name your parameter *vars or *things. You can treat the args tuple like you would any other tuple, easily accessing arg's values and iterating over it, as in the above sum_values(*args) function.
Our functions can return any data type. This makes it easy for us to create functions that check for conditions that we might want to monitor.
Here we define a function that returns a boolean value. We can easily use this in conjunction with if-statements and other situations that require a boolean.
Step99: This above function returns an ordered pair of the input parameters, stored as a tuple.
Step100: And that one calculates the slope between two points! | Python Code:
# This is a comment
# These lines of code will not change any values
# Anything following the first # is not run as code
Explanation: Introduction to Python
by Maxwell Margenot
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
All of the coding that you will do on the Quantopian platform will be in Python. It is also just a good, jack-of-all-trades language to know! Here we will provide you with the basics so that you can feel confident going through our other lectures and understanding what is happening.
Code Comments
A comment is a note made by a programmer in the source code of a program. Its purpose is to clarify the source code and make it easier for people to follow along with what is happening. Anything in a comment is generally ignored when the code is actually run, making comments useful for including explanations and reasoning as well as removing specific lines of code that you may be unsure about. Comments in Python are created by using the pound symbol (# Insert Text Here). Including a # in a line of code will comment out anything that follows it.
End of explanation
This is a special string
Explanation: You may hear text enclosed in triple quotes ( Insert Text Here ) referred to as multi-line comments, but this is not entirely accurate. This is a special type of string (a data type we will cover), called a docstring, used to explain the purpose of a function.
End of explanation
my_integer = 50
print my_integer, type(my_integer)
Explanation: Make sure you read the comments within each code cell (if they are there). They will provide more real-time explanations of what is going on as you look at each line of code.
Variables
Variables provide names for values in programming. If you want to save a value for later or repeated use, you give the value a name, storing the contents in a variable. Variables in programming work in a fundamentally similar way to variables in algebra, but in Python they can take on various different data types.
The basic variable types that we will cover in this section are integers, floating point numbers, booleans, and strings.
An integer in programming is the same as in mathematics, a round number with no values after the decimal point. We use the built-in print function here to display the values of our variables as well as their types!
End of explanation
one = 1
print One
Explanation: Variables, regardless of type, are assigned by using a single equals sign (=). Variables are case-sensitive, so any variation in the capitalization of a variable name will reference a different variable entirely.
End of explanation
my_float = 1.0
print my_float, type(my_float)
my_float = float(1)
print my_float, type(my_float)
Explanation: A floating point number, or a float is a fancy name for a real number (again as in mathematics). To define a float, we need to either include a decimal point or specify that the value is a float.
End of explanation
my_int = int(3.14159)
print my_int, type(my_int)
Explanation: A variable of type float will not round the number that you store in it, while a variable of type integer will. This makes floats more suitable for mathematical calculations where you want more than just integers.
Note that as we used the float() function to force a number to be considered a float, we can use the int() function to force a number to be considered an int.
End of explanation
my_string = 'This is a string with single quotes'
print my_string
my_string = "This is a string with double quotes"
print my_string
Explanation: The int() function will also truncate any digits that a number may have after the decimal point!
Strings allow you to include text as a variable to operate on. They are defined using either single quotes ('') or double quotes ("").
End of explanation
my_string = '"Jabberwocky", by Lewis Carroll'
print my_string
my_string = "'Twas brillig, and the slithy toves / Did gyre and gimble in the wabe;"
print my_string
Explanation: Both are allowed so that we can include apostrophes or quotation marks in a string if we so choose.
End of explanation
my_bool = True
print my_bool, type(my_bool)
Explanation: Booleans, or bools are binary variable types. A bool can only take on one of two values, these being True or False. There is much more to this idea of truth values when it comes to programming, which we cover later in the Logical Operators of this notebook.
End of explanation
print 'Addition: ', 2 + 2
print 'Subtraction: ', 7 - 4
print 'Multiplication: ', 2 * 5
print 'Division: ', 10 / 2
print 'Exponentiation: ', 3**2
Explanation: There are many more data types that you can assign as variables in Python, but these are the basic ones! We will cover a few more later as we move through this tutorial.
Basic Math
Python has a number of built-in math functions. These can be extended even further by importing the math package or by including any number of other calculation-based packages.
All of the basic arithmetic operations are supported: +, -, /, and *. You can create exponents by using ** and modular arithmetic is introduced with the mod operator, %.
End of explanation
print 'Modulo: ', 15 % 4
Explanation: If you are not familiar with the mod operator, it operates like a remainder function. If we type $15 \ \% \ 4$, it will return the remainder after dividing $15$ by $4$.
End of explanation
first_integer = 4
second_integer = 5
print first_integer * second_integer
Explanation: Mathematical functions also work on variables!
End of explanation
first_integer = 11
second_integer = 3
print first_integer / second_integer
first_number = 11.0
second_number = 3.0
print first_number / second_number
Explanation: Make sure that your variables are floats if you want to have decimal points in your answer. If you perform math exclusively with integers, you get an integer. Including any float in the calculation will make the result a float.
End of explanation
import math
Explanation: Python has a few built-in math functions. The most notable of these are:
abs()
round()
max()
min()
sum()
These functions all act as you would expect, given their names. Calling abs() on a number will return its absolute value. The round() function will round a number to a specified number of decimal places (the default is $0$). Calling max() or min() on a collection of numbers will return, respectively, the maximum or minimum value in the collection. Calling sum() on a collection of numbers will add them all up. If you're not familiar with how collections of values in Python work, don't worry! We will cover collections in-depth in the next section.
Additional math functionality can be added in with the math package.
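As a quick extra illustration (this example is not part of the original lecture), the built-in functions listed above behave exactly as described:
print abs(-7)              # 7
print round(3.14159, 2)    # 3.14
print max(4, 7, 2)         # 7
print min(4, 7, 2)         # 2
print sum([1, 2, 3, 4])    # 10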
End of explanation
print 'Pi: ', math.pi
print "Euler's Constant: ", math.e
Explanation: The math library adds a long list of new mathematical functions to Python. Feel free to check out the documentation for the full list and details. It includes some mathematical constants
End of explanation
print 'Cosine of pi: ', math.cos(math.pi)
Explanation: As well as some commonly used math functions
End of explanation
my_list = [1, 2, 3]
print my_list
Explanation: Collections
Lists
A list in Python is an ordered collection of objects that can contain any data type. We define a list using brackets ([]).
End of explanation
print my_list[0]
print my_list[2]
Explanation: We can access and index the list by using brackets as well. In order to select an individual element, simply type the list name followed by the index of the item you are looking for in brackets.
End of explanation
print 'The first, second, and third list elements: ', my_list[0], my_list[1], my_list[2]
print 'Accessing outside the list bounds causes an error: ', my_list[3]
Explanation: Indexing in Python starts from $0$. If you have a list of length $n$, the first element of the list is at index $0$, the second element is at index $1$, and so on and so forth. The final element of the list will be at index $n-1$. Be careful! Trying to access a non-existent index will cause an error.
End of explanation
print len(my_list)
Explanation: We can see the number of elements in a list by calling the len() function.
End of explanation
print my_list
my_list[0] = 42
print my_list
Explanation: We can update and change a list by accessing an index and assigning a new value.
End of explanation
my_string = "Strings never change"
my_string[0] = 'Z'
Explanation: This is fundamentally different from how strings are handled. A list is mutable, meaning that you can change its elements in place. Some data types, like strings, are immutable, meaning you cannot change them at all. Once a string or other immutable data type has been created, it cannot be directly modified without creating an entirely new object.
End of explanation
my_list_2 = ['one', 'two', 'three']
print my_list_2
Explanation: As we stated before, a list can contain any data type. Thus, lists can also contain strings.
End of explanation
my_list_3 = [True, 'False', 42]
Explanation: Lists can also contain multiple different data types at once!
End of explanation
my_list_4 = my_list + my_list_2 + my_list_3
print my_list_4
Explanation: If you want to put two lists together, they can be combined with a + symbol.
End of explanation
my_list = ['friends', 'romans', 'countrymen', 'lend', 'me', 'your', 'ears']
Explanation: In addition to accessing individual elements of a list, we can access groups of elements through slicing.
End of explanation
print my_list[2:4]
Explanation: Slicing
We use the colon (:) to slice lists.
End of explanation
print my_list[1:]
Explanation: Using : we can select a group of elements in the list starting from the first element indicated and going up to (but not including) the last element indicated.
We can also select everything after a certain point
End of explanation
print my_list[:4]
Explanation: And everything before a certain point
End of explanation
print my_list[-1]
Explanation: Using negative numbers will count from the end of the indices instead of from the beginning. For example, an index of -1 indicates the last element of the list.
End of explanation
print my_list[0:7:2]
Explanation: You can also add a third component to slicing. Instead of simply indicating the first and final parts of your slice, you can specify the step size that you want to take. So instead of taking every single element, you can take every other element.
End of explanation
print my_list[::2]
Explanation: Here we have selected the entire list (because 0:7 will yield elements 0 through 6) and we have selected a step size of 2. So this will spit out element 0, element 2, element 4, and so on through the end of the selection. We can skip indicating the beginning and end of our slice, only indicating the step, if we like.
End of explanation
print my_list[:]
Explanation: Lists implicitly select the beginning and end of the list when not otherwise specified.
End of explanation
print my_list[::-1]
Explanation: With a negative step size we can even reverse the list!
End of explanation
b = 10
my_list = range(b)
print my_list
Explanation: Python does not have native matrices, but with lists we can produce a working facsimile. Other packages, such as numpy, add matrices as a separate data type, but in base Python the best way to create a matrix is to use a list of lists.
We can also use built-in functions to generate lists. In particular we will look at range() (because we will be using it later!). Range can take several different inputs and will return a list.
End of explanation
a = 0
b = 10
my_list = range(a, b)
print my_list
Explanation: Similar to our list-slicing methods from before, we can define both a start and an end for our range. This will return a list that includes the start and excludes the end, just like a slice.
End of explanation
a = 0
b = 10
step = 2
my_list = range(a, b, step)
print my_list
Explanation: We can also specify a step size. This again has the same behavior as a slice.
End of explanation
my_tuple = 'I', 'have', 30, 'cats'
print my_tuple
my_tuple = ('I', 'have', 30, 'cats')
print my_tuple
Explanation: Tuples
A tuple is a data type similar to a list in that it can hold different kinds of data types. The key difference here is that a tuple is immutable. We define a tuple by separating the elements we want to include by commas. It is conventional to surround a tuple with parentheses.
End of explanation
my_tuple[3] = 'dogs' # Attempts to change the 'cats' value stored in the the tuple to 'dogs'
Explanation: As mentioned before, tuples are immutable. You can't change any part of them without defining a new tuple.
End of explanation
print my_tuple[1:3]
Explanation: You can slice tuples the same way that you slice lists!
End of explanation
my_other_tuple = ('make', 'that', 50)
print my_tuple + my_other_tuple
Explanation: And concatenate them the way that you would with strings!
End of explanation
str_1, str_2, int_1 = my_other_tuple
print str_1, str_2, int_1
Explanation: We can 'pack' values together, creating a tuple (as above), or we can 'unpack' values from a tuple, taking them out.
End of explanation
things_i_like = {'dogs', 7, 'the number 4', 4, 4, 4, 42, 'lizards', 'man I just LOVE the number 4'}
print things_i_like, type(things_i_like)
Explanation: Unpacking assigns each value of the tuple in order to each variable on the left hand side of the equals sign. Some functions, including user-defined functions, may return tuples, so we can use this to directly unpack them and access the values that we want.
Sets
A set is a collection of unordered, unique elements. It works almost exactly as you would expect a normal set of things in mathematics to work and is defined using braces ({}).
End of explanation
animal_list = ['cats', 'dogs', 'dogs', 'dogs', 'lizards', 'sponges', 'cows', 'bats', 'sponges']
animal_set = set(animal_list)
print animal_set # Removes all extra instances from the list
Explanation: Note how any extra instances of the same item are removed in the final set. We can also create a set from a list, using the set() function.
End of explanation
print len(animal_set)
Explanation: Calling len() on a set will tell you how many elements are in it.
End of explanation
'cats' in animal_set # Here we check for membership using the `in` keyword.
Explanation: Because a set is unordered, we can't access individual elements using an index. We can, however, easily check for membership (to see if something is contained in a set) and take the unions and intersections of sets by using the built-in set functions.
End of explanation
print animal_set | things_i_like # You can also write things_i_like | animal_set with no difference
Explanation: Here we checked to see whether the string 'cats' was contained within our animal_set and it returned True, telling us that it is indeed in our set.
We can connect sets by using typical mathematical set operators, namely |, for union, and &, for intersection. Using | or & will return exactly what you would expect if you are familiar with sets in mathematics.
End of explanation
print animal_set & things_i_like # You can also write things_i_like & animal_set with no difference
Explanation: Pairing two sets together with | combines the sets, removing any repetitions to make every set element unique.
End of explanation
my_dict = {"High Fantasy": ["Wheel of Time", "Lord of the Rings"],
"Sci-fi": ["Book of the New Sun", "Neuromancer", "Snow Crash"],
"Weird Fiction": ["At the Mountains of Madness", "The House on the Borderland"]}
Explanation: Pairing two sets together with & will calculate the intersection of both sets, returning a set that only contains what they have in common.
If you are interested in learning more about the built-in functions for sets, feel free to check out the documentation.
Dictionaries
Another essential data structure in Python is the dictionary. Dictionaries are defined with a combination of curly braces ({}) and colons (:). The braces define the beginning and end of a dictionary and the colons indicate key-value pairs. A dictionary is essentially a set of key-value pairs. The key of any entry must be an immutable data type. This makes both strings and tuples candidates. Keys can be both added and deleted.
In the following example, we have a dictionary composed of key-value pairs where the key is a genre of fiction (string) and the value is a list of books (list) within that genre. Since a collection is still considered a single entity, we can use one to collect multiple variables or values into one key-value pair.
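The paragraph above also mentions that keys can be deleted; since deletion is not demonstrated in the cells that follow, here is a small extra example on a throwaway dictionary (not part of the original notebook):
scratch_dict = {'spam': 1, 'eggs': 2}   # a made-up dictionary just for this illustration
del scratch_dict['spam']                # del removes the key-value pair entirely
print scratch_dict                      # {'eggs': 2}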
End of explanation
print my_dict["Sci-fi"]
Explanation: After defining a dictionary, we can access any individual value by indicating its key in brackets.
End of explanation
my_dict["Sci-fi"] = "I can't read"
print my_dict["Sci-fi"]
Explanation: We can also change the value associated with a given key
End of explanation
my_dict["Historical Fiction"] = ["Pillars of the Earth"]
print my_dict["Historical Fiction"]
print my_dict
Explanation: Adding a new key-value pair is as simple as defining it.
End of explanation
first_string = '"Beware the Jabberwock, my son! /The jaws that bite, the claws that catch! /'
second_string = 'Beware the Jubjub bird, and shun /The frumious Bandersnatch!"/'
third_string = first_string + second_string
print third_string
Explanation: String Shenanigans
We already know that strings are generally used for text. We can use built-in operations to combine, split, and format strings easily, depending on our needs.
The + symbol indicates concatenation in string language. It will combine two strings into a longer string.
End of explanation
my_string = 'Supercalifragilisticexpialidocious'
print 'The first letter is: ', my_string[0] # Uppercase S
print 'The last letter is: ', my_string[-1] # lowercase s
print 'The second to last letter is: ', my_string[-2] # lowercase u
print 'The first five characters are: ', my_string[0:5] # Remember: slicing doesn't include the final element!
print 'Reverse it!: ', my_string[::-1]
Explanation: Strings are also indexed much in the same way that lists are.
End of explanation
print 'Count of the letter i in Supercalifragilisticexpialidocious: ', my_string.count('i')
print 'Count of "li" in the same word: ', my_string.count('li')
Explanation: Built-in objects and classes often have special functions associated with them that are called methods. We access these methods by using a period ('.'). We will cover objects and their associated methods more in another lecture!
Using string methods we can count instances of a character or group of characters.
End of explanation
print 'The first time i appears is at index: ', my_string.find('i')
Explanation: We can also find the first instance of a character or group of characters in a string.
End of explanation
print "All i's are now a's: ", my_string.replace('i', 'a')
print "It's raining cats and dogs".replace('dogs', 'more cats')
Explanation: As well as replace characters in a string.
End of explanation
my_string = "I can't hear you"
print my_string.upper()
my_string = "I said HELLO"
print my_string.lower()
Explanation: There are also some methods that are unique to strings. The function upper() will convert all characters in a string to uppercase, while lower() will convert all characters in a string to lowercase!
End of explanation
my_string = "{0} {1}".format('Marco', 'Polo')
print my_string
my_string = "{1} {0}".format('Marco', 'Polo')
print my_string
Explanation: String Formatting
Using the format() method we can add in variable values and generally format our strings.
End of explanation
print 'insert %s here' % 'value'
Explanation: We use braces ({}) to indicate parts of the string that will be filled in later and we use the arguments of the format() function to provide the values to substitute. The numbers within the braces indicate the index of the value in the format() arguments.
See the format() documentation for additional examples.
If you need some quick and dirty formatting, you can instead use the % symbol, called the string formatting operator.
End of explanation
print 'There are %s cats in my %s' % (13, 'apartment')
Explanation: The % symbol basically cues Python to create a placeholder. Whatever character follows the % (in the string) indicates what sort of type the value put into the placeholder will have. This character is called a conversion type. Once the string has been closed, we need another % that will be followed by the values to insert. In the case of one value, you can just put it there. If you are inserting more than one value, they must be enclosed in a tuple.
End of explanation
print 5 == 5
print 5 > 5
Explanation: In these examples, the %s indicates that Python should convert the values into strings. There are multiple conversion types that you can use to get more specific with the formatting. See the string formatting documentation for additional examples and more complete details on use.
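For instance (an extra illustration, not from the original notebook), %d inserts an integer and %f a floating point number with a chosen precision:
print 'There are %d cats and %.2f kilograms of catnip' % (13, 1.5)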
Logical Operators
Basic Logic
Logical operators deal with boolean values, as we briefly covered before. If you recall, a bool takes on one of two values, True or False (or $1$ or $0$). The basic logical statements that we can make are defined using the built-in comparators. These are == (equal), != (not equal), < (less than), > (greater than), <= (less than or equal to), and >= (greater than or equal to).
End of explanation
m = 2
n = 23
print m < n
Explanation: These comparators also work in conjunction with variables.
End of explanation
statement_1 = 10 > 2
statement_2 = 4 <= 6
print "Statement 1 truth value: {0}".format(statement_1)
print "Statement 2 truth value: {0}".format(statement_2)
print "Statement 1 and Statement 2: {0}".format(statement_1 and statement_2)
Explanation: We can string these comparators together to make more complex logical statements using the logical operators or, and, and not.
End of explanation
print ((2 < 3) and (3 > 0)) or ((5 > 6) and not (4 < 2))
Explanation: The or operator performs a logical or calculation. This is an inclusive or, so if either component paired together by or is True, the whole statement will be True. The and statement only outputs True if all components that are anded together are True. Otherwise it will output False. The not statement simply inverts the truth value of whichever statement follows it. So a True statement will be evaluated as False when a not is placed in front of it. Similarly, a False statement will become True when a not is in front of it.
Say that we have two logical statements, or assertions, $P$ and $Q$. The truth table for the basic logical operators is as follows:
| P | Q | not P| P and Q | P or Q|
|:-----:|:-----:|:---:|:---:|:---:|
| True | True | False | True | True |
| False | True | True | False | True |
| True | False | False | False | True |
| False | False | True | False | False |
We can string multiple logical statements together using the logical operators.
End of explanation
# Similar to how float() and int() work, bool() forces a value to be considered a boolean!
print bool('')
print bool('I have character!')
print bool([])
print bool([1, 2, 3])
Explanation: Logical statements can be as simple or complex as we like, depending on what we need to express. Evaluating the above logical statement step by step we see that we are evaluating (True and True) or (False and not False). This becomes True or (False and True), subsequently becoming True or False, ultimately being evaluated as True.
Truthiness
Data types in Python have a fun characteristic called truthiness. What this means is that most built-in types will evaluate as either True or False when a boolean value is needed (such as with an if-statement). As a general rule, containers like strings, tuples, dictionaries, lists, and sets, will return True if they contain anything at all and False if they contain nothing.
End of explanation
# This is the basic format of an if statement. This is a vacuous example.
# The string "Condition" will always be evaluated as True because it is a
# non-empty string. The purpose of this code is to show the formatting of
# an if-statement.
if "Condition":
# This block of code will execute because the string is non-empty
# Everything on these indented lines
print True
else:
# So if the condition that we examined with if is in fact False
# This block of code will execute INSTEAD of the first block of code
# Everything on these indented lines
print False
# The else block here will never execute because "Condition" is a non-empty string.
i = 4
if i == 5:
print 'The variable i has a value of 5'
Explanation: And so on, for the other collections and containers. None also evaluates as False. The number 1 is equivalent to True and the number 0 is equivalent to False as well, in a boolean context.
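A few quick checks (added here as an extra illustration; they are not part of the original notebook) confirm this behaviour:
print bool(None)               # False
print bool(0), bool(1)         # False True
print bool({}), bool((1, 2))   # False True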
If-statements
We can create segments of code that only execute if a set of conditions is met. We use if-statements in conjunction with logical statements in order to create branches in our code.
An if block gets entered when the condition is considered to be True. If the condition is evaluated as False, the if block will simply be skipped unless there is an else block to accompany it. Conditions are made using either logical operators or the truthiness of values in Python. An if-statement is defined with a colon and a block of indented text.
End of explanation
i = 4
if i == 5:
print "All lines in this indented block are part of this block"
print 'The variable i has a value of 5'
else:
print "All lines in this indented block are part of this block"
print 'The variable i is not equal to 5'
Explanation: Because in this example i = 4 and the if-statement is only looking for whether i is equal to 5, the print statement will never be executed. We can add in an else statement to create a contingency block of code in case the condition in the if-statement is not evaluated as True.
End of explanation
i = 1
if i == 1:
print 'The variable i has a value of 1'
elif i == 2:
print 'The variable i has a value of 2'
elif i == 3:
print 'The variable i has a value of 3'
else:
print "I don't care what i is"
Explanation: We can implement other branches off of the same if-statement by using elif, an abbreviation of "else if". We can include as many elifs as we like until we have exhausted all the logical branches of a condition.
End of explanation
i = 10
if i % 2 == 0:
if i % 3 == 0:
print 'i is divisible by both 2 and 3! Wow!'
elif i % 5 == 0:
print 'i is divisible by both 2 and 5! Wow!'
else:
print 'i is divisible by 2, but not 3 or 5. Meh.'
else:
print 'I guess that i is an odd number. Boring.'
Explanation: You can also nest if-statements within if-statements to check for further conditions.
End of explanation
i = 5
j = 12
if i < 10 and j > 11:
print '{0} is less than 10 and {1} is greater than 11! How novel and interesting!'.format(i, j)
Explanation: Remember that we can group multiple conditions together by using the logical operators!
End of explanation
my_string = "Carthago delenda est"
if my_string == "Carthago delenda est":
print 'And so it was! For the glory of Rome!'
else:
print 'War elephants are TERRIFYING. I am staying home.'
Explanation: You can use the logical comparators to compare strings!
End of explanation
if 'a' in my_string or 'e' in my_string:
print 'Those are my favorite vowels!'
Explanation: As with other data types, == will check for whether the two things on either side of it have the same value. In this case, we compare whether the values of the strings are the same. Using > or < or any of the other comparators is not quite so intuitive, however, so we will stay away from using comparators with strings in this lecture. Comparators will examine the lexicographical order of the strings, which might be a bit more in-depth than you might like.
Some built-in functions return a boolean value, so they can be used as conditions in an if-statement. User-defined functions can also be constructed so that they return a boolean value. This will be covered later with function definition!
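For example (an extra illustration, not in the original notebook), the string method startswith() returns a boolean and can be dropped straight into an if-statement:
if my_string.startswith('Carthago'):
    print 'startswith() returned True, so this branch runs.'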
The in keyword is generally used to check membership of a value within another value. We can check membership in the context of an if-statement and use it to output a truth value.
End of explanation
i = 5
while i > 0: # We can write this as 'while i:' because 0 is False!
i -= 1
print 'I am looping! {0} more to go!'.format(i)
Explanation: Here we use in to check whether the variable my_string contains any particular letters. We will later use in to iterate through lists!
Loop Structures
Loop structures are one of the most important parts of programming. The for loop and the while loop provide a way to run a block of code repeatedly. A while loop keeps iterating as long as its condition is satisfied; if at any point after an iteration the condition is no longer satisfied, the loop terminates. A for loop will iterate over a sequence of values and terminate when the sequence has ended. You can instead include conditions within the for loop to decide whether it should terminate early or you could simply let it run its course.
End of explanation
for i in range(5):
print 'I am looping! I have looped {0} times!'.format(i + 1)
Explanation: With while loops we need to make sure that something actually changes from iteration to iteration so that the loop actually terminates. In this case, we use the shorthand i -= 1 (short for i = i - 1) so that the value of i gets smaller with each iteration. Eventually i will be reduced to 0, rendering the condition False and exiting the loop.
A for loop iterates a set number of times, determined by the sequence you iterate over, which you state when you enter the loop. In this case we are iterating over the list returned from range(). The for loop selects a value from the list, in order, and temporarily assigns it to i so that operations can be performed with the value.
End of explanation
my_list = {'cats', 'dogs', 'lizards', 'cows', 'bats', 'sponges', 'humans'} # Lists all the animals in the world
mammal_list = {'cats', 'dogs', 'cows', 'bats', 'humans'} # Lists all the mammals in the world
my_new_list = set()
for animal in my_list:
if animal in mammal_list:
# This adds any animal that is both in my_list and mammal_list to my_new_list
my_new_list.add(animal)
print my_new_list
Explanation: Note that in this for loop we use the in keyword. Use of the in keyword is not limited to checking for membership as in the if-statement example. You can iterate over any collection with a for loop by using the in keyword.
In this next example, we will iterate over a set because we want to check for containment and add to a new set.
End of explanation
i = 10
while True:
if i == 14:
break
i += 1 # This is shorthand for i = i + 1. It increments i with each iteration.
print i
for i in range(5):
if i == 2:
break
print i
Explanation: There are two statements that are very helpful in dealing with both for and while loops. These are break and continue. If break is encountered at any point while a loop is executing, the loop will immediately end.
End of explanation
i = 0
while i < 5:
i += 1
if i == 3:
continue
print i
Explanation: The continue statement will tell the loop to immediately end this iteration and continue onto the next iteration of the loop.
End of explanation
for i in range(5):
loop_string = 'I transcend the loop!'
print 'I am eternal! I am {0} and I exist everywhere!'.format(i)
print 'I persist! My value is {0}'.format(i)
print loop_string
Explanation: This loop skips printing the number $3$ because of the continue statement that executes when we enter the if-statement. The code never sees the command to print the number $3$ because it has already moved to the next iteration. The break and continue statements are further tools to help you control the flow of your loops and, as a result, your code.
The variable that we use to iterate over a loop will retain its value when the loop exits. Similarly, any variables defined within the context of the loop will continue to exist outside of it.
End of explanation
my_dict = {'firstname' : 'Inigo', 'lastname' : 'Montoya', 'nemesis' : 'Rugen'}
for key in my_dict:
print key
Explanation: We can also iterate over a dictionary!
End of explanation
for key in my_dict:
print my_dict[key]
Explanation: If we just iterate over a dictionary without doing anything else, we will only get the keys. We can either use the keys to get the values, like so:
End of explanation
for key, value in my_dict.iteritems():
print key, ':', value
Explanation: Or we can use the iteritems() function to get both key and value at the same time.
End of explanation
def hello_world():
    """ Prints Hello, world! """
    print 'Hello, world!'
hello_world()
for i in range(5):
hello_world()
Explanation: The iteritems() function creates a tuple of each key-value pair and the for loop unpacks that tuple into key, value on each iteration of the loop!
Functions
A function is a reusable block of code that you can call repeatedly to make calculations, output data, or really do anything that you want. This is one of the key aspects of using a programming language. To add to the built-in functions in Python, you can define your own!
End of explanation
def see_the_scope():
in_function_string = "I'm stuck in here!"
see_the_scope()
print in_function_string
Explanation: Functions are defined with def, a function name, a list of parameters, and a colon. Everything indented below the colon will be included in the definition of the function.
We can have our functions do anything that you can do with a normal block of code. For example, our hello_world() function prints a string every time it is called. If we want to keep a value that a function calculates, we can define the function so that it will return the value we want. This is a very important feature of functions, as any variable defined purely within a function will not exist outside of it.
End of explanation
def free_the_scope():
in_function_string = "Anything you can do I can do better!"
return in_function_string
my_string = free_the_scope()
print my_string
Explanation: The scope of a variable is the part of a block of code where that variable is tied to a particular value. Functions in Python have an enclosed scope, making it so that variables defined within them can only be accessed directly within them. If we pass those values to a return statement we can get them out of the function. This makes it so that the function call returns values so that you can store them in variables that have a greater scope.
In this case specifically, including a return statement allows us to keep the string value that we define in the function.
End of explanation
def multiply_by_five(x):
    """ Multiplies an input number by 5 """
    return x * 5
n = 4
print n
print multiply_by_five(n)
Explanation: Just as we can get values out of a function, we can also put values into a function. We do this by defining our function with parameters.
End of explanation
def calculate_area(length, width):
    """ Calculates the area of a rectangle """
    return length * width
l = 5
w = 10
print 'Area: ', calculate_area(l, w)
print 'Length: ', l
print 'Width: ', w
def calculate_volume(length, width, depth):
    """ Calculates the volume of a rectangular prism """
    return length * width * depth
Explanation: In this example we only had one parameter for our function, x. We can easily add more parameters, separating everything with a comma.
End of explanation
def sum_values(*args):
sum_val = 0
for i in args:
sum_val += i
return sum_val
print sum_values(1, 2, 3)
print sum_values(10, 20, 30, 40, 50)
print sum_values(4, 2, 5, 1, 10, 249, 25, 24, 13, 6, 4)
Explanation: If we want to, we can define a function so that it takes an arbitrary number of parameters. We tell Python that we want this by using an asterisk (*).
End of explanation
def test_args(*args):
print type(args)
test_args(1, 2, 3, 4, 5, 6)
Explanation: The time to use *args as a parameter for your function is when you do not know how many values may be passed to it, as in the case of our sum function. The asterisk in this case is the syntax that tells Python that you are going to pass an arbitrary number of parameters into your function. These parameters are stored in the form of a tuple.
End of explanation
def has_a_vowel(word):
    """ Checks to see whether a word contains a vowel.

    Only the conventional vowels (a, e, i, o, u) are checked;
    'y' and 'w' are not treated as vowels here.
    """
vowel_list = ['a', 'e', 'i', 'o', 'u']
for vowel in vowel_list:
if vowel in word:
return True
# If there is a vowel in the word, the function returns, preventing anything after this loop from running
return False
my_word = 'catnapping'
if has_a_vowel(my_word):
print 'How surprising, an english word contains a vowel.'
else:
print 'This is actually surprising.'
def point_maker(x, y):
    """ Groups x and y values into a point, technically a tuple """
    return x, y
Explanation: We can put as many elements into the args tuple as we want to when we call the function. However, because args is a tuple, we cannot modify it after it has been created.
The args name of the variable is purely by convention. You could just as easily name your parameter *vars or *things. You can treat the args tuple like you would any other tuple, easily accessing its values and iterating over it, as in the above sum_values(*args) function.
Our functions can return any data type. This makes it easy for us to create functions that check for conditions that we might want to monitor.
Here we define a function that returns a boolean value. We can easily use this in conjunction with if-statements and other situations that require a boolean.
End of explanation
a = point_maker(0, 10)
b = point_maker(5, 3)
def calculate_slope(point_a, point_b):
    """ Calculates the linear slope between two points """
    # cast to float so that Python 2 integer division does not floor the result
    return float(point_b[1] - point_a[1]) / (point_b[0] - point_a[0])
print "The slope between a and b is {0}".format(calculate_slope(a, b))
Explanation: This above function returns an ordered pair of the input parameters, stored as a tuple.
End of explanation
print "The slope-intercept form of the line between a and b, using point a, is: y - {0} = {2}(x - {1})".format(a[1], a[0], calculate_slope(a, b))
Explanation: And that one calculates the slope between two points!
End of explanation |
55 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Target distribution
We use the peaks function from matlab, modified so it is positive
Step2: Heat bath
The "heat bath" refers to a modified version of the distribution in which we vary the temperature.
Step3: SA algorithm
Step4: Run experiments | Python Code:
import jax
import jax.numpy as jnp
import numpy as np
import matplotlib
import seaborn as sns
import matplotlib.pyplot as plt
from IPython import display
try:
import probml_utils as pml
except:
%pip install -qq git+https://github.com/probml/probml-utils.git
import probml_utils as pml
from mpl_toolkits.mplot3d import Axes3D
Explanation: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book2/06/simulated_annealing_2d_demo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Simulated annealing on a 2d surface
Code is based on
https://krischer.github.io/seismo_live_build/html/Seismic%20Inverse%20Problems/Probabilistic%20Inversion/pi_simann_wrapper.html
and modified by murphyk@, Neoanarika@
End of explanation
def abs_peaks_func(x, y):
# in contrast to the peaks function: all negative values are multiplied by (-1)
return jnp.abs(
3.0 * (1 - x) ** 2 * jnp.exp(-(x**2) - (y + 1) ** 2)
- 10.0 * (x / 5 - x**3 - y**5) * jnp.exp(-(x**2) - y**2)
- 1.0 / 3 * jnp.exp(-((x + 1) ** 2) - y**2)
)
# Generate a pdf
# the following steps generate a pdf; this is equivalent to the function "peaks(n)" in matlab
n = 100 # number of dimension
pdf = np.zeros([n, n])
sigma = jnp.zeros([n, n])
# s = jnp.zeros([n, n])
x = -3.0
for i in range(0, n):
y = -3.0
for j in range(0, n):
pdf[j, i] = abs_peaks_func(x, y)
y = y + 6.0 / (n - 1)
x = x + 6.0 / (n - 1)
pdf = jnp.array(pdf)
pdf = pdf / pdf.max()
energy = -jnp.log(pdf)
def plot_3d_surface(x, y, pdf, title=None, fig_name=None):
fig = plt.figure()
ax = fig.gca(projection="3d")
x, y = jnp.meshgrid(x, y)
surf = ax.plot_surface(y, x, pdf, rstride=2, cstride=2, cmap=plt.cm.coolwarm, linewidth=0.1)
if fig_name:
pml.savefig(fig_name)
if title:
ax.set_title(title)
plt.tight_layout()
# Plot the 3D plot of pdf
# --------------------------
# %matplotlib inline
X = jnp.arange(0, 100 + 100.0 / (n - 1), 100.0 / (n - 1))
Y = jnp.arange(0, 100 + 100.0 / (n - 1), 100.0 / (n - 1))
plot_3d_surface(Y, X, pdf, fig_name="sim_anneal_2d_peaks.pdf")
# Plot the 3D plot of Energy function
# --------------------------
plot_3d_surface(X, Y, energy, fig_name="sim_anneal_2d_energy")
Explanation: Target distribution
We use the peaks function from matlab, modified so it is positive:
$$
p(x,y) \propto |3 (1-x)^2 e^{-x^2 - (y+1)^2}
- 10 (\frac{x}{5} - x^3 - y^5) e^{-x^2 -y^2}
- \frac{1}{3} e^{-(x+1)^2 - y^2} |
$$
End of explanation
temperature = 10 # initial temperature for the plots
stepT = 4 # how many steps should the Temperature be *0.2 for
x = np.arange(0, 100 + 100.0 / (n - 1), 100.0 / (n - 1))
y = np.arange(0, 100 + 100.0 / (n - 1), 100.0 / (n - 1))
for i in range(0, stepT):
sigma = np.exp(-(energy) / temperature)
sigma = sigma / sigma.max()
    ttl = "T={:0.2f}".format(temperature)
    temperature = temperature * 0.2
    # use the temperature that was actually used to compute sigma as the plot title
    plot_3d_surface(x, y, sigma, title=ttl, fig_name=f"sim_anneal_2d_cooled{i}")
Explanation: Heat bath
The "heat bath" refers to a modified version of the distribution in which we vary the temperature.
End of explanation
def sim_anneal(proposal="gaussian", sigma=10, seed=jax.random.PRNGKey(0)):
# jnp.random.seed(42)
seed1, seed2 = jax.random.split(seed)
x_start = jnp.array(
[
jnp.floor(jax.random.uniform(seed1, minval=0, maxval=100)),
jnp.floor(jax.random.uniform(seed2, minval=0, maxval=100)),
]
) # x_start
xcur = x_start.astype(int) # x current
n_samples = 300 # number of samples to keep
T = 1 # start temperature
alpha = 0.99 # cooling schedule
# list of visited points, temperatures, probabilities
x_hist = xcur # will be (N,2) array
prob_hist = []
temp_hist = []
nreject = 0
iis = 0 # number of accepted points
n_proposed_points = 0 # num proposed points
while n_proposed_points < n_samples:
_, seed = jax.random.split(seed)
n_proposed_points = n_proposed_points + 1
if proposal == "uniform":
seeds = jax.random.split(seed)
xnew = jnp.array(
[
jnp.floor(jax.random.uniform(seeds[0], minval=0, maxval=100)),
jax.random.uniform(seeds[1], minval=0, maxval=100),
]
)
# print(xnew)
elif proposal == "gaussian":
xnew = xcur + jax.random.normal(seed, shape=(2,)) * sigma
xnew = jnp.maximum(xnew, 0)
xnew = jnp.minimum(xnew, 99)
else:
raise ValueError("Unknown proposal")
xnew = xnew.astype(int)
# compare energies
Ecur = energy[xcur[0], xcur[1]]
Enew = energy[xnew[0], xnew[1]]
deltaE = Enew - Ecur
# print([n_proposed_points, xcur, xnew, Ecur, Enew, deltaE])
temp_hist.append(T)
T = alpha * T
p_accept = jnp.exp(-1.0 * deltaE / T)
# print(p_accept)
p_accept = min(1, p_accept)
test = jax.random.uniform(jax.random.split(seed)[0], minval=0, maxval=1)
# print(test)
if test <= p_accept:
xcur = xnew
iis = iis + 1
else:
nreject += 1
x_hist = jnp.vstack((x_hist, xcur))
prob_hist.append(pdf[xcur[0], xcur[1]])
n_proposed_points = n_proposed_points + 1
    print(f"n proposed {n_proposed_points}, n accepted {iis}, n rejected {nreject}")
return x_hist, prob_hist, temp_hist
Explanation: SA algorithm
End of explanation
pml.latexify(width_scale_factor=2, fig_height=1.5)
proposals = ["gaussian", "uniform"]
x_hist = {}
prob_hist = {}
temp_hist = {}
for proposal in proposals:
print(proposal)
x_hist[proposal], prob_hist[proposal], temp_hist[proposal] = sim_anneal(
proposal=proposal, seed=jax.random.PRNGKey(25)
)
for proposal in proposals:
plt.figure()
plt.plot(
temp_hist[proposal],
"r--",
label="temperature",
)
plt.plot(prob_hist[proposal], "g-", label="probability")
# pml.savefig(f"sim_anneal_2d_temp_vs_time_{proposal}.pdf")
plt.xlabel("iteration")
sns.despine()
plt.legend(bbox_to_anchor=(0.55, 0.35), fontsize=8)
pml.savefig(f"sim_anneal_2d_temp_and_prob_vs_time_{proposal}.pdf")
# Plot points visited
global_markersize = 5 if pml.is_latexify_enabled() else 10
step_markersize = 2 if pml.is_latexify_enabled() else 4
for proposal in proposals:
probs = prob_hist[proposal]
xa = x_hist[proposal]
fig = plt.figure()
ax = plt.gca()
contour = ax.imshow(pdf.transpose(), aspect="auto", extent=[0, 100, 100, 0], interpolation="none")
# contour = plt.contourf(pdf.transpose(), aspect="auto", extent=[0, 100, 100, 0])
fig.colorbar(contour, ax=ax)
    # Starting point with white circle
# ax.plot(xa[0, 0], xa[0, 1], "ro", markersize=10)
    # Global maximum with red circle
ind = np.unravel_index(np.argmax(pdf, axis=None), pdf.shape)
ax.plot(xa[:, 0], xa[:, 1], "w.", markersize=step_markersize) # Plot the steps with white +4
ax.plot(ind[0], ind[1], "ro", markersize=global_markersize, label="global maxima") # max point
ax.set_ylabel("$x2$")
ax.set_xlabel("$x1$")
# plt.legend(framealpha=0.5)
pml.savefig(f"sim_anneal_2d_samples_{proposal}.pdf")
Explanation: Run experiments
End of explanation |
56 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training a Generative Adversarial Network on MNIST
In this tutorial, we will train a Generative Adversarial Network (GAN) on the MNIST dataset. This is a large collection of 28x28 pixel images of handwritten digits. We will try to train a network to produce new images of handwritten digits.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Step1: To begin, let's import all the libraries we'll need and load the dataset (which comes bundled with Tensorflow).
Step2: Let's view some of the images to get an idea of what they look like.
Step3: Now we can create our GAN. Like in the last tutorial, it consists of two parts
Step4: Now to train it. As in the last tutorial, we write a generator to produce data. This time the data is coming from a dataset, which we loop over 100 times.
One other difference is worth noting. When training a conventional GAN, it is important to keep the generator and discriminator in balance throughout training. If either one gets too far ahead, it becomes very difficult for the other one to learn.
WGANs do not have this problem. In fact, the better the discriminator gets, the cleaner a signal it provides and the easier it becomes for the generator to learn. We therefore specify generator_steps=0.2 so that it will only take one step of training the generator for every five steps of training the discriminator. This tends to produce faster training and better results.
Step5: Let's generate some data and see how the results look. | Python Code:
!pip install --pre deepchem
import deepchem
deepchem.__version__
Explanation: Training a Generative Adversarial Network on MNIST
In this tutorial, we will train a Generative Adversarial Network (GAN) on the MNIST dataset. This is a large collection of 28x28 pixel images of handwritten digits. We will try to train a network to produce new images of handwritten digits.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
End of explanation
import deepchem as dc
import tensorflow as tf
from deepchem.models.optimizers import ExponentialDecay
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, Dense, Reshape
import matplotlib.pyplot as plot
import matplotlib.gridspec as gridspec
%matplotlib inline
mnist = tf.keras.datasets.mnist.load_data(path='mnist.npz')
images = mnist[0][0].reshape((-1, 28, 28, 1))/255
dataset = dc.data.NumpyDataset(images)
Explanation: To begin, let's import all the libraries we'll need and load the dataset (which comes bundled with Tensorflow).
End of explanation
def plot_digits(im):
plot.figure(figsize=(3, 3))
grid = gridspec.GridSpec(4, 4, wspace=0.05, hspace=0.05)
for i, g in enumerate(grid):
ax = plot.subplot(g)
ax.set_xticks([])
ax.set_yticks([])
ax.imshow(im[i,:,:,0], cmap='gray')
plot_digits(images)
Explanation: Let's view some of the images to get an idea of what they look like.
End of explanation
class DigitGAN(dc.models.WGAN):
def get_noise_input_shape(self):
return (10,)
def get_data_input_shapes(self):
return [(28, 28, 1)]
def create_generator(self):
return tf.keras.Sequential([
Dense(7*7*8, activation=tf.nn.relu),
Reshape((7, 7, 8)),
Conv2DTranspose(filters=16, kernel_size=5, strides=2, activation=tf.nn.relu, padding='same'),
Conv2DTranspose(filters=1, kernel_size=5, strides=2, activation=tf.sigmoid, padding='same')
])
def create_discriminator(self):
return tf.keras.Sequential([
Conv2D(filters=32, kernel_size=5, strides=2, activation=tf.nn.leaky_relu, padding='same'),
Conv2D(filters=64, kernel_size=5, strides=2, activation=tf.nn.leaky_relu, padding='same'),
Dense(1, activation=tf.math.softplus)
])
gan = DigitGAN(learning_rate=ExponentialDecay(0.001, 0.9, 5000))
Explanation: Now we can create our GAN. Like in the last tutorial, it consists of two parts:
The generator takes random noise as its input and produces output that will hopefully resemble the training data.
The discriminator takes a set of samples as input (possibly training data, possibly created by the generator), and tries to determine which are which.
This time we will use a different style of GAN called a Wasserstein GAN (or WGAN for short). In many cases, they are found to produce better results than conventional GANs. The main difference between the two is in the discriminator (often called a "critic" in this context). Instead of outputting the probability of a sample being real training data, it tries to learn how to measure the distance between the training distribution and generated distribution. That measure can then be directly used as a loss function for training the generator.
We use a very simple model. The generator uses a dense layer to transform the input noise into a 7x7 image with eight channels. That is followed by two convolutional layers that upsample it first to 14x14, and finally to 28x28.
The discriminator does roughly the same thing in reverse. Two convolutional layers downsample the image first to 14x14, then to 7x7. A final dense layer produces a single number as output. In the last tutorial we used a sigmoid activation to produce a number between 0 and 1 that could be interpreted as a probability. Since this is a WGAN, we instead use a softplus activation. It produces an unbounded positive number that can be interpreted as a distance.
End of explanation
def iterbatches(epochs):
for i in range(epochs):
for batch in dataset.iterbatches(batch_size=gan.batch_size):
yield {gan.data_inputs[0]: batch[0]}
gan.fit_gan(iterbatches(100), generator_steps=0.2, checkpoint_interval=5000)
Explanation: Now to train it. As in the last tutorial, we write a generator to produce data. This time the data is coming from a dataset, which we loop over 100 times.
One other difference is worth noting. When training a conventional GAN, it is important to keep the generator and discriminator in balance throughout training. If either one gets too far ahead, it becomes very difficult for the other one to learn.
WGANs do not have this problem. In fact, the better the discriminator gets, the cleaner a signal it provides and the easier it becomes for the generator to learn. We therefore specify generator_steps=0.2 so that it will only take one step of training the generator for every five steps of training the discriminator. This tends to produce faster training and better results.
End of explanation
plot_digits(gan.predict_gan_generator(batch_size=16))
Explanation: Let's generate some data and see how the results look.
End of explanation |
57 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p><font size="6"><b> Python rehearsal</b></font></p>
DS Data manipulation, analysis and visualization in Python
May/June, 2021
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
I measure air pressure
Step1: <div class="alert alert-info">
<b>REMEMBER</b>
Step2: <div class="alert alert-danger">
<b>DON'T</b>
Step5: <div class="alert alert-success">
<b>EXERCISE</b>
Step6: Instead of having the functions in a notebook, importing the function from a file can be done as importing a function from an installed package. Save the function barometric_formula in a file called barometric_formula.py and add the required import statement import math on top of the file. Next, run the following cell
Step7: <div class="alert alert-info">
<b>REMEMBER</b>
Step8: <div class="alert alert-warning">
<b>Notice</b>
Step9: I want to apply my function to each of these measurements
I want to calculate the barometric formula for each of these measured values.
Step10: <div class="alert alert-success">
<b>EXERCISE</b>
Step11: <div class="alert alert-success">
<b>EXERCISE</b>
Step12: The power of numpy
Step13: <div class="alert alert-info">
<b>REMEMBER</b>
Step14: <div class="alert alert-info">
<b>REMEMBER</b>
Step15: <div class="alert alert-info">
<b>REMEMBER</b>
Step16: You can use this as a filter to select elements of an array
Step17: or, also to change the values in the array corresponding to these conditions
Step18: Intermezzo
Step19: <div class="alert alert-success">
<b>EXERCISE</b>
Step20: <div class="alert alert-success">
<b>EXERCISE</b>
Step21: <div class="alert alert-success">
<b>EXERCISE</b>
Step22: <div class="alert alert-success">
<b>EXERCISE</b>
Step23: <div class="alert alert-success">
<b>EXERCISE</b>
Step24: I also have measurement locations
Step25: <div class="alert alert alert-success">
<b>EXERCISE</b>
Step26: I also measure temperature
Step27: Python dictionaries are a convenient way to store multiple types of data together, so we don't need too many different variables
Step28: But
Step29: when a table would be more appropriate... Pandas! | Python Code:
pressure_hPa = 1010 # hPa
Explanation: <p><font size="6"><b> Python rehearsal</b></font></p>
DS Data manipulation, analysis and visualization in Python
May/June, 2021
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
I measure air pressure
End of explanation
import math
# ...modules and libraries...
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>: Use meaningful variable names
</div>
I'm measuring at sea level, what would be the air pressure of this measured value on other altitudes?
I'm curious what the equivalent pressure would be on other altitudes...
The barometric formula, sometimes called the exponential atmosphere or isothermal atmosphere, is a formula used to model how the pressure (or density) of the air changes with altitude. The pressure drops approximately by 11.3 Pa per meter in first 1000 meters above sea level.
$$P=P_0 \cdot \exp \left[\frac{-g \cdot M \cdot h}{R \cdot T}\right]$$
see https://www.math24.net/barometric-formula/ or https://en.wikipedia.org/wiki/Atmospheric_pressure
where:
* $T$ = standard temperature, 288.15 (K)
* $R$ = universal gas constant, 8.3144598, (J/mol/K)
* $g$ = gravitational acceleration, 9.81 (m/s$^2$)
* $M$ = molar mass of Earth's air, 0.02896 (kg/mol)
and:
* $P_0$ = sea level pressure (hPa)
* $h$ = height above sea level (m)
Let's implement this...
To calculate the formula, I need the exponential operator. Pure Python provides a number of mathematical functions, e.g. https://docs.python.org/3.7/library/math.html#math.exp within the math library
End of explanation
standard_temperature = 288.15
gas_constant = 8.31446
gravit_acc = 9.81
molar_mass_earth = 0.02896
Explanation: <div class="alert alert-danger">
<b>DON'T</b>: <code>from os import *</code>. Just don't!
</div>
End of explanation
height = 2500
pressure_hPa * math.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))
# ...function/definition for barometric_formula...
def barometric_formula(pressure_sea_level, height=2500):
    """Apply barometric formula

    Apply the barometric formula to calculate the air pressure on a given height

    Parameters
    ----------
    pressure_sea_level : float
        pressure, measured as sea level
    height : float
        height above sea level (m)

    Notes
    ------
    see https://www.math24.net/barometric-formula/ or
    https://en.wikipedia.org/wiki/Atmospheric_pressure
    """
standard_temperature = 288.15
gas_constant = 8.3144598
gravit_acc = 9.81
molar_mass_earth = 0.02896
pressure_altitude = pressure_sea_level * math.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))
return pressure_altitude
barometric_formula(pressure_hPa, 2000)
barometric_formula(pressure_hPa)
# ...formula not valid above 11000m...
# barometric_formula(pressure_hPa, 12000)
def barometric_formula(pressure_sea_level, height=2500):
    """Apply barometric formula

    Apply the barometric formula to calculate the air pressure on a given height

    Parameters
    ----------
    pressure_sea_level : float
        pressure, measured as sea level
    height : float
        height above sea level (m)

    Notes
    ------
    see https://www.math24.net/barometric-formula/ or
    https://en.wikipedia.org/wiki/Atmospheric_pressure
    """
if height > 11000:
raise Exception("Barometric formula only valid for heights lower than 11000m above sea level")
standard_temperature = 288.15
gas_constant = 8.3144598
gravit_acc = 9.81
molar_mass_earth = 0.02896
pressure_altitude = pressure_sea_level * math.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))
return pressure_altitude
# ...combining logical statements...
height > 11000 or pressure_hPa < 9000
# ...load function from file...
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Calculate the equivalent air pressure at the altitude of 2500 m above sea level for our measured value of <code>pressure_hPa</code> (1010 hPa)</li>
</ul>
</div>
End of explanation
from barometric_formula import barometric_formula
Explanation: Instead of having the functions in a notebook, importing the function from a file can be done as importing a function from an installed package. Save the function barometric_formula in a file called barometric_formula.py and add the required import statement import math on top of the file. Next, run the following cell:
End of explanation
pressures_hPa = [1013, 1003, 1010, 1020, 1032, 993, 989, 1018, 889, 1001]
# ...check methods of lists... append vs insert
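# A possible fill-in for the placeholder above (not part of the original live demo):
# append() adds a value at the end, insert() places a value at a chosen index.
demo_list = list(pressures_hPa)   # work on a copy so the original data stays untouched
demo_list.append(1016)            # 1016 is added at the end
demo_list.insert(0, 1025)         # 1025 is placed at index 0
print(demo_list)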
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li>Write functions to prevent copy-pasting of code and maximize reuse</li>
<li>Add documentation to functions for your future self</li>
<li>Named arguments provide default values</li>
<li>Import functions from a file just as other modules</li>
</ul>
</div>
I measure air pressure multiple times
We can store these in a Python list:
End of explanation
# ...list is a container...
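# A possible fill-in for the placeholder above (not part of the original live demo):
# a list can mix data types, e.g. numbers, strings and booleans.
mixed_list = [1013, 'hPa', True, 3.14]
print(mixed_list)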
Explanation: <div class="alert alert-warning">
<b>Notice</b>:
<ul>
<li>A list is a general container, so it can hold mixed data types as well.</li>
</ul>
</div>
End of explanation
# ...for loop... dummy example
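# A possible fill-in for the placeholder above (not part of the original live demo):
for value in [1, 2, 3]:
    print(value)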
Explanation: I want to apply my function to each of these measurements
I want to calculate the barometric formula for each of these measured values.
End of explanation
for pressure in pressures_hPa:
print(barometric_formula(pressure, 3000))
# ...list comprehensions...
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Write a <code>for</code> loop that prints the adjusted value for altitude 3000m for each of the pressures in <code>pressures_hPa</code> </li>
</ul>
</div>
End of explanation
pressures_hPa_adjusted = [barometric_formula(pressure, 3000) for pressure in pressures_hPa]
pressures_hPa_adjusted
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Write a <code>for</code> loop as a list comprehension to calculate the adjusted value for altitude 3000m for each of the pressures in <code>pressures_hPa</code> and store these values in a new variable <code>pressures_hPa_adjusted</code></li>
</ul>
</div>
End of explanation
import numpy as np
pressures_hPa = [1013, 1003, 1010, 1020, 1032, 993, 989, 1018, 889, 1001]
np_pressures_hPa = np.array([1013, 1003, 1010, 1020, 1032, 993, 989, 1018, 889, 1001])
# ...slicing/subselecting is similar...
print(np_pressures_hPa[0], pressures_hPa[0])
Explanation: The power of numpy
End of explanation
# ...original function using numpy array instead of list... do both
np_pressures_hPa * math.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li> <code>[]</code> for accessing elements
<li> <code>[start:end:step]</code>
</ul>
</div>
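As a small extra illustration (not part of the original notebook), the [start:end:step] pattern works on numpy arrays just like on lists:
print(np_pressures_hPa[1:7:2])   # every other element from index 1 up to (not including) 7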
End of explanation
lots_of_pressures = np.random.uniform(990, 1040, 1000)
%timeit [barometric_formula(pressure, 3000) for pressure in list(lots_of_pressures)]
%timeit lots_of_pressures * np.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>: The operations do work on all elements of the array at the same time, you don't need a <strike>`for` loop</strike>
</div>
It is also a matter of calculation speed:
End of explanation
np_pressures_hPa
np_pressures_hPa > 1000
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>: for calculations, numpy outperforms python
</div>
Boolean indexing and filtering (!)
End of explanation
boolean_mask = np_pressures_hPa > 1000
np_pressures_hPa[boolean_mask]
Explanation: You can use this as a filter to select elements of an array:
End of explanation
boolean_mask = np_pressures_hPa < 900
np_pressures_hPa[boolean_mask] = 900
np_pressures_hPa
Explanation: or, also to change the values in the array corresponding to these conditions:
End of explanation
AR = np.random.randint(0, 20, 15)
AR
Explanation: Intermezzo: Exercises boolean indexing:
End of explanation
sum(AR > 10)
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Count the number of values in AR that are larger than 10
_Tip:_ You can count with True = 1 and False = 0 and take the sum of these values
</div>
End of explanation
AR[AR%2 == 0] = 0
AR
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Change all even numbers of `AR` into zero-values.
</div>
End of explanation
AR[1::2] = 30
AR
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Change all even positions of matrix AR into the value 30
</div>
End of explanation
AR2 = np.random.random(10)
AR2
np.sqrt(AR2[AR2 > np.percentile(AR2, 75)])
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Select all values above the 75th percentile of the following array AR2 and take the square root of these values
_Tip_: numpy provides a function `percentile` to calculate a given percentile
</div>
End of explanation
AR3 = np.array([-99., 2., 3., 6., 8, -99., 7., 5., 6., -99.])
AR3[np.isclose(AR3, -99)] = np.nan
AR3
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Convert all values -99 of the array AR3 into NaN-values
_Tip_: NaN values can be represented in float arrays as `np.nan`, and numpy provides a specialized function to compare float values, i.e. `np.isclose()`
</div>
End of explanation
location = 'Ghent - Sterre'
# ...check methods of strings... split, upper,...
locations = ['Ghent - Sterre', 'Ghent - Coupure', 'Ghent - Blandijn',
'Ghent - Korenlei', 'Ghent - Kouter', 'Ghent - Coupure',
'Antwerp - Groenplaats', 'Brussels- Grand place',
'Antwerp - Justitipaleis', 'Brussels - Tour & taxis']
Explanation: I also have measurement locations
End of explanation
[location.lower() for location in locations]
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Use a list comprehension to convert all locations to lower case.
_Tip:_ check the available methods of strings by writing: `location.` + TAB button
</div>
End of explanation
pressures_hPa = [1013, 1003, 1010, 1020, 1032, 993, 989, 1018, 889, 1001]
temperature_degree = [23, 20, 17, 8, 12, 5, 16, 22, -2, 16]
locations = ['Ghent - Sterre', 'Ghent - Coupure', 'Ghent - Blandijn',
'Ghent - Korenlei', 'Ghent - Kouter', 'Ghent - Coupure',
'Antwerp - Groenplaats', 'Brussels- Grand place',
'Antwerp - Justitipaleis', 'Brussels - Tour & taxis']
Explanation: I also measure temperature
End of explanation
measurement = {}
measurement['pressure_hPa'] = 1010
measurement['temperature'] = 23
measurement
# ...select on name, iterate over keys or items...
measurements = {'pressure_hPa': pressures_hPa,
'temperature_degree': temperature_degree,
'location': locations}
measurements
Explanation: Python dictionaries are a convenient way to store multiple types of data together, so you do not need too many separate variables:
End of explanation
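For instance, the placeholder comment above ("select on name, iterate over keys or items") could be filled in along these lines:
measurements['location']                    # select a single entry by its key (name)
for name, values in measurements.items():   # iterate over key/value pairs
    print(name, values[:3])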
for idx, pressure in enumerate(measurements['pressure_hPa']):
if measurements['location'][idx].startswith("Ghent") and \
measurements['temperature_degree'][idx]< 10:
print(barometric_formula(pressure, 3000))
Explanation: But: I want to apply my barometric function to measurements taken in Ghent when the temperature was below 10 degrees...
End of explanation
import pandas as pd
measurements = pd.DataFrame(measurements)
measurements
barometric_formula(measurements[(measurements["location"].str.contains("Ghent")) &
(measurements["temperature_degree"] < 10)]["pressure_hPa"], 3000)
Explanation: when a table would be more appropriate... Pandas!
End of explanation |
58 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Causal discovery with TIGRAMITE
TIGRAMITE is a time series analysis Python module. It allows you to reconstruct graphical models (conditional independence graphs) from discrete or continuously-valued time series based on the PCMCI framework and to create high-quality plots of the results.
PCMCI is described here
Step1: Prediction
Tigramite also contains a class tigramite.models.Prediction to perform predictions based on the sklearn models. The Prediction class includes a wrapper around run_pc_stable from the PCMCI class to perform predictor selection. Consider the following data generation process
Step2: We initialize the Prediction class with cond_ind_test=ParCorr(). Secondly, we choose sklearn.linear_model.LinearRegression() here for prediction. Last, we scale the data via data_transform. The class takes care of rescaling the data for prediction. The parameters train_indices and test_indices are used to divide the data up into a training set and test set. The test set is optional since new data can be supplied later. The training set is used to select predictors and fit the model.
Step3: Now, we estimate causal predictors using get_predictors for the target variable 2 taking into account a maximum past lag of tau_max. We use pc_alpha=None which optimizes the parameter based on the Akaike score. Note that the predictors are different for each prediction horizon. For example, at a prediction horizon of steps_ahead=1 we get the causal parents from the model plus some others
Step4: Note that get_predictors is based only on the first step of PCMCI and skips the MCI step, since correct false positive rates are not that relevant for prediction and the first step alone is faster. Now, we set steps_ahead=2 and get different predictors
Step5: These predictors now efficiently avoid overfitting in the following model fit. Here one can specify whether multiple target variables should be fit at once (assuming that predictors have been estimated beforehand for all of them).
Step6: Now we are ready to predict the target variable at the test samples
Step7: We can also predict other new data by supplying a new dataframe to new_data. Because we have more samples here, the estimate of NRMSE is more reliable.
Step8: This prediction is much better than using all past variables which leads to overfitting
Step9: Previously, we rescaled the data before fitting, which requires us to also rescale the test data. We can also leave the data unscaled
Step10: Note the different scales on the x- and y-axes.
Last, let's try a Gaussian process regressor in conjunction with a GPDC predictor selection. Here we supply cond_ind_params and prediction_model_params because the sklearn defaults don't work well here. | Python Code:
# Imports
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
## use `%matplotlib notebook` for interactive figures
# plt.style.use('ggplot')
import sklearn
import tigramite
from tigramite import data_processing as pp
from tigramite.toymodels import structural_causal_processes as toys
from tigramite import plotting as tp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests import ParCorr, GPDC, CMIknn, CMIsymb
from tigramite.models import LinearMediation, Prediction
Explanation: Causal discovery with TIGRAMITE
TIGRAMITE is a time series analysis Python module. It allows you to reconstruct graphical models (conditional independence graphs) from discrete or continuously-valued time series based on the PCMCI framework and to create high-quality plots of the results.
PCMCI is described here:
J. Runge, P. Nowack, M. Kretschmer, S. Flaxman, D. Sejdinovic,
Detecting and quantifying causal associations in large nonlinear time series datasets. Sci. Adv. 5, eaau4996 (2019)
https://advances.sciencemag.org/content/5/11/eaau4996
For further versions of PCMCI (e.g., PCMCI+, LPCMCI, etc.), see the corresponding tutorials.
This tutorial explains how to use PCMCI to obtain optimal predictors. See the following paper for theoretical background:
Runge, Jakob, Reik V. Donner, and Jürgen Kurths. 2015. “Optimal Model-Free Prediction from Multivariate Time Series.” Phys. Rev. E 91 (5): 052909. https://doi.org/10.1103/PhysRevE.91.052909.
Last, the following Nature Communications Perspective paper provides an overview of causal inference methods in general, identifies promising applications, and discusses methodological challenges (exemplified in Earth system sciences):
https://www.nature.com/articles/s41467-019-10105-3
End of explanation
np.random.seed(42)
T = 150
links_coeffs = {0: [((0, -1), 0.6)],
1: [((1, -1), 0.6), ((0, -1), 0.8)],
2: [((2, -1), 0.5), ((1, -1), 0.7)], # ((0, -1), c)],
}
N = len(links_coeffs)
data, true_parents = toys.var_process(links_coeffs, T=T)
dataframe = pp.DataFrame(data, var_names = [r'$X^0$', r'$X^1$', r'$X^2$'])
Explanation: Prediction
Tigramite also contains a class tigramite.models.Prediction to perform predictions based on the sklearn models. The Prediction class includes a wrapper around run_pc_stable from the PCMCI class to perform predictor selection. Consider the following data generation process:
End of explanation
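Written out, the links_coeffs dictionary above corresponds to the linear process (with noise terms $\eta^j_t$ added by the toy-model generator):
$$X^0_t = 0.6\,X^0_{t-1} + \eta^0_t,\qquad X^1_t = 0.6\,X^1_{t-1} + 0.8\,X^0_{t-1} + \eta^1_t,\qquad X^2_t = 0.5\,X^2_{t-1} + 0.7\,X^1_{t-1} + \eta^2_t$$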
pred = Prediction(dataframe=dataframe,
cond_ind_test=ParCorr(), #CMIknn ParCorr
prediction_model = sklearn.linear_model.LinearRegression(),
# prediction_model = sklearn.gaussian_process.GaussianProcessRegressor(),
# prediction_model = sklearn.neighbors.KNeighborsRegressor(),
data_transform=sklearn.preprocessing.StandardScaler(),
train_indices= range(int(0.8*T)),
test_indices= range(int(0.8*T), T),
verbosity=1
)
Explanation: We initialize the Prediction class with cond_ind_test=ParCorr(). Secondly, we choose sklearn.linear_model.LinearRegression() here for prediction. Last, we scale the data via data_transform. The class takes care of rescaling the data for prediction. The parameters train_indices and test_indices are used to divide the data up into a training set and test set. The test set is optional since new data can be supplied later. The training set is used to select predictors and fit the model.
End of explanation
target = 2
tau_max = 5
predictors = pred.get_predictors(
selected_targets=[target],
steps_ahead=1,
tau_max=tau_max,
pc_alpha=None
)
graph = np.zeros((N, N, tau_max+1), dtype='bool')
for j in [target]:
for p in predictors[j]:
graph[p[0], j, abs(p[1])] = 1
# Plot time series graph
tp.plot_time_series_graph(
figsize=(6, 3),
# node_aspect=2.,
val_matrix=np.ones(graph.shape),
graph=graph,
var_names=None,
link_colorbar_label='',
); plt.show()
Explanation: Now, we estimate causal predictors using get_predictors for the target variable 2 taking into account a maximum past lag of tau_max. We use pc_alpha=None which optimizes the parameter based on the Akaike score. Note that the predictors are different for each prediction horizon. For example, at a prediction horizon of steps_ahead=1 we get the causal parents from the model plus some others:
End of explanation
tau_max = 30
steps_ahead = 2
target = 2
all_predictors = pred.get_predictors(
selected_targets=[target],
steps_ahead=steps_ahead,
tau_max=tau_max,
pc_alpha=None
)
graph = np.zeros((N, N, tau_max + 1), dtype='bool')
for j in [target]:
for p in all_predictors[j]:
graph[p[0], j, abs(p[1])] = 1
# Plot time series graph
tp.plot_time_series_graph(
figsize=(18, 5),
node_size=0.05,
node_aspect=.3,
val_matrix=np.ones(graph.shape),
graph=graph,
var_names=None,
link_colorbar_label='',
label_fontsize=24
); plt.show()
Explanation: Note that get_predictors is based only on the first step of PCMCI and skips the MCI step, since correct false positive rates are not that relevant for prediction and the first step alone is faster. Now, we set steps_ahead=2 and get different predictors:
End of explanation
pred.fit(target_predictors=all_predictors,
selected_targets=[target],
tau_max=tau_max)
Explanation: These predictors now efficiently avoid overfitting in the following model fit. Here one can specify whether multiple target variables should be fit at once (assuming that predictors have been estimated beforehand for all of them).
End of explanation
predicted = pred.predict(target)
true_data = pred.get_test_array()[0]
plt.scatter(true_data, predicted)
plt.title(r"NRMSE = %.2f" % (np.abs(true_data - predicted).mean()/true_data.std()))
plt.plot(true_data, true_data, 'k-')
plt.xlabel('True test data')
plt.ylabel('Predicted test data')
plt.show()
Explanation: Now we are ready to predict the target variable at the test samples:
End of explanation
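For reference, the quantity labeled NRMSE in the plot titles is computed in the code as
$$\mathrm{NRMSE} = \frac{\frac{1}{n}\sum_{i=1}^{n} \lvert y_i - \hat{y}_i \rvert}{\sigma_y},$$
i.e. a mean absolute error normalized by the standard deviation of the true test values (note that, as implemented, the numerator is a mean absolute error rather than a root-mean-square error).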
new_data = pp.DataFrame(toys.var_process(links_coeffs, T=200)[0])
predicted = pred.predict(target, new_data=new_data)
true_data = pred.get_test_array()[0]
plt.scatter(true_data, predicted)
plt.title(r"NRMSE = %.2f" % (np.abs(true_data - predicted).mean()/true_data.std()))
plt.plot(true_data, true_data, 'k-')
plt.xlabel('True test data')
plt.ylabel('Predicted test data')
plt.show()
Explanation: We can also predict other new data by supplying a new dataframe to new_data. Because we have more samples here, the estimate of NRMSE is more reliable.
End of explanation
whole_predictors = {2:[(i, -tau) for i in range(3) for tau in range(1, tau_max+1)]}
pred.fit(target_predictors=whole_predictors,
selected_targets=[target],
tau_max=tau_max)
# new_data = pp.DataFrame(toys.var_process(links_coeffs, T=100)[0])
predicted = pred.predict(target, new_data=new_data)
# predicted = pred.predict(target)
true_data = pred.get_test_array()[0]
plt.scatter(true_data, predicted)
plt.plot(true_data, true_data, 'k-')
plt.title(r"NRMSE = %.2f" % (np.abs(true_data - predicted).mean()/true_data.std()))
plt.xlabel('True test data')
plt.ylabel('Predicted test data')
plt.show()
Explanation: This prediction is much better than using all past variables which leads to overfitting:
End of explanation
pred = Prediction(dataframe=dataframe,
cond_ind_test=ParCorr(),
prediction_model = sklearn.linear_model.LinearRegression(),
# data_transform=sklearn.preprocessing.StandardScaler(),
train_indices= range(int(0.8*T)),
test_indices= range(int(0.8*T), T),
verbosity=1
)
pred.fit(target_predictors=all_predictors,
selected_targets=[target],
tau_max=tau_max)
predicted = pred.predict(target)
# predicted = pred.predict(target)
true_data = pred.get_test_array()[0]
plt.scatter(true_data, predicted)
plt.plot(true_data, true_data, 'k-')
plt.title(r"NRMSE = %.2f" % (np.abs(true_data - predicted).mean()/true_data.std()))
plt.xlabel('True test data')
plt.ylabel('Predicted test data')
plt.show()
Explanation: Previously, we rescaled the data before fitting, which requires us to also rescale the test data. We can also leave the data unscaled:
End of explanation
tau_max = 10
steps_ahead = 2
target = 2
T = 500
dataframe = pp.DataFrame(toys.var_process(links_coeffs, T=T)[0])
pred = Prediction(dataframe=dataframe,
cond_ind_test=GPDC(), #CMIknn ParCorr
prediction_model = sklearn.gaussian_process.GaussianProcessRegressor(alpha=0.,
kernel=sklearn.gaussian_process.kernels.RBF() +
sklearn.gaussian_process.kernels.WhiteKernel()),
# prediction_model = sklearn.neighbors.KNeighborsRegressor(),
data_transform=sklearn.preprocessing.StandardScaler(),
train_indices= range(int(0.8*T)),
test_indices= range(int(0.8*T), T),
verbosity=1
)
all_predictors = pred.get_predictors(
selected_targets=[target],
steps_ahead=steps_ahead,
tau_max=tau_max,
pc_alpha=0.2
)
pred.fit(target_predictors=all_predictors,
selected_targets=[target],
tau_max=tau_max)
predicted = pred.predict(target)
# predicted = pred.predict(target)
true_data = pred.get_test_array()[0]
plt.scatter(true_data, predicted)
plt.plot(true_data, true_data, 'k-')
plt.title(r"NRMSE = %.2f" % (np.abs(true_data - predicted).mean()/true_data.std()))
plt.xlabel('True test data')
plt.ylabel('Predicted test data')
plt.show()
Explanation: Note the different scales on the x- and y-axes.
Last, let's try a Gaussian process regressor in conjunction with GPDC predictor selection. Here we pass non-default parameters (an explicit RBF + white-noise kernel and alpha=0.) to the GaussianProcessRegressor because the sklearn defaults don't work well here.
End of explanation |
59 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Organized high-throughput calculations
Step3: This functional takes a few arguments, amongst which an output directory, and writes a file to disk. That's pretty much it.
However, you'll notice that it returns an object of class Extract. We'll create this class in a second. This class is capable of checking whether the functional did run correctly or not (Extract.success attribute is True or False). For VASP or Espresso, it is also capable of parsing output files to recover quantities, like the total energy or the eigenvalues.
This class is not completely necessary to create the Job Folder, but knowing when a job is successful and being able to easily process its output are really nice features to have.
The following is a dummy Extraction class for the dummy functional. It knows to check for the existence of an OUTCAR file (a dummy OUTCAR, not a real one) and how to parse it.
Step4: Creating and accessing job-folders
Job-folders can be created with two simple lines of code
Step5: To add further job-folders, one can do
Step6: As you can see, job-folders can be given any structure that on-disk directories can. What is more, a job-folder can access other job-folders with the same kind of syntax that one would use (on unices) to access other directories
Step7: And trying to access non-existing folders will get you in trouble
Step8: Furthermore, job-folders know what they are
Step9: Who their parents are
Step10: They know about their sub-folders
Step11: As well as their ancestral lineage all the way to the first matriarch
Step12: A Job-folder that executes code
The whole point of a job-folder is to create an architecture for calculations. Each job-folder can contain at most a single calculation. A calculation is setup by passing to the job-folder a function and the parameters for calling it.
Step13: In the above, the function functional from the dummy module created previously is imported into the namespace. The special attribute job.functional is set to functional. Two arguments, structure and value, are specified by adding the to the dictionary job.params. Please note that the third line does not contain parenthesis
Step14: Assuming that you have the unix program tree installed, the following will show that an OUTCAR file was created in the right directory
Step15: Running the job-folder jobA is exactly equivalent to calling the functional directly
Step16: We can now iterate over executable subfolders
Step17: Or subsets of executable folders
Step18: Saving to disk using the python API
Jobfolders can be saved to and loaded from disk using python functions | Python Code:
%%writefile dummy.py
def functional(structure, outdir=None, value=False, **kwargs):
    """A dummy functional"""
from copy import deepcopy
from pickle import dump
from random import random
from py.path import local
structure = deepcopy(structure)
structure.value = value
outdir = local(outdir)
outdir.ensure(dir=True)
dump((random(), structure, value, functional), outdir.join('OUTCAR').open('wb'))
return Extract(outdir)
Explanation: Organized high-throughput calculations: job-folders
Pylada provides tools to organize high-throughput calculations in a systematic
manner. The whole high-throughput experience revolves around job-folders.
These are convenient ways of organizing actual calculations. They can be though
of as folders on a file system, or directories in unix parlance, each one
dedicated to running a single actual calculation (eg launching :ref:VASP
<vasp_ug> once). The added benefits beyond creating the same file-structure
with bash are:
the ability to create a tree of folders/calculations using the power of the
python programming language. No more copy-pasting files and unintelligible
bash scripts!
the ability to launch all folders simultaneously
the ability to collect the results across all folders simultaneously, all
within python, and with all of python's goodies. E.g. no more copy-pasting
into excel by hand. Just do the summing, and multiplying, and graphing
there and then.
Actually, there are a lot more benefits. Having everything - from input to
output - within the same modern and efficient programming language means there
is no limit to what can be achieved.
The following describes how job-folders are created. The fun bits,
launching jobs, collecting results, manipulating all job-folders
simultaneously, can be found in the next section. Indeed, all of these are
intrinsically linked to the Pylada's IPython interface.
Prep: creating a dummy functional
First off, we will need a functional. Rather that use something heavy, like VASP, we will use a dummy functional which does pretty much nothing... We will write it to a file, so that it can be imported later on.
End of explanation
%%writefile -a dummy.py
def Extract(outdir=None):
    """An extraction function for a dummy functional"""
from os import getcwd
from collections import namedtuple
from pickle import load
from py.path import local
if outdir == None:
        outdir = getcwd()  # default to the current working directory
Extract = namedtuple('Extract', ['success', 'directory',
'energy', 'structure', 'value', 'functional'])
outdir = local(outdir)
if not outdir.check():
return Extract(False, str(outdir), None, None, None, None)
if not outdir.join('OUTCAR').check(file=True):
return Extract(False, str(outdir), None, None, None, None)
with outdir.join('OUTCAR').open('rb') as file:
        energy, structure, value, functional = load(file)  # matches the tuple order written by dump() in functional()
return Extract(True, outdir, energy, structure, value, functional)
functional.Extract = Extract
Explanation: This functional takes a few arguments, amongst which an output directory, and writes a file to disk. That's pretty much it.
However, you'll notice that it returns an object of class Extract. We'll create this class in a second. This class is capable of checking whether the functional did run correctly or not (Extract.success attribute is True or False). For VASP or Espresso, it is also capable of parsing output files to recover quantities, like the total energy or the eigenvalues.
This class is not completely necessary to create the Job Folder, but knowing when a job is successful and being able to easily process its output are really nice features to have.
The following is a dummy Extraction class for the dummy functional. It knows to check for the existence of an OUTCAR file (a dummy OUTCAR, not a real one) and how to parse it.
End of explanation
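As a quick, hypothetical sanity check of the Extract object (assuming dummy.py has been written as above and pylada is installed; the output directory name is arbitrary):
from pylada.crystal.binary import zinc_blende
from dummy import functional

result = functional(zinc_blende(), outdir='tmp/check', value=1)
print(result.success, result.value)   # fields of the Extract namedtuple defined above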
from pylada.jobfolder import JobFolder
root = JobFolder()
Explanation: Creating and accessing job-folders
Job-folders can be created with two simple lines of code:
End of explanation
jobA = root / 'jobA'
jobB = root / 'another' / 'jobB'
jobBprime = root / 'another' / 'jobB' / 'prime'
Explanation: To add further job-folders, one can do:
End of explanation
assert jobA['/'] is root
assert jobA['../another/jobB'] is jobB
assert jobB['prime'] is jobBprime
assert jobBprime['../../'] is not jobB
Explanation: As you can see, job-folders can be given any structure that on-disk directories can. What is more, a job-folder can access other job-folders with the same kind of syntax that one would use (on unices) to access other directories:
End of explanation
try:
root['..']
except KeyError:
pass
else:
raise Exception("I expected an error")
Explanation: And trying to access non-existing folders will get you in trouble:
End of explanation
jobA.name
Explanation: Furthermore, job-folders know what they are:
End of explanation
jobB.parent.name
Explanation: Who their parents are:
End of explanation
assert 'prime' in jobB
assert '/jobA' in jobBprime
Explanation: They know about their sub-folders:
End of explanation
assert jobB.root is root
Explanation: As well as their ancestral lineage all the way to the first matriarch:
End of explanation
from pylada.crystal.binary import zinc_blende
from dummy import functional
jobA.functional = functional
jobA.params['structure'] = zinc_blende()
jobA.params['value'] = 5
Explanation: A Job-folder that executes code
The whole point of a job-folder is to create an architecture for calculations. Each job-folder can contain at most a single calculation. A calculation is set up by passing to the job-folder a function and the parameters for calling it.
End of explanation
directory = "tmp/" + jobA.name[1:]
result = jobA.compute(outdir=directory)
assert result.success
Explanation: In the above, the function functional from the dummy module created previously is imported into the namespace. The special attribute job.functional is set to functional. Two arguments, structure and value, are specified by adding them to the dictionary job.params. Please note that the third line does not contain parentheses: this is not a function call, it merely saves a reference to the function with the object of calling it later. 'C' aficionados should think of it as saving a pointer to a function.
Warning: The reference to functional is deepcopied: the instance that is saved to the job-folder is not the one that was passed to it. On the other hand, the parameters (jobA.params) are held by reference rather than by value.
Tip: To force a job-folder to hold a functional by reference rather than by value, do:
Python
jobA._functional = functional
The parameters in job.params should be pickleable so that the folder can be saved to disk later. Jobfolder.functional must be
pickleable and callable. Setting Jobfolder.functional to
something else will immediately fail. In practice, this means it can be a
function or a callable class, as long as that function or class is imported from a module. It cannot be defined in __main__, e.g. the script that you run to create the job-folders. And that's why the dummy functional in this example is written to its own dummy.py file.
That said, we can now execute each jobA by calling the function compute:
End of explanation
%%bash
[ ! -e tree ] || tree tmp/
Explanation: Assuming that you have the unix program tree installed, the following will show that an OUTCAR file was created in the right directory:
End of explanation
from pylada.jobfolder import JobFolder
from pylada.crystal.binary import zinc_blende
root = JobFolder()
structures = ['diamond', 'diamond/alloy', 'GaAs']
stuff = [0, 1, 2]
species = [('Si', 'Si'), ('Si', 'Ge'), ('Ga', 'As')]
for name, value, species in zip(structures, stuff, species):
job = root / name
job.functional = functional
job.params['value'] = value
job.params['structure'] = zinc_blende()
for atom, specie in zip(job.structure, species):
atom.type = specie
print(root)
Explanation: Running the job-folder jobA is exactly equivalent to calling the functional directly:
python
functional(structure=zinc_blende(), value=5, outdir='tmp/jobA')
In practice, what we have done is created an interface where any program can be called in the same way. This will be extremly useful when launching many jobs simultaneously.
Creating multiple executable jobs
The crux of this setup is the ability to create jobs programmatically:
Finally, let's note that executable job-folders (i.e. those for which jobfolder.functional is set) can be easily iterated over with jobfolder.keys(), jobfolder.values(), and jobfolder.items().
End of explanation
print(list(root.keys()))
Explanation: We can now iterate over executable subfolders:
End of explanation
for jobname, job in root['diamond'].items():
print("diamond/", jobname, " with ", len(job.params['structure']), " atoms")
Explanation: Or subsets of executable folders:
End of explanation
from pylada.jobfolder import load, save
save(root, 'root.dict', overwrite=True) # saves to file
root = load('root.dict') # loads from file
print(root)
Explanation: Saving to disk using the python API
Jobfolders can be saved to and loaded from disk using python functions:
End of explanation |
60 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Review Classification using Active Learning
Author
Step1: Loading and preprocessing the data
We will be using the IMDB reviews dataset for our experiments. This dataset has 50,000
reviews in total, including training and testing splits. We will merge these splits and
sample our own, balanced training, validation and testing sets.
Step2: Active learning starts with labeling a subset of data.
For the ratio sampling technique that we will be using, we will need well-balanced training,
validation and testing splits.
Step3: Fitting the TextVectorization layer
Since we are working with text data, we will need to encode the text strings as vectors which
would then be passed through an Embedding layer. To make this tokenization process
faster, we use the map() function with its parallelization functionality.
Step4: Creating Helper Functions
Step5: Creating the Model
We create a small bidirectional LSTM model. When using Active Learning, you should make sure
that the model architecture is capable of overfitting to the initial data.
Overfitting gives a strong hint that the model will have enough capacity for
future, unseen data.
Step6: Training on the entire dataset
To show the effectiveness of Active Learning, we will first train the model on the entire
dataset containing 40,000 labeled samples. This model will be used for comparison later.
Step7: Training via Active Learning
The general process we follow when performing Active Learning is demonstrated below | Python Code:
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
import re
import string
tfds.disable_progress_bar()
Explanation: Review Classification using Active Learning
Author: Darshan Deshpande<br>
Date created: 2021/10/29<br>
Last modified: 2021/10/29<br>
Description: Demonstrating the advantages of active learning through review classification.
Introduction
With the growth of data-centric Machine Learning, Active Learning has grown in popularity
amongst businesses and researchers. Active Learning seeks to progressively
train ML models so that the resultant model requires lesser amount of training data to
achieve competitive scores.
The structure of an Active Learning pipeline involves a classifier and an oracle. The
oracle is an annotator that cleans, selects, labels the data, and feeds it to the model
when required. The oracle is a trained individual or a group of individuals that
ensure consistency in labeling of new data.
The process starts with annotating a small subset of the full dataset and training an
initial model. The best model checkpoint is saved and then tested on a balanced test
set. The test set must be carefully sampled because the full training process will be
dependent on it. Once we have the initial evaluation scores, the oracle is tasked with
labeling more samples; the number of data points to be sampled is usually determined by
the business requirements. After that, the newly sampled data is added to the training
set, and the training procedure repeats. This cycle continues until either an
acceptable score is reached or some other business metric is met.
This tutorial provides a basic demonstration of how Active Learning works by
demonstrating a ratio-based (least confidence) sampling strategy that results in lower
overall false positive and negative rates when compared to a model trained on the entire
dataset. This sampling falls under the domain of uncertainty sampling, in which new
datasets are sampled based on the uncertainty that the model outputs for the
corresponding label. In our example, we compare our model's false positive and false
negative rates and annotate the new data based on their ratio.
Some other sampling techniques include:
Committee sampling:
Using multiple models to vote for the best data points to be sampled
Entropy reduction:
Sampling according to an entropy threshold, selecting more of the samples that produce the highest entropy score.
Minimum margin based sampling:
Selects data points closest to the decision boundary
Importing required libraries
End of explanation
dataset = tfds.load(
"imdb_reviews",
split="train + test",
as_supervised=True,
batch_size=-1,
shuffle_files=False,
)
reviews, labels = tfds.as_numpy(dataset)
print("Total examples:", reviews.shape[0])
Explanation: Loading and preprocessing the data
We will be using the IMDB reviews dataset for our experiments. This dataset has 50,000
reviews in total, including training and testing splits. We will merge these splits and
sample our own, balanced training, validation and testing sets.
End of explanation
val_split = 2500
test_split = 2500
train_split = 7500
# Separating the negative and positive samples for manual stratification
x_positives, y_positives = reviews[labels == 1], labels[labels == 1]
x_negatives, y_negatives = reviews[labels == 0], labels[labels == 0]
# Creating training, validation and testing splits
x_val, y_val = (
tf.concat((x_positives[:val_split], x_negatives[:val_split]), 0),
tf.concat((y_positives[:val_split], y_negatives[:val_split]), 0),
)
x_test, y_test = (
tf.concat(
(
x_positives[val_split : val_split + test_split],
x_negatives[val_split : val_split + test_split],
),
0,
),
tf.concat(
(
y_positives[val_split : val_split + test_split],
y_negatives[val_split : val_split + test_split],
),
0,
),
)
x_train, y_train = (
tf.concat(
(
x_positives[val_split + test_split : val_split + test_split + train_split],
x_negatives[val_split + test_split : val_split + test_split + train_split],
),
0,
),
tf.concat(
(
y_positives[val_split + test_split : val_split + test_split + train_split],
y_negatives[val_split + test_split : val_split + test_split + train_split],
),
0,
),
)
# Remaining pool of samples are stored separately. These are only labeled as and when required
x_pool_positives, y_pool_positives = (
x_positives[val_split + test_split + train_split :],
y_positives[val_split + test_split + train_split :],
)
x_pool_negatives, y_pool_negatives = (
x_negatives[val_split + test_split + train_split :],
y_negatives[val_split + test_split + train_split :],
)
# Creating TF Datasets for faster prefetching and parallelization
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
pool_negatives = tf.data.Dataset.from_tensor_slices(
(x_pool_negatives, y_pool_negatives)
)
pool_positives = tf.data.Dataset.from_tensor_slices(
(x_pool_positives, y_pool_positives)
)
print(f"Initial training set size: {len(train_dataset)}")
print(f"Validation set size: {len(val_dataset)}")
print(f"Testing set size: {len(test_dataset)}")
print(f"Unlabeled negative pool: {len(pool_negatives)}")
print(f"Unlabeled positive pool: {len(pool_positives)}")
Explanation: Active learning starts with labeling a subset of data.
For the ratio sampling technique that we will be using, we will need well-balanced training,
validation and testing splits.
End of explanation
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, "<br />", " ")
return tf.strings.regex_replace(
stripped_html, f"[{re.escape(string.punctuation)}]", ""
)
vectorizer = layers.TextVectorization(
3000, standardize=custom_standardization, output_sequence_length=150
)
# Adapting the dataset
vectorizer.adapt(
train_dataset.map(lambda x, y: x, num_parallel_calls=tf.data.AUTOTUNE).batch(256)
)
def vectorize_text(text, label):
text = vectorizer(text)
return text, label
train_dataset = train_dataset.map(
vectorize_text, num_parallel_calls=tf.data.AUTOTUNE
).prefetch(tf.data.AUTOTUNE)
pool_negatives = pool_negatives.map(vectorize_text, num_parallel_calls=tf.data.AUTOTUNE)
pool_positives = pool_positives.map(vectorize_text, num_parallel_calls=tf.data.AUTOTUNE)
val_dataset = val_dataset.batch(256).map(
vectorize_text, num_parallel_calls=tf.data.AUTOTUNE
)
test_dataset = test_dataset.batch(256).map(
vectorize_text, num_parallel_calls=tf.data.AUTOTUNE
)
Explanation: Fitting the TextVectorization layer
Since we are working with text data, we will need to encode the text strings as vectors which
would then be passed through an Embedding layer. To make this tokenization process
faster, we use the map() function with its parallelization functionality.
End of explanation
# Helper function for merging new history objects with older ones
def append_history(losses, val_losses, accuracy, val_accuracy, history):
losses = losses + history.history["loss"]
val_losses = val_losses + history.history["val_loss"]
accuracy = accuracy + history.history["binary_accuracy"]
val_accuracy = val_accuracy + history.history["val_binary_accuracy"]
return losses, val_losses, accuracy, val_accuracy
# Plotter function
def plot_history(losses, val_losses, accuracies, val_accuracies):
plt.plot(losses)
plt.plot(val_losses)
plt.legend(["train_loss", "val_loss"])
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.show()
plt.plot(accuracies)
plt.plot(val_accuracies)
plt.legend(["train_accuracy", "val_accuracy"])
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.show()
Explanation: Creating Helper Functions
End of explanation
def create_model():
model = keras.models.Sequential(
[
layers.Input(shape=(150,)),
layers.Embedding(input_dim=3000, output_dim=128),
layers.Bidirectional(layers.LSTM(32, return_sequences=True)),
layers.GlobalMaxPool1D(),
layers.Dense(20, activation="relu"),
layers.Dropout(0.5),
layers.Dense(1, activation="sigmoid"),
]
)
model.summary()
return model
Explanation: Creating the Model
We create a small bidirectional LSTM model. When using Active Learning, you should make sure
that the model architecture is capable of overfitting to the initial data.
Overfitting gives a strong hint that the model will have enough capacity for
future, unseen data.
End of explanation
def train_full_model(full_train_dataset, val_dataset, test_dataset):
model = create_model()
model.compile(
loss="binary_crossentropy",
optimizer="rmsprop",
metrics=[
keras.metrics.BinaryAccuracy(),
keras.metrics.FalseNegatives(),
keras.metrics.FalsePositives(),
],
)
# We will save the best model at every epoch and load the best one for evaluation on the test set
history = model.fit(
full_train_dataset.batch(256),
epochs=20,
validation_data=val_dataset,
callbacks=[
keras.callbacks.EarlyStopping(patience=4, verbose=1),
keras.callbacks.ModelCheckpoint(
"FullModelCheckpoint.h5", verbose=1, save_best_only=True
),
],
)
# Plot history
plot_history(
history.history["loss"],
history.history["val_loss"],
history.history["binary_accuracy"],
history.history["val_binary_accuracy"],
)
# Loading the best checkpoint
model = keras.models.load_model("FullModelCheckpoint.h5")
print("-" * 100)
print(
"Test set evaluation: ",
model.evaluate(test_dataset, verbose=0, return_dict=True),
)
print("-" * 100)
return model
# Sampling the full train dataset to train on
full_train_dataset = (
train_dataset.concatenate(pool_positives)
.concatenate(pool_negatives)
.cache()
.shuffle(20000)
)
# Training the full model
full_dataset_model = train_full_model(full_train_dataset, val_dataset, test_dataset)
Explanation: Training on the entire dataset
To show the effectiveness of Active Learning, we will first train the model on the entire
dataset containing 40,000 labeled samples. This model will be used for comparison later.
End of explanation
def train_active_learning_models(
train_dataset,
pool_negatives,
pool_positives,
val_dataset,
test_dataset,
num_iterations=3,
sampling_size=5000,
):
# Creating lists for storing metrics
losses, val_losses, accuracies, val_accuracies = [], [], [], []
model = create_model()
# We will monitor the false positives and false negatives predicted by our model
# These will decide the subsequent sampling ratio for every Active Learning loop
model.compile(
loss="binary_crossentropy",
optimizer="rmsprop",
metrics=[
keras.metrics.BinaryAccuracy(),
keras.metrics.FalseNegatives(),
keras.metrics.FalsePositives(),
],
)
# Defining checkpoints.
# The checkpoint callback is reused throughout the training since it only saves the best overall model.
checkpoint = keras.callbacks.ModelCheckpoint(
"AL_Model.h5", save_best_only=True, verbose=1
)
# Here, patience is set to 4. This can be set higher if desired.
early_stopping = keras.callbacks.EarlyStopping(patience=4, verbose=1)
print(f"Starting to train with {len(train_dataset)} samples")
# Initial fit with a small subset of the training set
history = model.fit(
train_dataset.cache().shuffle(20000).batch(256),
epochs=20,
validation_data=val_dataset,
callbacks=[checkpoint, early_stopping],
)
# Appending history
losses, val_losses, accuracies, val_accuracies = append_history(
losses, val_losses, accuracies, val_accuracies, history
)
for iteration in range(num_iterations):
# Getting predictions from previously trained model
predictions = model.predict(test_dataset)
# Generating labels from the output probabilities
rounded = tf.where(tf.greater(predictions, 0.5), 1, 0)
        # Evaluating the number of zeros and ones incorrectly classified
_, _, false_negatives, false_positives = model.evaluate(test_dataset, verbose=0)
print("-" * 100)
print(
f"Number of zeros incorrectly classified: {false_negatives}, Number of ones incorrectly classified: {false_positives}"
)
# This technique of Active Learning demonstrates ratio based sampling where
# Number of ones/zeros to sample = Number of ones/zeros incorrectly classified / Total incorrectly classified
if false_negatives != 0 and false_positives != 0:
total = false_negatives + false_positives
sample_ratio_ones, sample_ratio_zeros = (
false_positives / total,
false_negatives / total,
)
# In the case where all samples are correctly predicted, we can sample both classes equally
else:
sample_ratio_ones, sample_ratio_zeros = 0.5, 0.5
print(
f"Sample ratio for positives: {sample_ratio_ones}, Sample ratio for negatives:{sample_ratio_zeros}"
)
# Sample the required number of ones and zeros
sampled_dataset = pool_negatives.take(
int(sample_ratio_zeros * sampling_size)
).concatenate(pool_positives.take(int(sample_ratio_ones * sampling_size)))
# Skip the sampled data points to avoid repetition of sample
pool_negatives = pool_negatives.skip(int(sample_ratio_zeros * sampling_size))
pool_positives = pool_positives.skip(int(sample_ratio_ones * sampling_size))
# Concatenating the train_dataset with the sampled_dataset
train_dataset = train_dataset.concatenate(sampled_dataset).prefetch(
tf.data.AUTOTUNE
)
print(f"Starting training with {len(train_dataset)} samples")
print("-" * 100)
# We recompile the model to reset the optimizer states and retrain the model
model.compile(
loss="binary_crossentropy",
optimizer="rmsprop",
metrics=[
keras.metrics.BinaryAccuracy(),
keras.metrics.FalseNegatives(),
keras.metrics.FalsePositives(),
],
)
history = model.fit(
train_dataset.cache().shuffle(20000).batch(256),
validation_data=val_dataset,
epochs=20,
callbacks=[
checkpoint,
keras.callbacks.EarlyStopping(patience=4, verbose=1),
],
)
# Appending the history
losses, val_losses, accuracies, val_accuracies = append_history(
losses, val_losses, accuracies, val_accuracies, history
)
# Loading the best model from this training loop
model = keras.models.load_model("AL_Model.h5")
# Plotting the overall history and evaluating the final model
plot_history(losses, val_losses, accuracies, val_accuracies)
print("-" * 100)
print(
"Test set evaluation: ",
model.evaluate(test_dataset, verbose=0, return_dict=True),
)
print("-" * 100)
return model
active_learning_model = train_active_learning_models(
train_dataset, pool_negatives, pool_positives, val_dataset, test_dataset
)
Explanation: Training via Active Learning
The general process we follow when performing Active Learning is demonstrated below:
The pipeline can be summarized in five parts:
Sample and annotate a small, balanced training dataset
Train the model on this small subset
Evaluate the model on a balanced testing set
If the model satisfies the business criteria, deploy it in a real time setting
If it doesn't pass the criteria, sample a few more samples according to the ratio of
false positives and negatives, add them to the training set and repeat from step 2 till
the model passes the tests or till all available data is exhausted.
For the code below, we will perform sampling using the following formula:<br/>
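Concretely, the sampling step in the code computes, at each iteration,
$$n_{\text{ones}} = \Big\lfloor \tfrac{FP}{FP+FN}\times \text{sampling\_size} \Big\rfloor, \qquad n_{\text{zeros}} = \Big\lfloor \tfrac{FN}{FP+FN}\times \text{sampling\_size} \Big\rfloor,$$
where FP and FN are the false positives and false negatives measured on the test set (and both ratios default to 0.5 when either count is zero).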
Active Learning techniques use callbacks extensively for progress tracking. We will be
using model checkpointing and early stopping for this example. The patience parameter
for Early Stopping can help minimize overfitting and the time required. We have set it
patience=4 for now but since the model is robust, we can increase the patience level if
desired.
Note: We are not loading the checkpoint after the first training iteration. In my
experience working on Active Learning techniques, this helps the model probe the
newly formed loss landscape. Even if the model fails to improve in the second iteration,
we will still gain insight about the possible future false positive and negative rates.
This will help us sample a better set in the next iteration where the model will have a
greater chance to improve.
End of explanation |
61 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Get Data
Step1: Basic Heat map
Step2: Hide tick_labels and color axis using 'axes_options'
Step3: Non Uniform Heat map
Step4: Alignment of the data with respect to the grid
For an N-by-N matrix, N+1 points along the row or the column are assumed to be end points.
Step5: By default, for N points along any dimension, data aligns to the start of the rectangles in the grid.
The grid extends infinitely in the other direction. By default, the grid extends infinitely
towards the bottom and the right.
Step6: By changing the row_align and column_align properties, the grid can extend in the opposite direction
Step7: For N+1 points in any direction, the grid extends infinitely in both directions
Step8: Changing opacity and stroke
Step9: Selections on the grid map
Selection on the GridHeatMap works similarly to Excel. Clicking on a cell selects the cell, and deselects the previous selection. Using the Ctrl key allows multiple cells to be selected, while the Shift key selects the range from the last cell in the selection to the current cell.
Step10: The selected trait of a GridHeatMap contains a list of lists, with each sub-list containing the row and column index of a selected cell.
Step11: Registering on_element_click event handler | Python Code:
import numpy as np
from bqplot import LinearScale, pyplot as plt  # bqplot's pyplot API is used throughout this example
np.random.seed(0)
data = np.random.randn(10, 10)
Explanation: Get Data
End of explanation
from ipywidgets import *
fig = plt.figure(padding_y=0.0)
grid_map = plt.gridheatmap(data)
fig
grid_map.display_format = '.2f'
grid_map.font_style = {'font-size': '16px', 'fill':'blue', 'font-weight': 'bold'}
Explanation: Basic Heat map
End of explanation
axes_options = {'column': {'visible': False}, 'row': {'visible': False}, 'color': {'visible': False}}
fig = plt.figure(padding_y=0.0)
grid_map = plt.gridheatmap(data, axes_options=axes_options)
fig
Explanation: Hide tick_labels and color axis using 'axes_options'
End of explanation
fig = plt.figure(padding_y=0.0)
plt.scales(scales={'x': LinearScale(), 'y': LinearScale(reverse=True)})
## The data along the rows is not uniform. Hence the 5th row(from top) of the map
## is twice the height of the remaining rows.
row_data = np.arange(10)
row_data[5:] = np.arange(6, 11)
column_data = np.arange(10, 20)
grid_map = plt.gridheatmap(data, row=row_data, column=column_data)
fig
print(row_data.shape)
print(column_data.shape)
print(data.shape)
Explanation: Non Uniform Heat map
End of explanation
fig = plt.figure(padding_y=0.0)
plt.scales(scales={'x': LinearScale(), 'y': LinearScale(reverse=True)})
row_data = np.arange(11)
column_data = np.arange(10, 21)
grid_map = plt.gridheatmap(data, row=row_data, column=column_data)
fig
Explanation: Alignment of the data with respect to the grid
For an N-by-N matrix, N+1 points along the row or the column are assumed to be end points.
End of explanation
fig = plt.figure(padding_y=0.0)
plt.scales(scales={'x': LinearScale(),
'y': LinearScale(reverse=True, max=15)})
row_data = np.arange(10)
column_data = np.arange(10, 20)
grid_map = plt.gridheatmap(data, row=row_data, column=column_data)
fig
Explanation: By default, for N points along any dimension, data aligns to the start of the rectangles in the grid.
The grid extends infinitely in the other direction. By default, the grid extends infinitely
towards the bottom and the right.
End of explanation
fig = plt.figure(padding_y=0.0)
plt.scales(scales={'x': LinearScale(),
'y': LinearScale(reverse=True, min=-5, max=15)})
row_data = np.arange(10)
column_data = np.arange(10, 20)
grid_map = plt.gridheatmap(data, row=row_data, column=column_data, row_align='end')
fig
Explanation: By changing the row_align and column_align properties, the grid can extend in the opposite direction
End of explanation
fig = plt.figure(padding_y=0.0)
plt.scales(scales={'x': LinearScale(),
'y': LinearScale(reverse=True, min=-5, max=15)})
row_data = np.arange(9)
column_data = np.arange(10, 20)
grid_map = plt.gridheatmap(data, row=row_data, column=column_data, row_align='end')
fig
Explanation: For N+1 points in any direction, the grid extends infinitely in both directions
End of explanation
fig = plt.figure(padding_y=0.0)
grid_map = plt.gridheatmap(data, opacity=0.3, stroke='white', axes_options=axes_options)
fig
Explanation: Changing opacity and stroke
End of explanation
data = np.random.randn(10, 10)
fig = plt.figure(padding_y=0.0)
grid_map = plt.gridheatmap(data, interactions={'click':'select'},
selected_style={'stroke': 'blue', 'stroke-width': 3},
axes_options=axes_options)
fig
Explanation: Selections on the grid map
Selection on the GridHeatMap works similarly to Excel. Clicking on a cell selects the cell, and deselects the previous selection. Using the Ctrl key allows multiple cells to be selected, while the Shift key selects the range from the last cell in the selection to the current cell.
End of explanation
grid_map.selected
Explanation: The selected trait of a GridHeatMap contains a list of lists, with each sub-list containing the row and column index of a selected cell.
End of explanation
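For instance, after two cells have been clicked the trait might hold something like the following (hypothetical indices):
grid_map.selected
# e.g. [[1, 2], [3, 0]] -> the cells at row 1 / column 2 and row 3 / column 0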
import numpy as np
from IPython.display import display
np.random.seed(0)
data = np.random.randn(10, 10)
figure = plt.figure(padding_y=0.0)
grid_map = plt.gridheatmap(data, interactions={'click': 'select'},
selected_style={'stroke': 'blue', 'stroke-width': 3})
from ipywidgets import Output
out = Output()
@out.capture()
def print_event(self, target):
print(target)
# test
print_event(1, 'test output')
grid_map.on_element_click(print_event)
display(figure)
display(out)
Explanation: Registering on_element_click event handler
End of explanation |
62 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assignment 5
After downloading the images in the F475W and F850LP filters of the object VCC1316 (M87), the steps of the first assignment are followed to generate the catalog.
From Sirianni et al. (2005) we take the WFC scale (0.05''/px) and the AB-system zeropoints (from Table 10), and Sextractor is run.
An aperture correction is applied following Table 3 (a 2-pixel radius at the 0.05''/px scale corresponds to 0.1'').
A reddening correction (for an E-type SED) is applied following Table 14 and the (B-V) value from NED
Step1: Sextractor was allowed to remove the galaxy using its sky estimate. The check image (-background) is kept as an example of the result. It works well except for the jet and the center of the galaxy. The detections are dominated by globular clusters.
<img src="ds9.jpeg" width="500">
Step2: To find the distance, we use the expected absolute magnitude of $-8.4$ (Jordán et al. (2006)) and the apparent magnitude of $24.3$ obtained from the histogram
Step3: The resulting distance is 34.67 Mpc. Given the known distance (16.5 Mpc), an apparent magnitude of 22.09 would be expected. We infer that some error in the calibration caused the derived distance to be more than twice the expected value.
Chandra
An image with a 98.55 ks exposure is downloaded from the Chandra archive | Python Code:
from astropy.io import fits
import numpy as np
f475 = fits.open('hst_9401_02_acs_wfc_f475w_drz.fits')
f850 = fits.open('hst_9401_02_acs_wfc_f850lp_drz.fits')
f475[1].writeto('sci_f475w_m87.fits',clobber=True)
f475[2].writeto('invvar_f475w_m87.fits',clobber=True)
f850[1].writeto('sci_f850lp_m87.fits',clobber=True)
f850[2].writeto('invvar_f850lp_m87.fits',clobber=True)
f475.close()
f850.close()
!sextractor sci_f475w_m87.fits -c f475w.sex
!sextractor sci_f850lp_m87.fits -c f850lp.sex
Explanation: Assignment 5
After downloading the images in the F475W and F850LP filters of the object VCC1316 (M87), the steps of the first assignment are followed to generate the catalog.
From Sirianni et al. (2005) we take the WFC scale (0.05''/px) and the AB-system zeropoints (from Table 10), and Sextractor is run.
An aperture correction is applied following Table 3 (a 2-pixel radius at the 0.05''/px scale corresponds to 0.1'').
A reddening correction (for an E-type SED) is applied following Table 14 and the (B-V) value from NED
End of explanation
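In formula form, the correction applied in the matching code that follows can be summarized as
$$m_{\rm corr} = m_{\rm SE} + 2.5\log_{10}(\mathrm{EE}) - R_\lambda\,E(B-V),$$
where the factors 0.669 (F475W) and 0.538 (F850LP) used in the code are interpreted here as enclosed-flux fractions EE at r = 0.1'' from the aperture-correction table (an assumption), $R_\lambda$ = 3.591 and 1.472 are the reddening coefficients from Table 14, and E(B-V) = 0.083 - 0.063 = 0.020 is the value set as BV in the code.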
from astropy import units as u
from astropy.coordinates import SkyCoord
# Load lists with RA and DEC for each image
RA475 = np.loadtxt('f475w.cat',usecols=(3,))
DE475 = np.loadtxt('f475w.cat',usecols=(4,))
RA850 = np.loadtxt('f850lp.cat',usecols=(3,))
DE850 = np.loadtxt('f850lp.cat',usecols=(4,))
# Matching done with astropy. The f850lp catalog contains more objects
c = SkyCoord(ra=RA475*u.degree, dec=DE475*u.degree)
catalog = SkyCoord(ra=RA850*u.degree, dec=DE850*u.degree)
idx = c.match_to_catalog_sky(catalog)
# Extract from the f475w.cat catalog the rows indicated by the match
matches = list(idx[0])
f475w = np.loadtxt('f475w.cat')
f850lp = np.loadtxt('f850lp.cat')
out = []
BV = 0.083-0.063
j = 0
for i in matches:
out.append(np.concatenate(
[f475w[j]+ 2.5*np.log10(0.669)- (3.591*BV),
f850lp[i]+ 2.5*np.log10(0.538)- (1.472*BV)]))
j = j+1
# Write the output to a file
np.savetxt('m87_match_f475w_f850lp.cat',out,
fmt='%d\t%.4f\t%.4f\t%.7f\t%.7f\t%d\t%.4f\t%.4f\t%.7f\t%.7f',
           header='f475wN\tf475wMAG\tf475wMAGERR\tf475wALPHA\tf475wDELTA\tf850lpN\tf850lpMAG\tf850lpMAGERR\tf850lpALPHA\tf850lpDELTA')
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
from astropy.io import ascii
tbl = ascii.read('m87_match_f475w_f850lp.cat')
plt.figure(figsize=(10,10))
plt.hist(tbl["f475wMAG"] - tbl["f850lpMAG"], bins=220)
plt.xlabel("$m_{F475W} - m_{F850LP}$", fontsize=20)
plt.ylabel("N", fontsize=20)
plt.xlim(0, 2)
plt.show()
plt.close()
plt.figure(figsize=(10,10))
plt.hist(tbl["f475wMAG"], histtype = 'step', color='b',label='$mF475W$',bins=50)
plt.hist(tbl["f850lpMAG"], histtype = 'step', color='r',label='$mF850LP$',bins=50)
plt.legend()
plt.xticks(list(plt.xticks()[0]) + [24.3])
plt.axvline(x=24.3,linewidth=2, color='g')
plt.xlabel("$m_{F475W}, m_{F850LP}$", fontsize=20)
plt.ylabel("N", fontsize=20)
plt.show()
plt.close()
Explanation: Sextractor was allowed to remove the galaxy using its sky estimate. The check image (-background) is kept as an example of the result. It works well except for the jet and the center of the galaxy. The detections are dominated by globular clusters.
<img src="ds9.jpeg" width="500">
End of explanation
m = 24.3
dm = m+8.4
print(dm)
print(10**((dm+5)/5))
Explanation: To find the distance, we use the expected absolute magnitude of $-8.4$ (Jordán et al. (2006)) and the apparent magnitude of $24.3$ obtained from the histogram
End of explanation
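For reference, the cell above applies the distance-modulus relation
$$m - M = 5\log_{10}(d_{\rm pc}) - 5 \;\Rightarrow\; d = 10^{\,(m-M+5)/5}\ {\rm pc},$$
so with m = 24.3 and M = -8.4 one gets m - M = 32.7 and $d \approx 10^{7.54}\,\mathrm{pc} \approx 34.7$ Mpc.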
!sextractor chandra.fits -c chandra.sex
RAchan = np.loadtxt('chandra.cat',usecols=(3,))
DEchan = np.loadtxt('chandra.cat',usecols=(4,))
# Matching done with astropy. The chandra catalog contains fewer objects
c = SkyCoord(ra=RAchan*u.degree, dec=DEchan*u.degree)
catalog = SkyCoord(ra=RA850*u.degree, dec=DE850*u.degree)
idx = c.match_to_catalog_sky(catalog)
# Extract from the f850lp.cat catalog the rows indicated by the match
matches = list(idx[0])
f850lp = np.loadtxt('f850lp.cat')
chandra = np.loadtxt('chandra.cat')
out = []
j = 0
for i in matches:
out.append(np.concatenate(
[chandra[j],
f850lp[i]+ 2.5*np.log10(0.538)- (1.472*BV)]))
j = j+1
# Write the output to a file
np.savetxt('match_chandra.cat',out,
fmt='%d\t%.4f\t%.4f\t%.7f\t%.7f\t%d\t%.4f\t%.4f\t%.7f\t%.7f',
           header='chandraN\tchandraMAG\tchandraMAGERR\tchandraALPHA\tchandraDELTA\tf850lpN\tf850lpMAG\tf850lpMAGERR\tf850lpALPHA\tf850lpDELTA')
Explanation: The resulting distance is 34.67 Mpc. Given the known distance (16.5 Mpc), an apparent magnitude of 22.09 would be expected. We infer that some error in the calibration caused the derived distance to be more than twice the expected value.
Chandra
An image with a 98.55 ks exposure is downloaded from the Chandra archive:
<img src="chandra.jpeg" width="500">
Sextractor is run on the Chandra image without much configuration, and no magnitude calibration is done, since the only goal is to check whether there are matches. Of the 55 objects detected by sextractor, all have a match in the F850LP catalog.
End of explanation |
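One refinement worth noting (an illustrative aside, not part of the original analysis): match_to_catalog_sky always returns a nearest neighbour for every source, so it is a maximum-separation cut that really decides whether a Chandra detection has an optical counterpart. A minimal sketch, assuming a 1-arcsec tolerance:
# Keep only pairs closer than an assumed 1 arcsec tolerance
idx, sep2d, dist3d = c.match_to_catalog_sky(catalog)
good = sep2d.arcsec < 1.0
print('{} of {} Chandra sources have an F850LP counterpart within 1 arcsec'.format(good.sum(), len(good)))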
63 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3.4 build a spam classifier (a more challenging exercise)
3.4.1 Download examples of spam and ham from Apache SpamAssassin’s public datasets.
Downloaded 20021010 dataset
Unzip the datasets and familiarize yourself with the data format.
Step1: Use email module
Step2: Train test split
Step3: Preprocessing html to plain text
Step4: Find the spam email with text/html contents
Step5: Return email's content as plain text
Step6: We found an email, index 1047, for which no text is returned. It is spam/00467.5b733c506b7165424a0d4a298e67970f and, as you can see in the following, it does have content.
Step7: Throw in stemming
Step8: Transformer to convert emails to word counter
Step9: Create a pipeline
Step10: Apply the logistic regression
Step11: Precision and Recall score for test dataset | Python Code:
import os
import glob
HAM_DIR = os.path.join('datasets', 'easy_ham')
SPAM_DIR = os.path.join('datasets', 'spam')
ham_files = [name for name in sorted(os.listdir(HAM_DIR)) if len(name) > 20]
spam_files = [name for name in sorted(os.listdir(SPAM_DIR)) if len(name) > 20]
len(ham_files), ham_files[0], ham_files[-1]
len(spam_files), spam_files[0], spam_files[-1]
Explanation: 3.4 build a spam classifier (a more challenging exercise)
3.4.1 Download examples of spam and ham from Apache SpamAssassin’s public datasets.
Downloaded 20021010 dataset
Unzip the datasets and familiarize yourself with the data format.
End of explanation
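The notebook assumes the archives have already been fetched and unpacked into datasets/. A minimal download-and-extract sketch is given below; the publiccorpus URL and the exact archive names are assumptions based on the Apache SpamAssassin public corpus layout, not something taken from the original notebook.
import os
import tarfile
import urllib.request

CORPUS_ROOT = 'https://spamassassin.apache.org/old/publiccorpus/'     # assumed location
ARCHIVES = ['20021010_easy_ham.tar.bz2', '20021010_spam.tar.bz2']     # assumed file names

os.makedirs('datasets', exist_ok=True)
for name in ARCHIVES:
    path = os.path.join('datasets', name)
    if not os.path.isfile(path):
        urllib.request.urlretrieve(CORPUS_ROOT + name, path)
    with tarfile.open(path) as tf:    # extracts the easy_ham/ and spam/ directories
        tf.extractall(path='datasets')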
import email
import email.policy
SPM_PATH = './datasets'
def load_email(is_spam, filename, spam_path=SPM_PATH):
directory = 'spam' if is_spam else 'easy_ham'
with open(os.path.join(spam_path, directory, filename), 'rb') as f:
return email.parser.BytesParser(policy=email.policy.default).parse(f)
ham_email = [load_email(False, name) for name in ham_files]
spam_email = [load_email(True, name) for name in spam_files]
# print(ham_email[13].get_content().strip())
print(ham_email[13].get_payload()[1].get_content_type())
print(spam_email[6].get_content().strip())
def get_email_structure(email):
if isinstance(email, str):
return email
payload = email.get_payload()
if isinstance(payload, list):
return f'multipart({", ".join([get_email_structure(sub_email) for sub_email in payload])})'
else:
return email.get_content_type()
get_email_structure(ham_email[2])
ham_structures = list(map(get_email_structure, ham_email))
ham_structures.index('multipart(text/plain, application/pgp-signature)')
import pandas as pd
ham_df = pd.DataFrame({'type': ham_structures})
ham_df['type'].value_counts()
spam_structures = list(map(get_email_structure, spam_email))
spam_df = pd.DataFrame({'type': spam_structures})
spam_df['type'].value_counts()
for header, value in spam_email[0].items():
print(f'{header} : {value}')
spam_email[0]['Subject']
Explanation: Use email module
End of explanation
import numpy as np
from sklearn.model_selection import train_test_split
X = np.array(ham_email + spam_email)
y = np.concatenate([np.zeros(len(ham_email)), np.ones(len(spam_email))])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
Explanation: Train test split
End of explanation
import re
from html import unescape
def html_to_plain_text(html):
text = re.sub(r'<head.*?>.*?</head>', '', html, flags=re.M | re.S | re.I)
text = re.sub(r'<a\s.*?>', ' HYPERLINK ', text, flags=re.M | re.S | re.I)
text = re.sub(r'<.*?>', '', text, flags=re.M | re.S)
text = re.sub(r'(\s*\n)+', '\n', text, flags=re.M | re.S)
return unescape(text)
Explanation: Preprocessing html to plain text
End of explanation
html_spam_emails = [email for email in X_train[y_train == 1] if get_email_structure(email) == 'text/html']
sample_html_spam = html_spam_emails[7]
sample_html_spam.get_content().strip()[:1000]
print(html_to_plain_text(sample_html_spam.get_content())[:1000])
Explanation: Find the spam email with text/html contents
End of explanation
def email_to_text(email):
html = None
for part in email.walk():
ctype = part.get_content_type()
if not ctype in ("text/plain", "text/html"):
continue
try:
content = part.get_content()
except: # in case of encoding issues
content = str(part.get_payload())
if ctype == "text/plain":
return content
else:
html = content
if html:
return html_to_plain_text(html)
def email_to_text_2(email):
ret = []
for part in email.walk():
ctype = part.get_content_type()
try:
content = part.get_content()
except: # in case of encoding issues
content = str(part.get_payload())
ret.append((ctype, type(content), content[:200]))
return ret
def get_num_of_parts(email):
return len(list(email.walk()))
def count_plain_html_part(email):
return sum([part.get_content_type() in ("text/plain", "text/html") for part in email.walk()])
email_to_text_2(spam_email[466])
[(index, get_num_of_parts(email)) for index, email in enumerate(spam_email) if get_num_of_parts(email) > 1][:5]
[(index, count_plain_html_part(email)) for index, email in enumerate(X_train) if count_plain_html_part(email) == 0]
index = 1047
print(email_to_text(X_train[index]), '...', y_train[index])
Explanation: Return email's content as plain text
End of explanation
y_train[1047]
get_email_structure(X_train[1047])
for part in X_train[1047].walk():
print(part.get_content_type())
print(html_to_plain_text(str(part.get_payload()))[:200])
print(email_to_text(sample_html_spam)[:1000], '...')
Explanation: We found an email, index 1047, for which no text is returned. It is spam/00467.5b733c506b7165424a0d4a298e67970f and, as you can see in the following, it does have content.
End of explanation
import nltk
stemmer = nltk.PorterStemmer()
for word in ("Computations", "Computation", "Computing", "Computed", "Compute", "Compulsive"):
print(f'{word} => {stemmer.stem(word)}')
import urlextract
url_extractor = urlextract.URLExtract()
print(url_extractor.find_urls("Will it detect github.com and https://youtu.be/7Pq-S557XQU?t=3m32s"))
Explanation: Throw in stemming
End of explanation
from sklearn.base import BaseEstimator, TransformerMixin
from collections import Counter
class EmailToWordCounterTransformer(BaseEstimator, TransformerMixin):
def __init__(self, strip_headers=True, lower_case=True, remove_punctuation=True,
replace_urls=True, replace_numbers=True, stemming=True):
self.strip_headers = strip_headers
self.lower_case = lower_case
self.remove_punctuation = remove_punctuation
self.replace_urls = replace_urls
self.replace_numbers = replace_numbers
self.stemming = stemming
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
X_transformed = []
for email in X:
text = email_to_text(email) or ''
if self.lower_case:
text = text.lower()
if self.replace_urls and url_extractor is not None:
urls = sorted(url_extractor.find_urls(text, only_unique=True), key=lambda url: len(url), reverse=True)
for url in urls:
text = text.replace(url, ' URL ')
if self.replace_numbers:
text = re.sub(r'\d+(?:\.\d*(?:[eE]\d+)*)?', 'NUMBER', text)
if self.remove_punctuation:
text = re.sub(r'\W+', ' ', text, flags=re.M)
word_counts = Counter(text.split())
if self.stemming and stemmer is not None:
stemmed_word_counts = Counter()
for word, count in word_counts.items():
stemmed_word = stemmer.stem(word)
stemmed_word_counts[stemmed_word] += count
word_counts = stemmed_word_counts
X_transformed.append(word_counts)
return np.array(X_transformed)
X_few = X_train[:3]
X_few_wordcounts = EmailToWordCounterTransformer().fit_transform(X_few)
X_few_wordcounts
from scipy.sparse import csr_matrix
class WordCounterToVectorTransformer(BaseEstimator, TransformerMixin):
def __init__(self, vocabulary_size=1000):
self.vocabulary_size = vocabulary_size
def fit(self, X, y=None):
total_count = Counter()
for word_count in X:
for word, count in word_count.items():
total_count[word] += min(count, 10)
most_common = total_count.most_common()[:self.vocabulary_size]
self.most_common_ = most_common
self.vocabulary_ = {word: index+1 for index, (word, count) in enumerate(most_common)}
return self
def transform(self, X, y=None):
rows, cols, data = [], [], []
for row, word_count in enumerate(X):
for word, count in word_count.items():
rows.append(row)
# Here if a word is not in 'vocabulary_', then the column is 0.
# Seems like if multiple data has the same row and colmun, the data is the summation
# See the code in the next box
cols.append(self.vocabulary_.get(word, 0))
data.append(count)
return csr_matrix((data, (rows, cols)), shape=(len(X), self.vocabulary_size+1))
rows = [0, 0, 0]
cols = [0, 0, 1]
data = [3, 2, 1]
m = csr_matrix((data, (rows, cols)), shape=(1, 2))
m.toarray()
vocab_transformer = WordCounterToVectorTransformer(vocabulary_size=10)
X_few_vectors = vocab_transformer.fit_transform(X_few_wordcounts)
X_few_vectors
print(vocab_transformer.most_common_)
X_few_vectors.toarray()
vocab_transformer.vocabulary_
X_few_wordcounts[1].most_common()[:10]
Explanation: Transformer to convert emails to word counter
End of explanation
from sklearn.pipeline import Pipeline
preprocess_pipeline = Pipeline([
('email_to_wordcount', EmailToWordCounterTransformer()),
('wordcount_to_vector', WordCounterToVectorTransformer()),
])
X_train_transformed = preprocess_pipeline.fit_transform(X_train)
X_train_transformed.toarray().shape
Explanation: Create a pipeline
End of explanation
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
log_clf = LogisticRegression(solver='lbfgs', random_state=42)
score = cross_val_score(log_clf, X_train_transformed, y_train, cv=3, verbose=3)
score.mean()
Explanation: Apply the logistic regression
End of explanation
from sklearn.metrics import precision_score, recall_score, accuracy_score
X_test_transformed = preprocess_pipeline.transform(X_test)  # transform (not fit_transform): reuse the vocabulary fitted on the training set
log_clf = LogisticRegression(solver='lbfgs', random_state=42)
log_clf.fit(X_train_transformed, y_train)
y_pred = log_clf.predict(X_test_transformed)
y_test.shape
accuracy_score(y_test, y_pred)
precision_score(y_test, y_pred)   # sklearn expects (y_true, y_pred)
recall_score(y_test, y_pred)
from sklearn.metrics import precision_score, recall_score
X_test_transformed = preprocess_pipeline.transform(X_test)
log_clf = LogisticRegression(solver="lbfgs", random_state=42, max_iter=1000)
log_clf.fit(X_train_transformed, y_train)
y_pred = log_clf.predict(X_test_transformed)
print("Precision: {:.2f}%".format(100 * precision_score(y_test, y_pred)))
print("Recall: {:.2f}%".format(100 * recall_score(y_test, y_pred)))
y_train_pred = log_clf.predict(X_train_transformed)
accuracy_score(y_train, y_train_pred)
y_test_pred = log_clf.predict(X_test_transformed)
accuracy_score(y_test, y_test_pred)
Explanation: Precision and Recall score for test dataset
End of explanation |
64 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cleaning the INEGI mortality dataset
1. Introduction
Indicators derived from this dataset
Step1: Data download
All the data are contained in a single file
Step2: Exploring the dataset
Step3: Working subset
Step4: In the "PRESUNTO" field, code 2 identifies homicides.
Step5: Assigning causes of death
The dataset identifies 127 causes of death (all homicides, since only the cases where PRESUNTO is homicide were kept)
Step6: Building the standard dataset and exporting to Excel | Python Code:
descripciones = {
'P0813' : 'Homicidios Intencionales',
}
# Librerías utilizadas
import pandas as pd
import sys
import urllib
import os
import csv
import zipfile
from simpledbf import Dbf5
import matplotlib.pyplot as plt
%matplotlib inline
# Configuracion del sistema
print('Python {} on {}'.format(sys.version, sys.platform))
print('Pandas version: {}'.format(pd.__version__))
import platform; print('Running on {} {}'.format(platform.system(), platform.release()))
# URL Fuente
remote_path = r'http://www.beta.inegi.org.mx/contenidos/proyectos/registros/vitales/mortalidad/microdatos/defunciones/2016/defunciones_base_datos_2016_dbf.zip'
# Carpeta destino Local
local_path = r'D:\PCCS\00_RawData\01_CSV\INEGI\Defunciones\defunciones_base_datos_2016_dbf.zip'
Explanation: Cleaning the INEGI mortality dataset
1. Introduction
Indicators derived from this dataset:
ID | DESCRIPTION
----- | --------------
P0813 | Intentional homicides
End of explanation
# Descarga de archivo
if os.path.isfile(local_path):
print('Ya existe el archivo: {}'.format(local_path))
else:
print('Descargando {} ... ... ... ... ... '.format(local_path))
urllib.request.urlretrieve(remote_path, local_path) #
print('se descargó {}'.format(local_path))
# Descompresión de archivo
target = r'D:\PCCS\00_RawData\01_CSV\INEGI\Defunciones'
descomprimir = zipfile.ZipFile(local_path, 'r')
print('Iniciando descompresión')
descomprimir.extractall(target)
descomprimir.close
print('Descompresión terminada en {}'.format(target))
# Listado de archivos
files = os.listdir(target)
x = 0
for file in files:
print('{} - {}'.format(x, file))
x += 1
# Se utilizará el dataset contenido en el archivo DEFUN16.dbf (Posición 5)
path_to_dbf = r'{}\{}'.format(target,files[5])
dataset = Dbf5(path_to_dbf, codec='mbcs').to_dataframe()
dataset.head()
len(dataset)
# Seleccion de variables
x = 0
for i in dataset:
print('{} - {}'.format(x, i))
x += 1
# lista de variables seleccionadas
Variables = [0, 1, 6, 7, 10, 12, 16, 24, 26, 30, 43, 54, 55]
Variables = list(list(dataset)[i] for i in Variables)
Variables
dataset = dataset[Variables]
dataset.head()
# Tipos de datos en variables
dataset.dtypes
Explanation: Data download
All the data are contained in a single file
End of explanation
years = sorted(dataset['ANIO_OCUR'].unique())
yearsize = {}
for year in years:
yearsize[year] = len(dataset[dataset['ANIO_OCUR'] == year])
len(dataset)
# El set1 tiene la suma de las defunciones registradas entre 1923 y 1990
set1 = 0
for year in years:
# print(year)
if year > 1990:
break
set1 += yearsize[year]
set1
# El set 2 tiene la suma de las defunciones registradas entre 1991 y 2000
set2 = 0
for year in years:
if year < 1991:
continue
if year > 2000:
break
# print(year)
set2 += yearsize[year]
set2
yearsize2 = {'1923-1990':set1,
'1991-2000':set2}
for year in years:
if year < 2001:
continue
# print(year)
yearsize2[str(year)] = yearsize[year]
# Numero de defunciones registradas en cada periodo
for k,v in yearsize2.items():
print('{} : {}'.format(k, v))
Explanation: Exploring the dataset
End of explanation
#Subconjunto de años para el estudio
dataset = dataset.loc[dataset['ANIO_OCUR'].isin(range(2010, 2017))]
Explanation: Working subset
End of explanation
# Subconjunto de homicidios (El identificador 2 corresponde a homicidios)
dataset = dataset.loc[dataset['PRESUNTO'] == 2]
dataset.head()
for year in sorted(list(dataset['ANIO_OCUR'].unique())):
print('{} : {}'.format(year, len(dataset[dataset['ANIO_OCUR'] == year])))
Explanation: In the "PRESUNTO" field, code 2 identifies homicides.
End of explanation
len(dataset['CAUSA_DEF'].unique())
dataset['CAUSA_DEF'].unique()
# Se utilizará el dataset contenido en el archivo CATMINDE.dbf (Posición 3)
path_to_desc = r'{}\{}'.format(target,files[3])
descripciones = Dbf5(path_to_desc, codec='mbcs').to_dataframe()
descripciones.head()
#Asignacion de columna con descripciones de causa de defunción
dataframe = dataset.merge(descripciones, left_on='CAUSA_DEF', right_on = 'CLAVE')
dataframe.head()
Explanation: Assigning causes of death
The dataset identifies 127 causes of death (all homicides, since only the cases where PRESUNTO is homicide were kept)
End of explanation
# Concatenar claves estatales y municipales para obtener CVE_MUN
# Municipio donde se registró el deceso
dataframe['CVE_MUN_REGIS'] = dataframe.ENT_REGIS.map(str)+dataframe.MUN_REGIS
# Municipio donde ocurrió el deceso
dataframe['CVE_MUN_OCURR'] = dataframe.ENT_OCURR.map(str)+dataframe.MUN_OCURR
# Municipio donde ocurrió la lesión que provocó el deceso
dataframe['CVE_MUN_OCULES'] = dataframe.ENT_OCULES.map(str)+dataframe.MUN_OCULES
dataframe.head()
# Eliminar columnas redundantes
del(dataframe['ENT_REGIS'])
del(dataframe['MUN_REGIS'])
del(dataframe['ENT_OCURR'])
del(dataframe['MUN_OCURR'])
del(dataframe['ENT_OCULES'])
del(dataframe['MUN_OCULES'])
del(dataframe['CLAVE'])
# Renombrar nombre de la causa de defuncion
dataframe.rename(columns={'NOMBRE' : 'NOMBRE_CAUSA_DEF'}, inplace = True)
# Se asigna el municipio de ocurrencia como indice de la tabla
dataframe.set_index('CVE_MUN_OCURR', inplace=True)
dataframe.head()
#Reordenar Columnas
list(dataframe)
# Metadatos estándar
metadatos = {
'Nombre del Dataset': 'INEGI - Registros administrativos de mortalidad al año 2016',
'Descripcion del dataset': 'Originalmente, el formato de captación para las defunciones generales era una boleta colectiva, en la cual las fuentes informantes reportaban las defunciones que registraban durante el mes. A partir del año 1987, el formato principal es el certificado o acta de defunción y el cuaderno para defunciones accidentales y violentas del Ministerio Público.',
'Disponibilidad Temporal': '1923 a 2016',
'Periodo de actualizacion': 'Anual',
'Nivel de Desagregacion': 'Caso',
'Notas': None,
'Fuente': 'INEGI',
'URL_Fuente': 'http://www.beta.inegi.org.mx/proyectos/registros/vitales/mortalidad/',
'Dataset base': None,
}
metadatos = pd.DataFrame.from_dict(metadatos, orient='index', dtype=None)
metadatos.columns = ['Descripcion']
metadatos = metadatos.rename_axis('Metadato')
metadatos
variables = {
'ENT_REGIS': 'Entidad de registro.',
'MUN_REGIS': 'Municipio de registro.',
'ENT_OCURR': 'Entidad de ocurrencia.',
'MUN_OCURR': 'Municipio de ocurrencia.',
'CAUSA_DEF': 'Causa de la defunción (clave).',
'SEXO': 'Sexo del (la) fallecido (a).\n'
'1: Hombre\n'
'2: Mujer\n'
'9: No especificado',
'ANIO_OCUR': 'Año de ocurrencia.',
'ESCOLARIDA': 'Nivel de escolaridad del (la) fallecido (a) (escolaridad).\n'
'1: Sin escolaridad\n'
'2: Preescolar\n'
'3: Primaria incompleta\n'
'4: Primaria completa\n'
'5: Secundaria incompleta\n'
'6: Secundaria completa\n'
'7: Bachillerato o preparatoria incompleto\n'
'8: Bachillerato o preparatoria completo\n'
'9: Profesional\n'
'10: Posgrado\n'
'88: No aplica a menores de 3 años\n'
'99: No especificado',
'PRESUNTO': 'Tipo de defunción (presunto). 2: Homicidio',
'ASIST_MEDI': 'Condición de atención médica.\n'
'1: Con Asistencia Medica\n'
'2: Sin Asistencia Medica\n'
'9: No especificada',
'VIO_FAMI': 'Condición de violencia familiar.\n'
'1: Hubo violencia familiar\n'
'2: No hubo violencia familiar\n'
'2: No aplica cuando no es homicidio\n'
'9: No especificado',
'ENT_OCULES': 'Entidad de ocurrencia de la lesión.',
'MUN_OCULES': 'Municipio de ocurrencia de la lesión.',
}
variables = pd.DataFrame.from_dict(variables, orient='index', dtype=None)
variables.columns = ['Descripcion']
variables = variables.rename_axis('Mnemonico')
variables
variables.loc['VIO_FAMI']   # look up one variable's description by its mnemonic (plain column indexing would raise a KeyError)
# Guardar el dataset
file = r'D:\PCCS\01_Dmine\Datasets\INEGI\Defunciones\defunciones.xlsx'
writer = pd.ExcelWriter(file)
dataframe.to_excel(writer, sheet_name = 'DATOS')
metadatos.to_excel(writer, sheet_name = 'METADATOS')
variables.to_excel(writer, sheet_name = 'VARIABLES')
writer.save()
print('---------------TERMINADO---------------')
Explanation: Building the standard dataset and exporting to Excel
End of explanation |
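As a purely illustrative sketch (not part of the original notebook), the cleaned table can be tabulated into the P0813 indicator, counting intentional homicides per municipality of occurrence and year:
# Count homicides per municipality of occurrence (the index) and year of occurrence
p0813 = (dataframe.reset_index()
                  .groupby(['CVE_MUN_OCURR', 'ANIO_OCUR'])
                  .size()
                  .unstack(fill_value=0))
p0813.head()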
65 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 12
Step1: Computers by definition cannot generate truly random numbers. The Mersenne Twister is a widely-used algorithm for generating pseudo random numbers from a deterministic process. That is, while the numbers generated from the algorithm are not random in the literal sense, they exhibit distributional qualities that make them indistinguishable from truly random numbers.
A nice feature of pseudo random numbers is that they can be replicated by specifying the seed, or starting point, for the random number generating algorithm.
Step2: Example
Draw 500 values each from the $\mathcal{N}(0,1)$ and $\mathcal{N}(0,2^2)$ distributions. Plot.
Step3: The white noise process
In the previous example, we created two variables that stored draws from normal distributions with means of zero but with different standard deviations. Both of the variables were simulations of white noise processes. A white noise process is a random variable $\epsilon_t$ with constant mean and constant variance. We are concerned only with zero-mean white noise processes and we'll often denote that a variable is a zero-mean white noise process with the following shorthand notation
Step4: Example
Simulate an AR(1) process for 51 periods using the following parameter values
Step5: Notice that if $-1< \rho < 1$, then $\mu$ is the expected value of the process. That is, when $-1< \rho < 1$, the process will fluctuate around $\mu$. But if $\rho>1$ or $\rho<-1$, the process will explode away from $\mu$.
Step6: Example
Construct a $2\times2$ grid of AR(1) processes simulated for 51 periods with $\sigma = 1$ and $\mu = 0$.
Use the following values for $\rho$
Step7: The random walk process
The random walk process is an AR(1) process with $\rho=1$ | Python Code:
# Create an array with 5 draws from the normal(0,1) distribution and print
np.random.normal(size=5)
# Create an array with 5 draws from the normal(0,1) distribution and print
np.random.normal(size=5)
Explanation: Class 12: Stochastic Time Series Processes
Simulating normal random variables with Numpy
The numpy.random module has a bunch of functions for generating random variables and evaluating probability and cumulative density functions for a wide variety of probability distributions. Learn more about the module here:
https://docs.scipy.org/doc/numpy/reference/routines.random.html
We're going to make use of the numpy.random.normal() function to create arrays of random draws from the normal distribution. The function takes three arguments:
* loc: the mean of the distribution (default=0)
* scale: the standard deviation of the distribution (default=1)
* size: how many numbers to draw (default = None)
Evidently the default is to draw numbers from the standard normal distribution.
End of explanation
# Set the seed for the random number generator
np.random.seed(129)
# Create an array with 5 draws from the normal(0,1) distribution and print
np.random.normal(size=5)
Explanation: Computers by definition cannot generate truly random numbers. The Mersenne Twister is a widely-used algorithm for generating pseudo random numbers from a deterministic process. That is, while the numbers generated from the algorithm are not random in the literal sense, they exhibit distributional qualities that make them indistinguishable from truly random numbers.
A nice feature of pseudo random numbers is that they can be replicated by specifying the seed, or starting point, for the random number generating algorithm.
End of explanation
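As a small illustrative check (not in the original notebook), re-seeding the generator reproduces exactly the same draws:
# Two runs with the same seed produce identical pseudo random numbers
np.random.seed(129)
first = np.random.normal(size=5)
np.random.seed(129)
second = np.random.normal(size=5)
print(np.array_equal(first, second))   # True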
# Set the seed for the random number generator
np.random.seed(129)
# Create two arrays:
# x: 500 draws from the normal(0,1) distribution
# y: 500 draws from the normal(0,2) distribution
x = np.random.normal(loc=0,scale=1,size=500)
y = np.random.normal(loc=0,scale=2,size=500)
# Plot
plt.plot(x,lw=3,alpha = 0.6,label='$\sigma=1$')
plt.plot(y,lw=3,alpha = 0.6,label='$\sigma=2$')
plt.grid(linestyle=':')
plt.legend(ncol=2,loc='lower right')
Explanation: Example
Draw 500 values each from the $\mathcal{N}(0,1)$ and $\mathcal{N}(0,2^2)$ distributions. Plot.
End of explanation
# Simulate an AR(1) process for 51 periods. Set the RNG seed to 129
np.random.seed(129)
T = 51
x0=0
mu=1
rho=0.5
sigma=1
x = np.zeros(T)
x[0] = x0
# draw random numbers for white noise process
eps= np.random.normal(loc=0,scale=sigma,size=T-1)
for t in range(T-1):
x[t+1] = mu*(1-rho) + rho*x[t] + eps[t]
# Plot
plt.plot(x,lw=3,alpha = 0.6,label='$\sigma=1$')
plt.grid(linestyle=':')
Explanation: The white noise process
In the previous example, we created two variables that stored draws from normal distributions with means of zero but with different standard deviations. Both of the variables were simulations of white noise processes. A white noise process is a random variable $\epsilon_t$ with constant mean and constant variance. We are concerned only with zero-mean white noise processes and we'll often denote that a variable is a zero-mean white noise process with the following shorthand notation:
\begin{align}
\epsilon_t & \sim \text{WN}(0,\sigma^2),
\end{align}
where $\sigma^2$ is the variance of the process. Strictly speaking, a white noise process can follow any distribution as long as the mean and variance are constant, but we'll concentrate exclusively on white noise processes drawn from the normal distribution.
The AR(1) process
A random variable $X_t$ is an autoregressive of order 1 process or AR(1) process if it can be written in the following form:
\begin{align}
X_t & = (1-\rho)\mu + \rho X_{t-1} + \epsilon_t,
\end{align}
where $\rho$ and $\mu$ are constants and $\epsilon \sim \text{WN}(0,\sigma^2)$. The AR(1) process is the stochastic analog of the first-order difference equation.
Example
Simulate an AR(1) process for 51 periods using the following parameter values:
\begin{align}
\rho & = 0.5\
\mu & = 1 \
\sigma & = 1
\end{align}
End of explanation
# Simulate an AR(1) process for 51 periods. Set the RNG seed to 129
np.random.seed(129)
T = 51
x0=0
mu=1
rho=1.5
sigma=1
x = np.zeros(T)
x[:] = np.NAN
x[0] = x0
# draw random numbers for white noise process
eps= np.random.normal(loc=0,scale=sigma,size=T-1)
for t in range(T-1):
x[t+1] = mu*(1-rho) + rho*x[t] + eps[t]
# Plot
plt.plot(x,lw=3,alpha = 0.6,label='$\sigma=1$')
plt.grid(linestyle=':')
Explanation: Example
Simulate an AR(1) process for 51 periods using the following parameter values:
\begin{align}
\rho & = 1.5\
\mu & = 1 \
\sigma & = 1
\end{align}
End of explanation
def ar1(mu=0,rho=0,sigma=1,x0=0,T=25):
'''Function for simulating an AR(1) process for T periods
Args:
mu (float): mean of the AR(1) process
rho (float): autoregressive parameter
sigma (float): standard deviation of the white noise process
x0 (float): initial value of the process
T (int): number of periods to simulate
Returns:
numpy array
'''
# initialize x array
x = np.zeros(T)
x[0] = x0
# draw random numbers for white noise process
eps= np.random.normal(loc=0,scale=sigma,size=T-1)
for t in range(T-1):
x[t+1] = mu*(1-rho) + rho*x[t] + eps[t]
return x
Explanation: Notice that if $-1< \rho < 1$, then $\mu$ is the expected value of the process. That is, when $-1< \rho < 1$, the process will fluctuate around $\mu$. But if $\rho>1$ or $\rho<-1$, the process will explode away from $\mu$.
End of explanation
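A quick illustrative check of this claim, using the ar1 function defined above (the parameter values here are arbitrary):
# For |rho| < 1 the sample mean of a long simulation should be close to mu
np.random.seed(129)
long_run = ar1(mu=2, rho=0.5, sigma=1, x0=2, T=10000)
print(long_run.mean())   # close to mu = 2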
fig = plt.figure(figsize=(12,8))
np.random.seed(129)
y = ar1(mu=0,rho=0,sigma=1,x0=0,T=51)
ax1 = fig.add_subplot(2,2,1)
ax1.plot(y,lw=3,alpha=0.7)
ax1.set_title('$X_t = \epsilon_t$')
ax1.grid()
np.random.seed(129)
y = ar1(mu=0,rho=0.5,sigma=1,x0=0,T=51)   # rho = 0.5 to match the panel title
ax2 = fig.add_subplot(2,2,2)
ax2.plot(y,lw=3,alpha=0.7)
ax2.set_title('$X_t = 0.5\cdot X_{t-1} + \epsilon_t$')
ax2.grid()
np.random.seed(129)
y = ar1(mu=0,rho=0.9,sigma=1,x0=0,T=51)
ax3 = fig.add_subplot(2,2,3)
ax3.plot(y,lw=3,alpha=0.7)
ax3.set_title('$X_t = 0.9\cdot X_{t-1} + \epsilon_t$')
ax3.grid()
np.random.seed(129)
y = ar1(mu=0,rho=-0.5,sigma=1,x0=0,T=51)
ax4 = fig.add_subplot(2,2,4)
ax4.plot(y,lw=3,alpha=0.7)
ax4.set_title('$X_t = -0.5\cdot X_{t-1} + \epsilon_t$')
ax4.grid()
Explanation: Example
Construct a $2\times2$ grid of AR(1) processes simulated for 51 periods with $\sigma = 1$ and $\mu = 0$.
Use the following values for $\rho$:
* Top-left: $\rho=0$
* Top-right: $\rho=0.5$
* Lower-left: $\rho=0.9$
* Lower-right: $\rho=-0.5$
Be sure to use the same seed for each simulation so you can see how changing $\rho$ affects the output
End of explanation
np.random.seed(129)
for i in range(7):
plt.plot(ar1(rho=1,T=501))
plt.grid()
plt.title('Seven random walk processes')
Explanation: The random walk process
The random walk process is an AR(1) process with $\rho=1$:
\begin{align}
X_t = X_{t-1} + \epsilon_t
\end{align}
The random walk process has an important place in finance since the evidence suggests that stock prices follow a random walk process.
Example
Simulate 7 random walk processes for 501 periods. Set $\sigma = 1$. Plot all 7 simulated processes on the same axes.
End of explanation |
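An illustrative aside (not in the original notebook): because $X_t = X_{t-1} + \epsilon_t$, taking first differences of a simulated random walk recovers the underlying white noise shocks:
# First differences of a random walk are just the white noise process
np.random.seed(129)
walk = ar1(mu=0, rho=1, sigma=1, x0=0, T=501)
steps = np.diff(walk)
print(steps.mean(), steps.std())   # roughly 0 and 1
plt.plot(steps, lw=1, alpha=0.7)
plt.title('First differences of a simulated random walk')
plt.grid()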
66 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Widgets and Interactions
Step1: Add to the function to allow amplitude to be varied and add in an additional slider to vary both f and a
may want to limit ylim
Step2: Climate data
Step3: Plotting some live (ish) earthquake data...
Download the data first
Step4: This is great but one cool enhancement would be to make the size of the point represent the magnitude of the earthquake.
Here's one way to do it | Python Code:
!conda install -y netcdf4
from netCDF4 import Dataset, num2date, date2num
from numpy import *
import matplotlib.pyplot as plt
%matplotlib inline
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
x = linspace(0, 1, 100) # generates a hundred values between 0 and 1
f = 2
a = 3
plt.plot(x, sin(2*pi*x*f))
def pltsin(f):
plt.plot(x, sin(2*pi*x*f))
pltsin(0.5)
Explanation: Widgets and Interactions
End of explanation
interact(pltsin, f=(1, 10, 0.2))   # pltsin takes only f, so only an f slider is needed
def pltsina(f, a):
plt.plot(x, a*sin(2*pi*x*f))
plt.ylim(-10.5, 10.5)
interact(pltsina, f=(1, 10, 0.2), a = (1, 10, 0.2))
Explanation: Add to the function to allow amplitude to be varied and add in an additional slider to vary both f and a
may want to limit ylim
End of explanation
f=Dataset ('ncep-data/air.sig995.2013.nc') # get individual data set out of the right folder
air = f.variables['air'] # get variable
plt.imshow(air[0,:,:]) # display first timestep
# Create function to browse through the days
def sh(time):
plt.imshow(air[time,:,:])
# Now make it interactive
interact(sh, time=(0, 355, 1))
# Browse variable
def sh(time =0, var='air', year = '2013'):
f=Dataset('ncep-data/'+var+'.sig995.'+year+'.nc')
vv=f.variables[var]
plt.imshow(vv[time,:,:])
#Give a list of variables
variabs =['air', 'uwnd', 'vwnd', 'rhum']
year = ['2013', '2014', '2015']
# Now interact with it
interact(sh, time=(0, 355, 1), year = year, var=variabs)
help(sh)
from mpl_toolkits.basemap import Basemap
# create north polar sterographic projection
m=Basemap(projection='npstere', boundinglat=60, lon_0=0, resolution ='l')
m.fillcontinents(color='gray', lake_color='gray')
m.drawparallels(arange(-80.,81.,20.))
m.drawmeridians(arange(-180.,181.,20.))
m.drawmapboundary(fill_color='white')
# Set up some variables
lon = f.variables['lon'][:]
lat = f.variables['lat'][:]
lon, lat = meshgrid(lon, lat)
x, y = m(lon, lat)
def sh(time =0, var='air', year = '2013'):
f=Dataset('ncep-data/'+var+'.sig995.'+year+'.nc')
vv=f.variables[var]
tt=f.variables['time']
dd=num2date(tt[time], tt.units)
m.fillcontinents(color='gray', lake_color='gray')
m.drawparallels(arange(-80.,81.,20.))
m.drawmeridians(arange(-180.,181.,20.))
m.drawmapboundary(fill_color='white')
cs = m.contourf(x, y, vv[time,:,:]-273.15)
interact(sh, year=year, time=(0,355,1), var=variabs)
my_map = Basemap (projection='merc', lat_0=0, lon_0=30,
resolution='h', area_thresh=1000.0,
llcrnrlon=29, llcrnrlat=-1,
urcrnrlon=31, urcrnrlat=1)
# area threshold states how rivers etc look - scale, resolution sets resolution, llcrnlon etc sets box,
# lat and lon decide where you look
my_map.drawcoastlines()
my_map.drawcountries()
my_map.fillcontinents(color='coral')
my_map.drawmapboundary()
my_map.drawmeridians(arange(0,360,30))
my_map.drawparallels(arange(-90, 90, 30))
lon=30
lat=0
x,y=my_map(lon, lat)
my_map.plot(x, y, 'bo', markersize=7.2)
plt.show() # here the function that decides actually plots
# This just lets the output of the following code samples
# display inline on this page, at an appropirate size
from pylab import rcParams
# Create a simple basemap
my_map = Basemap (projection='ortho', lat_0=50, lon_0=0,
resolution='l', area_thresh=1000.0)
my_map.drawcoastlines()
my_map.drawcountries()
my_map.fillcontinents(color='red', lake_color='gray')
plt.show()
Explanation: Climate data
End of explanation
#Check the first few lats and longs
import csv
# Open the earthquake data file.
filename = '1.0_week.csv'
# Create empty lists for the latitudes and longitudes.
lats, lons, mags = [], [], []
# Read through the entire file, skip the first line,
# and pull out just the lats and lons.
with open(filename) as f:
# Create a csv reader object.
reader = csv.reader(f)
# Ignore the header row.
next(reader)
# Store the latitudes and longitudes in the appropriate lists.
for row in reader:
lats.append(float(row[1]))
lons.append(float(row[2]))
mags.append(float(row[4]))
# Display the first 5 lats and lons.
print('lats', lats[0:5])
print('lons', lons[0:5])
print('mags', mags[0:5])
### And now create a plot of these on a map projection
import csv
# Open the earthquake data file.
filename = '1.0_week.csv'
# Create empty lists for the latitudes and longitudes.
lats, lons, mags = [], [], []
# Read through the entire file, skip the first line,
# and pull out just the lats and lons.
with open(filename) as f:
# Create a csv reader object.
reader = csv.reader(f)
# Ignore the header row.
next(reader)
# Store the latitudes and longitudes in the appropriate lists.
for row in reader:
lats.append(float(row[1]))
lons.append(float(row[2]))
mags.append(float(row[4]))
# --- Build Map ---
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
eq_map = Basemap(projection='robin', resolution = 'l', area_thresh = 1000.0,
lat_0=52, lon_0=0)
eq_map.drawcoastlines()
eq_map.drawcountries()
eq_map.fillcontinents(color = 'coral')
eq_map.drawmapboundary()
eq_map.drawmeridians(np.arange(0, 360, 30))
eq_map.drawparallels(np.arange(-90, 90, 30))
min_marker_size = 1
for lon, lat, mag in zip(lons, lats, mags):
    x, y = eq_map(lon, lat)
    msize = mag * min_marker_size
    # Colour-code the marker by magnitude: strong quakes red, moderate green, small yellow
    if mag >= 5.0:
        eqcolor = 'ro'
    elif mag >= 3.0:
        eqcolor = 'go'
    else:
        eqcolor = 'yo'
    eq_map.plot(x, y, eqcolor, markersize=msize)
plt.show()
Explanation: Plotting some live (ish) earthquake data...
Download the data first: http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/1.0_week.csv
This will download a file locally- move it into your working directory. Alternatively, use the historic dataset provided in this repo.
End of explanation
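A minimal sketch of fetching the feed programmatically instead of downloading it by hand (the URL is the one quoted above; on Python 2 use urllib.urlretrieve instead):
# Download the USGS weekly earthquake feed into the working directory
import urllib.request
url = 'http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/1.0_week.csv'
urllib.request.urlretrieve(url, '1.0_week.csv')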
x,y
Explanation: This is great but one cool enhancement would be to make the size of the point represent the magnitude of the earthquake.
Here's one way to do it:
Read the magnitudes into a list along with their respective lat and long
Loop through the list, plotting one point at a time
As the magnitudes start at 1.0, you can just use the magnitude directly as the scale factor
To get the marker size, multiply the magnitude by the smallest dot you want on the map.
Add an extra enhancement of colour:
make small earthquakes one colour (say yellow or green) and large ones another (say red)
See if you can get similar data, perhaps for Whale sightings, and plot those on a map.
You might even have some of your own data to plot..
End of explanation |
67 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Processing in Python
Tanmoy Dasgupta
[email protected] | Assistant Professor | Department of Electrical Engineering | Techno India University, Kolkata
I colud not get the sepll checekr wroikng. So this ntoebook mgiht cnotain erorrs / toyps.
This tutorial is supposed to be an introduction to the different scientific packages in python that can be utilized to perform different tasks related to image processing. I have inherently assumed that the reader already has good exposure on core Python and some exposure on Numpy, Scipy and Matplotlib. It is also assumed that the reader knows how to start IPython Notebooks.
This notebook contains materials that are useful and new. However, things that are useful are not new and the things that are new are not always useful. Feel free to improve it and send me suggestions.
Packages You Need
Python 2.7 or Python 3.4
Choose Python 2.7 if you want to settle with the past. Choose 3.4 if you want to go with the future.
NumPy
NumPy is the fundamental package for scientific computing with Python. It contains among other things
Step1: In case you didn't have any error so far, you're good to go!
What is an image anyway?
Short answer: It is just a 2 dimensional array (grayscale image) or a set of three 2 dimensional arrays (colour image).
Long answer
Loading an Image as a Numpy array
Step2: Check the data type and the size of the array $A$
Step3: So, our image is a colour image and it has a resolution of $400 \times 267$. It is imported as an N-dimensional array object available from numpy.
Step4: Now, let us segregate the Red, Green and the Blue channels
Step5: Now that we know how to read and display an image. Let us do this
Step6: See! A random image! Now let us create a random color image!
Step7: Image Enhancement Techniques
The principal objective of Image Enhancement is to process an image so that the result is more suitable than the original image for a particular application.
However, there are mainly TWO approaches towards image enhancement. A spatial domain approach and a frequency domain approach.
Spatial Domain Approach
It refers to the image plane itself and involves direct manipulation of the pixels of an image.
Frequency Domain Technique
Frequency domain processing techniques are based on modifying the Fourier Transform of an image.
Spatial Domain Approaches
Image processing functions in the spatial domain are often of the form $ g(x,y) = \mathcal{T}[f(x, y)]$, where, $f(x, y)$ is the input image and $g(x, y)$ is the processed output image. $\mathcal{T}$ is an operation on $f$ defined over some neighbourhood of the pixel at the location $(x, y)$. Usually a neighbourhood of $3\times 3$ (or sometimes $1\times 1$) is assumed about the pixel at $(x, y)$.
Spatial domain techniques include Point processing, Image subtraction, Spatial filtering, Image averaging, etc.
Point processing includes Contrast stretching, Gray-level slicing, Bit-plane slicing, Histogram processing, etc.
Spatial filtering includes Low pass filtering, Median filtering, High-pass filtering, etc.
Histogram and Contrast
The word histogram in the context of an image simply means a histogram plot of the pixel intensity vs number of pixels.
Now, let us go back to the original image of the Macaw. Let us plot the histogram of the pixel intensity. First convert the original image into a grayscale image. This RGB to grayscale conversion would use the formula (more on this formula later)
$$X_{Gray} = [0.299\quad 0.587\quad 0.114] \cdot \left[\begin{array}{c}
X_{R}\
X_{G}\
X_{B}
\end{array}\right].$$
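A direct NumPy sketch of this weighted sum, applied to the macaw array A loaded earlier (illustrative; the notebook itself uses PIL for the conversion below):
# Weighted RGB-to-grayscale conversion done by hand
A_gray_manual = (0.299*A[:, :, 0] + 0.587*A[:, :, 1] + 0.114*A[:, :, 2]).astype('uint8')
plt.imshow(A_gray_manual, cmap=plt.cm.gray)
plt.show()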
Step8: Let us now narrow the contrast of the image
Contrast Stretching
The possible causes for low contrast images are
1. poor illumination
2. lack of dynamic range in imaging sensor
3. wrong setting of the lens aperture during image acquisition.
Contrast stretching attempts to increase the dynamic range of the gray levels of the image being processed. For a neighbourhood of size $1 \times 1$, contrast stretching is usually done by a gray level transformation of the form $s = \mathcal{W}[r]$, where $r$ is the gray level of $f(x, y)$ at $(x, y)$, $s$ is the gray level of $g(x, y)$ at $(x, y)$ and $\mathcal{W}$ is a gray-level transformation function.
Now, let us load a low contrast image.
Step9: As we can see that the image has low contrast, we will use the following transformation function to stretch the contrast of the image.
$$s=\mathcal{W}(r)=\begin{cases}
0, & r<103\
\frac{255}{132}(r-103), & 103\le r\le235\
255, & r>235
\end{cases}$$
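A minimal sketch of this piecewise stretch (here low_img is a stand-in name for the low-contrast grayscale array; clipping to [0, 255] reproduces the two flat segments):
# Linear contrast stretching with saturation below 103 and above 235
def stretch(r, r_min=103, r_max=235):
    s = (r.astype(float) - r_min) * 255.0 / (r_max - r_min)
    return np.clip(s, 0, 255).astype('uint8')

stretched = stretch(low_img)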
Step10: See that, now the pixel intensities are spread over the wider range $[0, 255]$. This is known as linear contrast stretching.
Negative
If you subtract the pixel intensities of the original image from 255, what you get is essentially the negative of the image.
$$ X_{neg} = \left[\begin{array}{ccc}
255 & \cdots & 255\
\vdots & \ddots & \vdots\
255 & \cdots & 255
\end{array}\right]_{m\times n} - X_{m\times n}.$$
The transformation function in this case looks like this.
Step11: Compare the histogram of the negative with that of the original.
Dynamic Range Compression
to be done later
Power Law (Gamma) Transformations / Gamma Corrections
Gamma correction involves a nonlinear transformation of the form $s = \mathcal{W}[r] = 255\,c\,\left(\dfrac{r}{255}\right)^\gamma$, where, $c$ and $\gamma$ are positive constants. The following plot shows a $r$-vs-$s$ plot for different values of $\gamma$.
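A short illustrative sketch of the power-law transformation on an 8-bit grayscale array (c = 1 assumed; img is a stand-in name):
# Gamma correction: gamma < 1 brightens, gamma > 1 darkens
def gamma_correct(img, gamma, c=1.0):
    return (255.0 * c * (img / 255.0)**gamma).astype('uint8')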
Step12: Gray-level Slicing
Sometimes we need to highlight a specific range of gray levels in an image. Possible application areas are finding masses of water in satellite imagery, enhancement of flaws in x-ray images, etc.
There are two basic approaches towards gray-level slicing.
1. Binary Thresholding
Step13: Now let us use these ideas in real life!
Load Scikit-image. It has built-in image data sets. More at http
Step14: By means of visual inspection we find that the text in the scanned page has intensity values higher than $150$ and the background has intensity values lower than that.
(N.B. This is a very crude method! We will automate this later.)
So, we would consider a thresholding function that will search the image pixel by pixel. If the intensity of a pixel is greater than or equal to 150, it will be assigned a value of 255 and if its intensity falls below 150, a zero will be assigned in its place.
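A one-line sketch of that hard threshold (page is a stand-in name for the scanned-page array):
# Pixels at or above the threshold become white, everything else black
binary = np.where(page >= 150, 255, 0).astype('uint8')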
Step15: Change the value of the threshold in the above programme and see the changes!
Now consider gradual thresholding. We consider that the region of interest lies between intensity values of 100 and 170.
Step16: Bit Plane Slicing
Sometimes it is desirable to highlight the contribution made by specific bits to the total image appearance.
The image can be imagined to be composed of Eight 1-bit planes -- Plane 0 for the LSB plane and Plane 7 for the MSB.
The higher order bits contain visually significant data, the lower order plane contain more subtle details.
This can be accomplished by doing a Bitwise AND operation. For example, say the intensity of a pixel is 246 in decimal. In binary this would be $(1111\,0110)_2$. So in order to find the value of the 6th bit, one has to simply do this $(1111\,0110)_2 \odot (0100\,0000)_2$. The result will simply produce the value of the 6th bit. In other words, $246 \odot 64$ will give you the value of the 6th bit.
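A minimal sketch of extracting a single bit plane with a bitwise AND (img is a stand-in name for an 8-bit grayscale array):
# Bit plane k (0 = LSB, 7 = MSB); pixels with that bit set are shown white
def bit_plane(img, k):
    return ((np.bitwise_and(img, 1 << k) > 0) * 255).astype('uint8')

print(246 & 64)   # 64, i.e. bit 6 of 246 is set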
Step17: Spatial Filtering
Low pass filters attenuate or eliminate high frequency components in the Fourier domain. Low pass filtering gives rise to image blurring.
High pass filters attenuate or eliminate low frequency components in the Fourier domain. High pass filtering gives rise to sharpening of edges and other sharp details.
How to implement?
Utilize suitable 2D masks of suitable size, e.g. $3 \times 3$, $5 \times 5$ or $7 \times 7$.
In most of our use cases, we shall restrict ourselves to $3 \times 3$ masks. The mask is applied to certain pixels. Upon application, the mask calculates a weighted sum of the neighbourhood of the concerned pixel and the result substitutes the original pixel. Here is how it works.
Consider that there is a pixel of intensity $z_5$. The $3 \times 3$ neighbourhood of the pixel can be seen as $$\left[\begin{array}{ccc}
z_{1} & z_{2} & z_{3}\
z_{4} & z_{5} & z_{6}\
z_{7} & z_{8} & z_{9}
\end{array}\right].$$
Now consider that the mask that is applied on the pixel with intensity $z_5$ looks like $$\left[\begin{array}{ccc}
w_{1} & w_{2} & w_{3}\
w_{4} & w_{5} & w_{6}\
w_{7} & w_{8} & w_{9}
\end{array}\right].$$
Then, it will substitute $z_5$ by $w_1 z_1 + w_2 z_2 + \cdots + w_9 z_9$. Thus, $$z_{5,\textrm{new}}=\sum_{i=1}^9 w_i z_i.$$
Thus,
$$\left[\begin{array}{ccc}
z_{1} & z_{2} & z_{3}\
z_{4} & z_{5} & z_{6}\
z_{7} & z_{8} & z_{9}
\end{array}\right] \otimes
\left[\begin{array}{ccc}
w_{1} & w_{2} & w_{3}\
w_{4} & w_{5} & w_{6}\
w_{7} & w_{8} & w_{9}
\end{array}\right] =
\left[\begin{array}{ccc}
z_{1} & z_{2} & z_{3}\
z_{4} & \sum_{i=1}^9 w_i z_i & z_{6}\
z_{7} & z_{8} & z_{9}
\end{array}\right].$$
It does not make any changes to any other pixel. The mask is centred on the image pixel whose new intensity value is to be calculated. This calculation is performed for each pixel separately by moving the mask to centre it on the pixel under consideration.
Smoothing spatial filters
Low pass spatial filtering
Examples of low pass spatial filter masks are
$$\mathcal{L}_1 = \dfrac{1}{9}\left[\begin{array}{ccc}
1 & 1 & 1\
1 & 1 & 1\
1 & 1 & 1
\end{array}\right]; \quad\quad
\mathcal{L}_2 = \dfrac{1}{16}\left[\begin{array}{ccc}
1 & 2 & 1\
2 & 4 & 2\
1 & 2 & 1
\end{array}\right]$$
A low pass filter must have all positive coefficients.
For a low pass spatial filter mask shown as $\mathcal{L}_1$, the operation is also popularly termed as neighbourhood averaging. This averaging causes blurring and loss of sharpness.
For a filter mask shown in $\mathcal{L}_2$, it is called weighted averaging.
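A minimal sketch of neighbourhood averaging with the $\mathcal{L}_1$ mask, using scipy.ndimage (img is a stand-in name for a grayscale array):
# Convolve the image with the 3x3 averaging mask
from scipy import ndimage
L1 = np.ones((3, 3)) / 9.0
smoothed = ndimage.convolve(img.astype(float), L1)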
Median filtering
Median filters are nonlinear (why?) filters employed with an objective of noise reduction, without blurring.
$$\underset{\textrm{image section under consideration}}{\underbrace{\left[\begin{array}{ccc}
z_{1} & z_{2} & z_{3}\
z_{4} & z_{5} & z_{6}\
z_{7} & z_{8} & z_{9}
\end{array}\right]}} \underset{\textrm{median filtering}}{\Rightarrow}
\underset{\textrm{result of median filtering}}{\underbrace{\left[\begin{array}{ccc}
z_{1} & z_{2} & z_{3}\
z_{4} & \textrm{med}{z_{5}} & z_{6}\
z_{7} & z_{8} & z_{9}
\end{array}\right]}},$$
where $\textrm{med}{z_{5}}$ is the median of $z_1, z_2, \cdots, z_9$. The median value can be easily calculated by arranging $z_1, z_2, \cdots, z_9$ in ascending order of magnitude and then finding the value that is in the middle position.
This filter is most effective when the noise pattern consists of strong, spike-like (impulse) components and it is of utmost importance to preserve edge sharpness.
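A minimal sketch of a 3x3 median filter using scipy.ndimage (noisy is a stand-in name for a corrupted grayscale array):
# Replace each pixel by the median of its 3x3 neighbourhood
from scipy import ndimage
denoised = ndimage.median_filter(noisy, size=3)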
Now we shall use scikit-image. There are many built in filters. Check http
Step18: See that the low pass filter significantly reduces the noise level.
Now let us apply median filtering to the same image.
Step19: As one can easily see, an image corrupted with speckle noise can be better denoised with mean filtering. Now check an image with salt and pepper noise.
Step20: See the difference!
Sharpening spatial filters
Derivative filters
The differentiation operation is expected to sharpen an image.
One can use either first derivative or second derivative information.
Digital approximation of first derivative
Step21: Another Example
Step22: Laplacian filter
A second order derivative filter can be implemented by employing a Laplacian mask. The Laplacian of an image function $f(x, y)$ of two variables is defined as $\nabla ^2 f(x, y) = \dfrac{\partial ^2f(x,y)}{\partial x^2} + \dfrac{\partial ^2f(x,y)}{\partial y^2}.$
Thus, $\nabla ^2 f(x, y) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x,y)$.
So, a Laplacian mask would look like
$$\nabla^2_\perp=\underset{\textrm{Laplacian mask}}{\underbrace{\left[\begin{array}{rrr}
0 & 1 & 0\
1 & -4 & 1\
0 & 1 & 0
\end{array}\right]}}; \quad
\nabla^2_\odot=\underset{\textrm{Omnidirectional Laplacian mask}}{\underbrace{\left[\begin{array}{rrr}
1 & 1 & 1\
1 & -8 & 1\
1 & 1 & 1
\end{array}\right]}}.$$
The second one, here, considers four directions 1. horizontal, 2. vertical, 3. +45$^\circ$ and 4. -45$^\circ$, whereas, the first one only considers the horizontal and vertical directions.
However, there is a problem regarding the direct implementation of a Laplacian mask. Being a second derivative operation, it highlights intensity discontinuities in an image, and in the process de-emphasizes image regions having slow variations in intensity profile. So, in order to preserve the original background features and yet perform the sharpening operation, the Laplacian operator is utilized in the following manner
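A minimal sketch of that sharpening recipe: compute the Laplacian response and subtract it from the image (valid for the masks above, whose centre coefficient is negative; img is a stand-in name):
# Laplacian sharpening: g = f - Laplacian(f) for a negative-centre mask
from scipy import ndimage
lap_mask = np.array([[0, 1, 0],
                     [1, -4, 1],
                     [0, 1, 0]], dtype=float)
lap = ndimage.convolve(img.astype(float), lap_mask)
sharpened = np.clip(img.astype(float) - lap, 0, 255).astype('uint8')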
Step23: You can also use Scikit-image to achieve the same goals.
Step24: Now, check out Python Imaging Library (fork
Step25: An important point to note
Step26: Frequency Domain Approaches
Once we are comfortable with the spatial domain image enhancement techniques described above, we are ready to jump into a completely different approach towards image processing. Instead of directly manipulating the pixels in an image, we will now manipulate the Fourier Transform of the image. We would utilize the concept of 2D Discrete Fourier Transform (DFT), Fast Fourier Transform (FFT) and the Convolution theorem in 2 dimensions.
The 2D DFT pair for an image function $f(x,y)$ can be expressed as $$F(u, v) = \frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y) \exp\left[-j2\pi\left(\frac{ux}{M}+\frac{vy}{N}\right)\right],$$ for $u=0,1,\cdots,M-1$ and $v=0,1,\cdots,N-1$, and
$$f(x,y) =\sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u,v) \exp\left[j2\pi\left(\frac{ux}{M}+\frac{vy}{N}\right)\right],$$ for $x=0,1,\cdots,M-1$ and $y=0,1,\cdots,N-1$.
The convolution theorem in 2D states that, $$h(x,y) \ast f(x,y) \rightleftharpoons H(u,v)F(u,v),$$ and $$ H(u,v) \ast F(u,v) \rightleftharpoons h(x,y)f(x,y).$$
In image enhancement problems, $f(x,y)$ is the input image, $g(x,y)$ is the output image and it is obtained by the application of a linear position invariant operator $h(x,y)$ on $f(x,y)$. Thus, $g(x,y)=h(x,y) \ast f(x,y)$, and $G(u,v)=H(u,v) F(u,v)$. Here, $G(u,v)$, $H(u,v)$, and $F(u,v)$ are the DFTs of $g(x,y)$, $h(x,y)$ and $f(x,y)$ respectively. $H(u,v)$ is often called the process transfer function.
The main goal of the frequency domain approach to image enhancement is to select a suitable $H(u,v)$ such that $g(x,y)$ exhibits some highlighted feature of $f(x,y)$.
Image processing in frequency domain usually involves the following steps
Step27: Now we will see how to find Fourier Transform using Numpy. Numpy has an FFT package to do this. np.fft.fft2() provides us the frequency transform which will be a complex array. Its first argument is the input image, which is grayscale. Second argument is optional which decides the size of output array. If it is greater than size of input image, input image is padded with zeros before calculation of FFT. If it is less than input image, input image will be cropped. If no arguments passed, Output array size will be same as input.
Now once you have the result, the zero frequency component (DC component) will be at the top left corner. If you want to bring it to the center, you need to shift the result by $\frac{N}{2}$ in both directions. This is simply done by the function np.fft.fftshift(). (It is easier to analyze that way.) Once you have found the frequency transform, you can find the magnitude spectrum.
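A compact illustrative sketch of these steps with NumPy (img is a stand-in name for a grayscale array; the log is taken only to compress the dynamic range for display):
# 2D FFT, centre the zero-frequency component, and show the log-magnitude spectrum
F = np.fft.fft2(img)
F_shifted = np.fft.fftshift(F)
magnitude = 20 * np.log(np.abs(F_shifted) + 1)
plt.imshow(magnitude, cmap=plt.cm.gray)
plt.show()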
Step28: See, you can see a brighter (whiter) region at the center, showing that most of the content is at low frequencies.
So you found the frequency transform Now you can do some operations in frequency domain, like high pass filtering and reconstruct the image, ie find inverse DFT. For that you simply remove the low frequencies by masking with a rectangular window of size $60\times 60$. Then apply the inverse shift using np.fft.ifftshift() so that DC component again come at the top-left corner. Then find inverse FFT using np.ifft2() function. The result, again, will be a complex number. You can take its absolute value.
Step29: The result shows High Pass Filtering is an edge detection operation. This also shows that most of the image data is present in the Low frequency region of the spectrum. Anyway we have seen how to find DFT, IDFT etc in Numpy. Now let’s see how to do it in OpenCV.
If you closely watch the result, especially the last image in JET color, you can see some artifacts. It shows some ripple like structures there, and it is called ringing effects. It is caused by the rectangular window we used for masking. This mask is converted to sinc shape which causes this problem. So rectangular windows is not used for filtering. Better option is Gaussian Windows.
Fourier Transform in OpenCV
OpenCV provides the functions cv2.dft() and cv2.idft() for this. It returns the same result as previous, but with two channels. First channel will have the real part of the result and second channel will have the imaginary part of the result. The input image should be converted to np.float32 first. We will see how to do it.
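A minimal sketch of the same magnitude-spectrum computation with OpenCV (assumes cv2 is installed; img is a stand-in name for a grayscale array):
# cv2.dft returns a 2-channel array: real part and imaginary part
import cv2
dft = cv2.dft(np.float32(img), flags=cv2.DFT_COMPLEX_OUTPUT)
dft_shift = np.fft.fftshift(dft)
magnitude = 20 * np.log(cv2.magnitude(dft_shift[:, :, 0], dft_shift[:, :, 1]) + 1)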
Step30: So, now we have to do inverse DFT. In previous session, we created a HPF, this time we will see how to remove high frequency contents in the image, ie we apply LPF to image. It actually blurs the image. For this, we create a mask first with high value (1) at low frequencies, ie we pass the LF content, and 0 at HF region.
Step31: Note | Python Code:
%pylab inline
from __future__ import division #Python 2.X and 3.X Compatibility
from __future__ import print_function #Python 2.X and 3.X Compatibility
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
Explanation: Image Processing in Python
Tanmoy Dasgupta
[email protected] | Assistant Professor | Department of Electrical Engineering | Techno India University, Kolkata
I colud not get the sepll checekr wroikng. So this ntoebook mgiht cnotain erorrs / toyps.
This tutorial is supposed to be an introduction to the different scientific packages in python that can be utilized to perform different tasks related to image processing. I have inherently assumed that the reader already has good exposure on core Python and some exposure on Numpy, Scipy and Matplotlib. It is also assumed that the reader knows how to start IPython Notebooks.
This notebook contains materials that are useful and new. However, things that are useful are not new and the things that are new are not always useful. Feel free to improve it and send me suggestions.
Packages You Need
Python 2.7 or Python 3.4
Choose Python 2.7 if you want to settle with the past. Choose 3.4 if you want to go with the future.
NumPy
NumPy is the fundamental package for scientific computing with Python. It contains among other things:
1. a powerful N-dimensional array object
2. sophisticated (broadcasting) functions
3. tools for integrating C/C++ and Fortran code
4. useful linear algebra, Fourier transform, and random number capabilities
SciPy
The SciPy library is one of the core packages that make up the SciPy stack. It provides many user-friendly and efficient numerical routines such as routines for numerical integration and optimization.
Matplotlib
Matplotlib is a python 2D and 3D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python and ipython shell (ala MATLAB® or Mathematica®), web application servers, and different graphical user interface toolkits.
IPython
IPython provides a rich architecture for interactive computing with:
Powerful interactive shells (terminal and Qt-based).
A browser-based notebook with support for code, rich text, mathematical expressions, inline plots and other rich media.
Support for interactive data visualization and use of GUI toolkits.
Flexible, embeddable interpreters to load into your own projects.
Easy to use, high performance tools for parallel computing.
Python Imaging Library and Scikit-Image
These packages (among many others) have custom modules for image processing.
OpenCV
OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products. The library has more than 2500 optimized algorithms, which includes a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. It has C++, C, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS.
Installing
Installing all these packages might seem a little too much! Don't worry. You can install all of the above (and many other packages) just by downloading and installing any of the following Python Distributions :
Continuum Anaconda Free (as in 'Freedom'). Both Python 2.7 and 3.4 are available. Available for Linux, Mac and Windows.
Enthought Canopy Free + Subscription based. Only Python 2.7. Available for Linux, Mac and Windows. Full featured product subscription is freely available for Academic use.
WinPython Portable. Free. Both Python 2.7 and 3.4 are available. Windows only.
Getting Started
Unlike almost all other IPython notebooks, I will NOT import Pylab Magic. But if you really want to, you can do it by uncommenting (removing the #) and running the following line.
End of explanation
A = plt.imread('images/macaw.jpg')
print(A)
Explanation: In case you didn't have any error so far, you're good to go!
What is an image anyway?
Short answer: It is just a 2-dimensional array (grayscale image) or a set of three 2-dimensional arrays (colour image).
Long answer
Loading an Image as a Numpy array
End of explanation
print(np.shape(A))
print(type(A))
print(A.dtype)
Explanation: Check the data type and the size of the array $A$
End of explanation
plt.imshow(A)
plt.show()
Explanation: So, our image is a colour image and it has a resolution of $400 \times 267$. It is imported as an N-dimensional array object available from numpy.
End of explanation
A_red = A[:, :, 0]
A_green = A[:, :, 1]
A_blue = A[:, :, 2]
plt.figure()
plt.imshow(A_red, cmap=cm.gray) #For a single channel / grayscale image you need to mention the colourmap
plt.title('The RED channel')
plt.figure()
plt.imshow(A_green, cmap=cm.gray)
plt.title('The GREEN channel')
plt.figure()
plt.imshow(A_blue, cmap=cm.gray)
plt.title('The BLUE channel')
plt.show()
plt.figure()
plt.imshow(A_red)
plt.show()
Explanation: Now, let us segregate the Red, Green and Blue channels
End of explanation
#create a random uint8 array of size 300x400 with numbers from 0 to 255
x = np.random.randint(0, 256, (300, 400)).astype('uint8')
plt.imshow(x, cmap=cm.gray)
Explanation: Now that we know how to read and display an image, let us do this:
End of explanation
X = np.random.randint(0, 256, (300, 400, 3)).astype('uint8')
plt.imshow(X)
Explanation: See! A random image! Now let us create a random color image!
End of explanation
from PIL import Image #Python Imaging Library
A_gray = Image.open('images/macaw.jpg','r')
A_gray = A_gray.convert('L')
temp = np.asarray(A_gray.getdata(), dtype=np.float64).reshape((A_gray.size[1], A_gray.size[0]))
A_gr = np.asarray(temp, dtype=np.uint8)
plt.imshow(A_gr, cmap=cm.gray)
plt.show()
plt.hist(A_gr.flatten(), 256, range=(0, 255), fc='k', ec='k');
Explanation: Image Enhancement Techniques
The principal objective of Image Enhancement is to process an image so that the result is more suitable than the original image for a particular application.
However, there are mainly TWO approaches towards image enhancement: a spatial domain approach and a frequency domain approach.
Spatial Domain Approach
It refers to the image plane itself and involves direct manipulation of the pixels of an image.
Frequency Domain Technique
Frequency domain processing techniques are based on modifying the Fourier Transform of an image.
Spatial Domain Approaches
Image processing functions in the spatial domain are often of the form $ g(x,y) = \mathcal{T}[f(x, y)]$, where, $f(x, y)$ is the input image and $g(x, y)$ is the processed output image. $\mathcal{T}$ is an operation on $f$ defined over some neighbourhood of the pixel at the location $(x, y)$. Usually a neighbourhood of $3\times 3$ (or sometimes $1\times 1$) is assumed about the pixel at $(x, y)$.
Spatial domain techniques include Point processing, Image subtraction, Spatial filtering, Image averaging, etc.
Point processing include Contrast stretching, Gray-level slicing, Bit-plane slicing, Histogram processing, etc.
Spatial filtering includes Low pass filtering, Median filtering, High-pass filtering, etc.
Histogram and Contrast
The word histogram in the context of an image simply means a histogram plot of the pixel intensity vs number of pixels.
Now, let us go back to the original image of the Macaw. Let us plot the histogram of the pixel intensity. First convert the original image into a grayscale image. This RGB to grayscale conversion would use the formula (more on this formula later)
$$X_{Gray} = [0.299\quad 0.587\quad 0.114] \cdot \left[\begin{array}{c}
X_{R}\\
X_{G}\\
X_{B}
\end{array}\right].$$
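As a quick cross-check of that formula (and an alternative to the PIL route used in the code above), here is a minimal NumPy sketch of the same weighted conversion; it assumes the RGB array A and the colormap module cm from the earlier cells are still available.
weights = np.array([0.299, 0.587, 0.114])
A_gray_np = np.dot(A.astype(np.float64), weights).astype('uint8')  # weighted sum over the colour channels
plt.imshow(A_gray_np, cmap=cm.gray)
plt.show()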
End of explanation
girl = plt.imread('images/low_contrast.jpg')
plt.imshow(girl)
plt.hist(girl.flatten(), 256, range=(0, 255), fc='k', ec='k');
maxi = np.amax(girl)
mini = np.amin(girl)
intensity_range = maxi - mini
print('lowest intensity:', mini, ', highest intensity:', maxi, ', spread:', intensity_range)
Explanation: Let us now turn our attention to the contrast of an image.
Contrast Stretching
The possible causes for low contrast images are
1. poor illumination
2. lack of dynamic range in imaging sensor
3. wrong setting of the lens aperture during image acquisition.
Contrast stretching attempts to increase the dynamic range of the gray levels of the image being processed. For a neighbourhood of size $1 \times 1$, contrast stretching is usually done by a gray level transformation of the form $s = \mathcal{W}[r]$, where $r$ is the gray level of $f(x, y)$ at $(x, y)$, $s$ is the gray level of $g(x, y)$ at $(x, y)$ and $\mathcal{W}$ is a gray-level transformation function.
Now, let us load a low contrast image.
End of explanation
r = np.arange(0, 256, 1)
s = np.zeros(shape(r))
s = (255/intensity_range)*(r - mini)
s[r<mini] = 0
s[r>maxi] = 255
plt.plot(r, s)
plt.axis([0, 260, -5, 260])
xlabel('r')
ylabel('s')
title('gray-level transformation function $\mathcal{W}$')
plt.grid()
girl_high = ((girl.astype('float64') - mini) * 255 / intensity_range).astype('uint8')
plt.imshow(girl_high, cmap=cm.gray)
plt.hist(girl_high.flatten(), 256, range=(0, 255), fc='k', ec='k');
Explanation: As we can see that the image has low contrast, we will use the following transformation function to stretch the contrast of the image.
$$s=\mathcal{W}(r)=\begin{cases}
0, & r<103\\
\frac{255}{132}(r-103), & 103\le r\le235\\
255, & r>235
\end{cases}$$
End of explanation
r = np.arange(0, 256, 1)
s = np.zeros(shape(r))
s = 255 - r
plt.plot(r, s)
plt.axis([0, 260, -5, 260])
xlabel('r')
ylabel('s')
title('$\mathcal{W}$ for finding the negative of an image')
plt.grid()
girl_neg = (255*np.ones(shape(girl_high)) - girl_high).astype('uint8')
plt.imshow(girl_neg, cmap=cm.gray)
plt.hist(girl_neg.flatten(), 256, range=(0, 255), fc='k', ec='k');
Explanation: See that now the pixel intensities are spread over the wider range $[0, 255]$. This is known as linear contrast stretching.
Negative
If you subtract the pixel intensities of the original image from 255, what you get is essentially the negative of the image.
$$ X_{neg} = \left[\begin{array}{ccc}
255 & \cdots & 255\\
\vdots & \ddots & \vdots\\
255 & \cdots & 255
\end{array}\right]_{m\times n} - X_{m\times n}.$$
The transformation function in this case looks like this.
End of explanation
c = 1
r = np.arange(0, 256)
for gamma in [0.04, 0.10, 0.20, 0.40, 0.67, 1, 1.5, 2.5, 5, 10, 25]:
s = 255*c*(r/255)**gamma
plt.plot(r, s)
plt.axis([0, 255, 0, 255])
plt.xlabel('r')
plt.ylabel('s')
plt.title('Gamma correction $s = 255\,c\,(r/255)^\gamma$')
xray_orig = plt.imread('images/chestxray.jpg')
figure(figsize=(20,7))
subplot(1, 2, 1)
plt.imshow(xray_orig, cmap=cm.gray)
plt.title('original')
subplot(1, 2, 2)
plt.hist(xray_orig.flatten(), 256, range=(2, 255), fc='k', ec='k');
c = 1.0
gamma = 2.0
figure(figsize=(20,7))
subplot(1, 2, 1)
xray_gamma1 = (255*c*(xray_orig / 255)**gamma).astype('uint8')
plt.imshow(xray_gamma1, cmap=cm.gray)
plt.title('$c=1.0$, $\gamma = 2.0$')
subplot(1, 2, 2)
plt.hist(xray_gamma1.flatten(), 256, range=(2, 255), fc='k', ec='k');
c = 1.0
gamma = 0.5
figure(figsize=(20,7))
subplot(1, 2, 1)
xray_gamma2 = (255*c*(xray_orig / 255)**gamma).astype('uint8')
plt.imshow(xray_gamma2, cmap=cm.gray)
plt.title('$c=1.0$, $\gamma = 0.5$')
subplot(1, 2, 2)
plt.hist(xray_gamma2.flatten(), 256, range=(2, 255), fc='k', ec='k');
Explanation: Compare the histogram of the negative with that of the original.
Dynamic Range Compression
to be done later
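Until that section is written, here is a minimal sketch of one common approach, the log transformation $s = c\log(1+r)$, applied to the grayscale macaw A_gr from the earlier cells (variable names reused from above):
r = A_gr.astype(np.float64)
c = 255.0 / np.log(1.0 + r.max())          # scale so the output still spans [0, 255]
A_log = (c * np.log(1.0 + r)).astype('uint8')
plt.imshow(A_log, cmap=cm.gray)
plt.show()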
Power Law (Gamma) Transformations / Gamma Corrections
Gamma correction involves a nonlinear transformation of the form $s = \mathcal{W}[r] = 255\,c\,\left(\dfrac{r}{255}\right)^\gamma$, where, $c$ and $\gamma$ are positive constants. The following plot shows a $r$-vs-$s$ plot for different values of $\gamma$.
End of explanation
r = np.arange(0, 256)
s = np.zeros(shape(r))
s[:] = 200
s[r<120] = 10
s[r>180] = 10
plt.subplot(1, 2, 1)
plot(r, s)
plt.axis([0, 255, 0, 255])
plt.title('Binary thresholding function')
r = np.arange(0, 256)
s = np.arange(0, 256)
a = r>120
b = r<180
s[a * b] = 200
plt.subplot(1, 2, 2)
plot(r, s)
plt.axis([0, 255, 0, 255])
plt.title('Gradual thresholding function')
Explanation: Gray-level Slicing
Sometimes we need to highlight a specific range of gray levels in an image. Possible application areas are finding masses of water in satellite imagery, enhancement of flaws in x-ray images, etc.
There are two basic approaches towards gray-level slicing.
1. Binary Thresholding : all gray levels in the range of interest are displayed using a high value and the rest using a low value.
2. Gradual Thresholding : desired range of gray levels are brightened but the background and the gray-level tonalities are preserved.
For example, let the range of interest be in $[120\,\, 180]$. Then the corresponding transformation functions would look like the following.
End of explanation
from skimage import data
scanned = data.page()
plt.imshow(scanned, cmap=cm.gray)
plt.hist(scanned.flatten(), 256, range=(0, 255), fc='k', ec='k');
Explanation: Now let us use these ideas in real life!
Load Scikit-image. It has built-in image data sets. More at http://scikit-image.org/docs/dev/api/skimage.data.html
End of explanation
thres = np.zeros(shape(scanned)).astype('uint8')
threshold = 150
thres[scanned<threshold] = 0
thres[scanned>=threshold] = 255
plt.imshow(thres, cmap=cm.gray)
Explanation: By means of visual inspection we find that the background of the scanned page has intensity values higher than $150$ and the text has intensity values lower than that.
(N.B. This is a very crude method! We will automate this later.)
So, we would consider a thresholding function that will search the image pixel by pixel. If the intensity of a pixel is greater than or equal to 150, it will be assigned a value of 255 and if its intensity falls below 150, a zero will be assigned in its place.
End of explanation
thres1 = copy(scanned)
threshold_hi = 170
threshold_lo = 100
thres1[scanned<threshold_lo] = 0
thres1[scanned>threshold_hi] = 255
plt.imshow(thres1, cmap=cm.gray)
Explanation: Change the value of the threshold in the above programme and see the changes!
Now consider gradual thresholding. We consider that the region of interest lies between intensity values of 100 and 170.
End of explanation
plane7 = A_gr & 128*np.ones(shape(A_gr)).astype('uint8')
plane6 = A_gr & 64*np.ones(shape(A_gr)).astype('uint8')
plane5 = A_gr & 32*np.ones(shape(A_gr)).astype('uint8')
plane4 = A_gr & 16*np.ones(shape(A_gr)).astype('uint8')
plane3 = A_gr & 8*np.ones(shape(A_gr)).astype('uint8')
plane2 = A_gr & 4*np.ones(shape(A_gr)).astype('uint8')
plane1 = A_gr & 2*np.ones(shape(A_gr)).astype('uint8')
plane0 = A_gr & 1*np.ones(shape(A_gr)).astype('uint8')
plt.figure(figsize=(20,7))
plt.subplot(2, 4, 1)
plt.imshow(plane7, cmap=cm.gray)
plt.title('Plane 7 (MSB)')
plt.subplot(2, 4, 2)
plt.imshow(plane6, cmap=cm.gray)
plt.title('Plane 6')
plt.subplot(2, 4, 3)
plt.imshow(plane5, cmap=cm.gray)
plt.title('Plane 5')
plt.subplot(2, 4, 4)
plt.imshow(plane4, cmap=cm.gray)
plt.title('Plane 4')
plt.subplot(2, 4, 5)
plt.imshow(plane3, cmap=cm.gray)
plt.title('Plane 3')
plt.subplot(2, 4, 6)
plt.imshow(plane2, cmap=cm.gray)
plt.title('Plane 2')
plt.subplot(2, 4, 7)
plt.imshow(plane1, cmap=cm.gray)
plt.title('Plane 1')
plt.subplot(2, 4, 8)
plt.imshow(plane0, cmap=cm.gray)
plt.title('Plane 0 (LSB)')
Explanation: Bit Plane Slicing
Sometimes it is desirable to highlight the contribution made by specific bits to the total image appearance.
The image can be imagined to be composed of Eight 1-bit planes -- Plane 0 for the LSB plane and Plane 7 for the MSB.
The higher order bits contain visually significant data; the lower order planes contain more subtle details.
This can be accomplished by doing a Bitwise AND operation. For example, say the intensity of a pixel is 246 in decimal. In binary this would be $(1111\,0110)_2$. So in order to find the value of the 6th bit, one has to simply do this: $(1111\,0110)_2 \odot (0100\,0000)_2$. The result will simply produce the value of the 6th bit. In other words, $246 \odot 64$ will give you the value of the 6th bit.
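Equivalently, a single plane can be pulled out with a bit shift; a small sketch (reusing A_gr from the earlier cells, plane index chosen arbitrarily):
bit = 6
plane = (A_gr >> bit) & 1                   # 0/1 value of the chosen bit for every pixel
plt.imshow(plane * 255, cmap=cm.gray)
plt.title('Plane %d via bit shift' % bit)
plt.show()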
End of explanation
#mean filtering
from scipy import ndimage
lena_noisy = plt.imread('images/lena_noisy.png')
mask1 = (1.0/9)*np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]]) #LP mask (neighbourhood averaging)
mask2 = (1.0/16)*np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) #LP mask (weighted averaging)
result1 = ndimage.convolve(lena_noisy, mask1, mode='constant', cval=0.0)
result2 = ndimage.convolve(lena_noisy, mask2, mode='constant', cval=0.0)
figure(figsize=(15, 5))
subplot(1, 3, 1)
imshow(lena_noisy, cmap=cm.gray)
title('Original')
subplot(1, 3, 2)
imshow(result1, cmap=cm.gray)
title('Filtered with LP mask $\mathcal{L}_1$')
subplot(1, 3, 3)
imshow(result2, cmap=cm.gray)
title('Filtered with LP mask $\mathcal{L}_2$')
Explanation: Spatial Filtering
Low pass filters attenuate or eliminate high frequency components in the Fourier domain. Low pass filtering gives rise to image blurring.
High pass filters attenuate or eliminate low frequency components in the Fourier domain. High pass filtering gives rise to sharpening of edges and other sharp details.
How to implement?
Utilize suitable 2D masks of suitable size, e.g. $3 \times 3$, $5 \times 5$ or $7 \times 7$.
In most of our use cases, we shall restrict ourselves to $3 \times 3$ masks. The mask is applied to certain pixels. Upon application, the mask calculates a weighted sum of the neighbourhood of the concerned pixel and the result substitutes the original pixel. Here is how it works.
Consider that there is a pixel of intensity $z_5$. The $3 \times 3$ neighbourhood of the pixel can be seen as $$\left[\begin{array}{ccc}
z_{1} & z_{2} & z_{3}\\
z_{4} & z_{5} & z_{6}\\
z_{7} & z_{8} & z_{9}
\end{array}\right].$$
Now consider that the mask that is applied on the pixel with intensity $z_5$ looks like $$\left[\begin{array}{ccc}
w_{1} & w_{2} & w_{3}\\
w_{4} & w_{5} & w_{6}\\
w_{7} & w_{8} & w_{9}
\end{array}\right].$$
Then, it will substitute $z_5$ by $w_1 z_1 + w_2 z_2 + \cdots + w_9 z_9$. Thus, $$z_{5_{new}}=\sum_{i=1}^9 w_i z_i.$$
Thus,
$$\left[\begin{array}{ccc}
z_{1} & z_{2} & z_{3}\\
z_{4} & z_{5} & z_{6}\\
z_{7} & z_{8} & z_{9}
\end{array}\right] \otimes
\left[\begin{array}{ccc}
w_{1} & w_{2} & w_{3}\\
w_{4} & w_{5} & w_{6}\\
w_{7} & w_{8} & w_{9}
\end{array}\right] =
\left[\begin{array}{ccc}
z_{1} & z_{2} & z_{3}\\
z_{4} & \sum_{i=1}^9 w_i z_i & z_{6}\\
z_{7} & z_{8} & z_{9}
\end{array}\right].$$
It does not change any other pixel. The mask is centred on the image pixel whose new intensity value is to be calculated. This calculation is performed for each pixel separately by moving the mask to centre it on the pixel under consideration.
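A tiny numerical check of that weighted sum, with made-up values for the neighbourhood and an averaging mask:
z = np.array([[10, 20, 30],
              [40, 50, 60],
              [70, 80, 90]], dtype=np.float64)   # image section under consideration
w = (1.0/9) * np.ones((3, 3))                    # neighbourhood-averaging mask
z5_new = np.sum(w * z)                           # element-wise product, then sum
print(z5_new)                                    # 50.0, the neighbourhood average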
Smoothing spatial filters
Low pass spatial filtering
Examples of low pass spatial filter masks are
$$\mathcal{L}_1 = \dfrac{1}{9}\left[\begin{array}{ccc}
1 & 1 & 1\\
1 & 1 & 1\\
1 & 1 & 1
\end{array}\right]; \quad\quad
\mathcal{L}_2 = \dfrac{1}{16}\left[\begin{array}{ccc}
1 & 2 & 1\\
2 & 4 & 2\\
1 & 2 & 1
\end{array}\right]$$
A low pass filter must have all positive coefficients.
For a low pass spatial filter mask shown as $\mathcal{L}_1$, the operation is also popularly termed as neighbourhood averaging. This averaging causes blurring and loss of sharpness.
For a filter mask shown in $\mathcal{L}_2$, it is called weighted averaging.
Median filtering
Median filters are nonlinear (why?) filters employed with an objective of noise reduction, without blurring.
$$\underset{\textrm{image section under consideration}}{\underbrace{\left[\begin{array}{ccc}
z_{1} & z_{2} & z_{3}\\
z_{4} & z_{5} & z_{6}\\
z_{7} & z_{8} & z_{9}
\end{array}\right]}} \underset{\textrm{median filtering}}{\Rightarrow}
\underset{\textrm{result of median filtering}}{\underbrace{\left[\begin{array}{ccc}
z_{1} & z_{2} & z_{3}\\
z_{4} & \textrm{med}{z_{5}} & z_{6}\\
z_{7} & z_{8} & z_{9}
\end{array}\right]}},$$
where $\textrm{med}{z_{5}}$ is the median of $z_1, z_2, \cdots, z_9$. The median value can be easily calculated by arranging $z_1, z_2, \cdots, z_9$ in ascending order of magnitude and then finding the value that is in the middle position.
This filter is most effective when the noise pattern consists of strong, spike-like components and it is of utmost importance to preserve edge sharpness.
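A tiny example of the median computation itself, on a made-up neighbourhood containing one impulse ('salt') value:
z = np.array([[12, 14, 13],
              [11, 255, 15],                    # 255 is an impulse noise spike
              [13, 12, 14]])
print(np.median(z))                             # 13.0 -- the spike is rejected entirely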
Now we shall use scikit-image. There are many built in filters. Check http://scikit-image.org/docs/stable/api/skimage.filters.html for more.
End of explanation
from skimage.morphology import disk #needed for the mask
from skimage.filters.rank import median
lena_med = median(lena_noisy, disk(3))
figure(figsize=(10, 5))
subplot(1, 2, 1)
imshow(lena_noisy, cmap=cm.gray)
title('Original')
subplot(1, 2, 2)
imshow(lena_med, cmap=cm.gray)
title('Median filtering')
Explanation: See that the low pass filter significantly reduces the noise level.
Now let us apply median filtering to the same image.
End of explanation
salt = plt.imread('images/saltandpepper.jpg')
mask = (1/9)*np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]])
salt_lp = ndimage.convolve(salt, mask, mode='constant', cval=0.0)
salt_med = median(salt, disk(2))
figure(figsize=(15, 5))
subplot(1, 3, 1)
imshow(salt, cmap=cm.gray)
title('Original')
subplot(1, 3, 2)
imshow(salt_lp, cmap=cm.gray)
title('Filtered with LP mask $\mathcal{L}_1$')
subplot(1, 3, 3)
imshow(salt_med, cmap=cm.gray)
title('Median filtering')
Explanation: As one can easily see, an image corrupted with speckle noise can be better denoised with mean filtering. Now check an image with salt and pepper noise.
End of explanation
from skimage import filters, data
camera = data.camera()
#apply sobel gradient
sobel_camera = filters.sobel(camera)
#apply prewitt gradient
prewitt_camera = filters.prewitt(camera)
figure(figsize=(15, 5))
subplot(1, 3, 1)
imshow(camera, cmap=cm.gray)
title('Original')
subplot(1, 3, 2)
imshow(sobel_camera, cmap=cm.gray)
title('Result of a Sobel gradient')
subplot(1, 3, 3)
imshow(prewitt_camera, cmap=cm.gray)
title('Result of a Prewitt Gradient')
Explanation: See the difference!
Sharpening spatial filters
Derivative filters
The differentiation operation is expected to sharpen an image.
One can use either first derivative or second derivative information.
Digital approximation of first derivative : $\dfrac{\partial f(x,y)}{\partial x} = f(x+1, y) - f(x, y), $ and $\dfrac{\partial f(x,y)}{\partial y} = f(x, y+1) - f(x, y).$
Constraints : The response of a first derivative filter must be
1. zero in areas of constant intensity,
2. must be non-zero at the onset of an intensity step or ramp,
3. nonzero along ramps.
Digital approximation of second derivative : $\dfrac{\partial ^2f(x,y)}{\partial x^2} = f(x+1, y) - 2 f(x, y) +f(x-1, y),$ and $\dfrac{\partial ^2f(x,y)}{\partial y^2} = f(x, y+1) - 2 f(x, y) +f(x, y-1).$
Constraints : The response of a second order derivative filter must be
1. zero in areas of constant intensity,
2. non-zero at the onset and the end of an intensity step or ramp,
3. zero along ramps of constant slope.
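A quick 1-D sanity check of these constraints on a small synthetic signal containing flat regions, a ramp and a step (the numbers are made up for illustration):
f = np.array([3, 3, 3, 4, 5, 6, 7, 7, 7, 12, 12, 12], dtype=np.float64)   # flat, ramp, flat, step, flat
first_deriv = f[1:] - f[:-1]                  # f(x+1) - f(x)
second_deriv = f[2:] - 2*f[1:-1] + f[:-2]     # f(x+1) - 2 f(x) + f(x-1)
print(first_deriv)    # nonzero along the ramp and at the step
print(second_deriv)   # nonzero only at the onset/end of the ramp and at the step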
Implementing a first derivative filter for image sharpening
A first derivative image sharpening filter can be implemented by applying the gradient function. The gradient of a function $f(x, y)$ at coordinates $(x, y)$ is defined as the 2D column vector
$$\nabla f(x,y)\equiv\mathrm{grad}(f)\equiv \mathbf{g} \equiv \left[\begin{array}{c}
g_{x}(x,y)\\
g_{y}(x,y)
\end{array}\right]=\left[\begin{array}{c}
\frac{\partial f(x,y)}{\partial x}\\
\frac{\partial f(x,y)}{\partial y}
\end{array}\right].$$
The magnitude (length) of vector $\nabla f$ is given by $M(x, y) = ||\nabla f||= \sqrt{\mathbf{g}^T \mathbf{g}} = \sqrt{g_x^2 + g_y^2}$. In image processing, this last expression is often approximated as $|g_x| + |g_y|$.
$M(x, y)$ is an image of the same size of the original and called the gradient image. The computation of this gradient is the basis of various approaches to develop first derivative filter.
If $$\left[\begin{array}{ccc}
z_{1} & z_{2} & z_{3}\\
z_{4} & z_{5} & z_{6}\\
z_{7} & z_{8} & z_{9}
\end{array}\right]$$
is the image section under consideration, then, according to the above theory, after the application of first derivative filtering, the new value of $z_5$ will be $$M(x,y) = [(z_8 - z_5)^2 + (z_6 - z_5)^2]^{1/2} \approx |z_8 - z_5| + |z_6 - z_5|.$$ Another implementation involves cross-differences: $$M(x,y) = [(z_9 - z_5)^2 + (z_8 - z_6)^2]^{1/2} \approx |z_9 - z_5| + |z_8 - z_6|.$$ But there is one problem : masks of even size are awkward to implement (why?). Hence an approximation with $3 \times 3$ neighbourhood is preferred. The mostly used first order derivative masks are Sobel masks and Prewitt masks. They are like follows :
$$\mathcal{S}_{y}=\underset{\textrm{Sobel horizontal derivative}}{\underbrace{\left[\begin{array}{ccc}
-1 & -2 & -1\\
0 & 0 & 0\\
1 & 2 & 1
\end{array}\right]}}; \quad
\mathcal{S}_{x}=\underset{\textrm{Sobel vertical derivative}}{\underbrace{\left[\begin{array}{ccc}
-1 & 0 & 1\\
-2 & 0 & 2\\
-1 & 0 & 1
\end{array}\right]}}; \quad
\mathcal{P}_{y}=\underset{\textrm{Prewitt horizontal derivative}}{\underbrace{\left[\begin{array}{ccc}
-1 & -1 & -1\\
0 & 0 & 0\\
1 & 1 & 1
\end{array}\right]}}; \quad
\mathcal{P}_{x}=\underset{\textrm{Prewitt vertical derivative}}{\underbrace{\left[\begin{array}{ccc}
-1 & 0 & 1\\
-1 & 0 & 1\\
-1 & 0 & 1
\end{array}\right]}}
.$$
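Before using the library implementations below, here is a minimal sketch of the $|g_x| + |g_y|$ approximation built directly from the Sobel masks with scipy.ndimage; it assumes the grayscale array A_gr from the earlier cells:
from scipy import ndimage
sobel_x_mask = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
sobel_y_mask = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)
img_f = A_gr.astype(np.float64)
gx = ndimage.convolve(img_f, sobel_x_mask, mode='constant', cval=0.0)
gy = ndimage.convolve(img_f, sobel_y_mask, mode='constant', cval=0.0)
M = np.abs(gx) + np.abs(gy)                      # gradient magnitude approximation
plt.imshow(M, cmap=cm.gray)
plt.title('Manual Sobel gradient magnitude')
plt.show()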
Now, let's get to work!
End of explanation
from skimage import filters, data
xray = plt.imread('images/chestxray.jpg')
#apply sobel gradient
sobel_xray = filters.sobel(xray)
#apply prewitt gradient
prewitt_xray = filters.prewitt(xray)
figure(figsize=(15, 5))
subplot(1, 3, 1)
imshow(xray, cmap=cm.gray)
title('Original')
subplot(1, 3, 2)
imshow(sobel_xray, cmap=cm.gray)
title('Result of a Sobel gradient')
subplot(1, 3, 3)
imshow(prewitt_xray, cmap=cm.gray)
title('Result of a Prewitt Gradient')
Explanation: Another Example
End of explanation
import cv2 #this is OpenCV
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('images/dave.jpg',0) #import the image as grayscale
laplacian = cv2.Laplacian(img,cv2.CV_64F)
sobelx = cv2.Sobel(img,cv2.CV_64F,1,0,ksize=5)
sobely = cv2.Sobel(img,cv2.CV_64F,0,1,ksize=5)
plt.figure(figsize=(15, 10))
plt.subplot(2,2,1)
plt.imshow(img, cmap = 'gray')
plt.title('Original')
plt.subplot(2,2,2),plt.imshow(laplacian,cmap = 'gray')
plt.title('Laplacian')
plt.subplot(2,2,3)
plt.imshow(sobelx,cmap = 'gray')
plt.title('Sobel X')
plt.subplot(2,2,4)
plt.imshow(sobely,cmap = 'gray')
plt.title('Sobel Y')
plt.show()
Explanation: Laplacian filter
A second order derivative filter can be implemented by employing a Laplacian mask. The Laplacian of an image function $f(x, y)$ of two variables is defined as $\nabla ^2 f(x, y) = \dfrac{\partial ^2f(x,y)}{\partial x^2} + \dfrac{\partial ^2f(x,y)}{\partial y^2}.$
Thus, $\nabla ^2 f(x, y) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x,y)$.
So, a Laplacian mask would look like
$$\nabla^2_\perp=\underset{\textrm{Laplacian mask}}{\underbrace{\left[\begin{array}{rrr}
0 & 1 & 0\\
1 & -4 & 1\\
0 & 1 & 0
\end{array}\right]}}; \quad
\nabla^2_\odot=\underset{\textrm{Omnidirectional Laplacian mask}}{\underbrace{\left[\begin{array}{rrr}
1 & 1 & 1\\
1 & -8 & 1\\
1 & 1 & 1
\end{array}\right]}}.$$
The second one, here, considers four directions: 1. horizontal, 2. vertical, 3. +45$^\circ$ and 4. -45$^\circ$, whereas the first one only considers the horizontal and vertical directions.
However, there is a problem regarding the direct implementation of a Laplacian mask. Being a second derivative operation, it highlights intensity discontinuities in an image, and in the process de-emphasizes image regions having slow variations in intensity profile. So, in order to preserve the original background features and yet perform the sharpening operation, the Laplacian operator is utilized in the following manner:
$$g(x,y) = f(x, y) + c\left[\nabla^2 f(x, y)\right],$$ where, $c=-1$ for the operators we considered.
Here is a comparison of Laplacian filter with Sobel filters. For the first time, we are going to use OpenCV.
End of explanation
from skimage import img_as_ubyte, img_as_int
from scipy import ndimage
import cv2
moon = cv2.imread('images/blurry_moon.jpg',0)
img = img_as_int(moon)
laplacian1 = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype='float64') #Laplacian 1 mask
laplacian2 = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype='float64') / 3.0 #Laplacian 2 mask
sobelx = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype='float64') / 4.0 #Sobel x mask
sobely = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype='float64') / 4.0 #Sobel y mask
out_laplacian1 = img_as_ubyte(ndimage.convolve(img, laplacian1, mode='constant', cval=0.0))
out_laplacian2 = img_as_ubyte(ndimage.convolve(img, laplacian2, mode='constant', cval=0.0))
out_sobelx = img_as_ubyte(ndimage.convolve(img, sobelx, mode='constant', cval=0.0))
out_sobely = img_as_ubyte(ndimage.convolve(img, sobely, mode='constant', cval=0.0))
img_out_laplacian1 = img_as_ubyte(img - out_laplacian1)
img_out_laplacian2 = img_as_ubyte(img - out_laplacian2)
img_out_sobelx = img_as_ubyte(img - out_sobelx)
img_out_sobely = img_as_ubyte(img - out_sobely)
plt.figure(figsize=(15, 15))
plt.subplot(3,3,1)
plt.imshow(img, cmap = 'gray')
plt.title('Original')
plt.subplot(3,3,2)
plt.imshow(out_laplacian1, cmap = 'gray')
plt.title('Laplacian 1')
plt.subplot(3,3,3)
plt.imshow(out_laplacian2, cmap = 'gray')
plt.title('Laplacian 2')
plt.subplot(3,3,4)
plt.imshow(out_sobelx, cmap = 'gray')
plt.title('Sobel X')
plt.subplot(3,3,5)
plt.imshow(out_sobely, cmap = 'gray')
plt.title('Sobel Y')
plt.subplot(3,3,6)
plt.imshow(img_out_laplacian1, cmap = 'gray')
plt.title('Original - Laplacian 1')
plt.subplot(3,3,7)
plt.imshow(img_out_laplacian2, cmap = 'gray')
plt.title('Original - Laplacian 2')
plt.subplot(3,3,8)
plt.imshow(img_out_sobelx, cmap = 'gray')
plt.title('Original - Sobel X')
plt.subplot(3,3,9)
plt.imshow(img_out_sobely, cmap = 'gray')
plt.title('Original - Sobel Y')
Explanation: You can also use Scikit-image to achieve the same goals.
End of explanation
from PIL import Image
from PIL import ImageFilter
im0 = Image.open('images/blurry_moon.jpg')
figure(figsize=(15,15))
subplot(3,4,1)
plt.imshow(im0)
plt.title('Original')
subplot(3,4,2)
im2 = im0.filter(ImageFilter.CONTOUR)
plt.imshow(im2)
plt.title('Contour')
subplot(3,4,3)
im3 = im0.filter(ImageFilter.DETAIL)
plt.imshow(im3)
plt.title('Detail')
subplot(3,4,4)
im4 = im0.filter(ImageFilter.EDGE_ENHANCE)
plt.imshow(im4)
plt.title('Laplacian 1')
subplot(3,4,5)
im5 = im0.filter(ImageFilter.EDGE_ENHANCE_MORE)
plt.imshow(im5)
plt.title('Laplacian 2')
subplot(3,4,6)
im6 = im0.filter(ImageFilter.EMBOSS)
plt.imshow(im6)
plt.title('Emboss')
subplot(3,4,7)
im7 = im0.filter(ImageFilter.FIND_EDGES)
plt.imshow(im7)
plt.title('Sobel')
subplot(3,4,8)
im8 = im0.filter(ImageFilter.SMOOTH)
plt.imshow(im8)
plt.title('Low Pass 1')
subplot(3,4,9)
im9 = im0.filter(ImageFilter.SMOOTH_MORE)
plt.imshow(im9)
plt.title('Low Pass 2')
subplot(3,4,10)
im10 = im0.filter(ImageFilter.SHARPEN)
plt.imshow(im10)
plt.title('Sharpen')
subplot(3,4,10)
im1 = im0.filter(ImageFilter.BLUR)
plt.imshow(im1)
plt.title('Blur')
#Custom mask
size = (3, 3)
kernel1 = [1, 1, 1, 0, 0, 0, -1, -1, -1]
ker1 = ImageFilter.Kernel(size, kernel1, scale=None, offset=0)
subplot(3,4,11)
im11 = im0.filter(ker1)
plt.imshow(im11)
plt.title('Custom 1')
kernel2 = [1, 0, -1, 1, 0, -1, 0, 0, -1]
ker2 = ImageFilter.Kernel(size, kernel2, scale=None, offset=0)
subplot(3,4,12)
im12 = im0.filter(ker2)
plt.imshow(im12)
plt.title('Custom 2')
Explanation: Now, check out Python Imaging Library (fork: pillow). It contains many built in spatial filter modules.
End of explanation
from skimage import data
from skimage import img_as_ubyte, img_as_int
from scipy import ndimage
coins = img_as_int(data.coins())
low_pass = (1/9)*np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]], dtype='float64') #LP mask
blurred = ndimage.convolve(coins, low_pass, mode='constant', cval=0.0)
unsharp_mask = img_as_ubyte(coins - blurred)
k = 1
sharpened = img_as_ubyte(coins + k*unsharp_mask)
figure(figsize=(15,5))
subplot(1, 3, 1)
title('Original')
imshow(coins, cmap=cm.gray)
subplot(1, 3, 2)
title('Unsharp mask')
imshow(unsharp_mask, cmap=cm.gray)
subplot(1, 3, 3)
title('Unsharp masked image')
imshow(sharpened, cmap=cm.gray)
Explanation: An important point to note: for a high pass spatial filter mask, whether utilizing the first derivative or the second derivative, the sum of the mask coefficients is always zero.
Unsharp masking and high boost filtering
This approach sharpens an image using kind of a 'back-calculation' method! Let the original image function be $f(x, y)$. First, a blurred version of the image is created. Let this version be $\bar{f}(x,y)$. Then this blurred version is subtracted from the original image. This creates a mask like $g_{mask}(x,y) = f(x, y) - \bar{f}(x,y)$. Then this mask is added to the original image resulting $g(x,y) = f(x,y) + k\,g_{mask}(x,y)$, where, $k$ is a constant.
If $k=1$, this process is called unsharp masking. When $k>1$, it is called high boost filtering.
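A minimal sketch of the high boost case ($k>1$), recomputed from the original uint8 coins image so the arithmetic stays in a simple range; it reuses low_pass and ndimage from the cell above, and the value k=2 is chosen arbitrarily:
coins_u8 = data.coins().astype(np.float64)
blurred_u8 = ndimage.convolve(coins_u8, low_pass, mode='constant', cval=0.0)
mask_u8 = coins_u8 - blurred_u8
k = 2                                            # k > 1 gives high boost filtering
boosted = np.clip(coins_u8 + k * mask_u8, 0, 255).astype('uint8')
plt.figure()
plt.imshow(boosted, cmap=cm.gray)
plt.title('High boost filtering, k = 2')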
End of explanation
from skimage import data
from skimage.filters import gaussian_filter
image = data.coins()
lpgf_img = gaussian_filter(image, sigma=2, multichannel=True)
hpgf_img = image - lpgf_img
plt.figure(figsize=(15,5))
plt.subplot(131)
plt.imshow(image, cmap = 'gray')
plt.title('Input Image')
plt.subplot(132)
plt.imshow(lpgf_img, cmap = 'gray')
plt.title('LPGF with $\sigma=2$')
plt.subplot(133)
plt.imshow(hpgf_img, cmap = 'gray')
plt.title('HPGF with $\sigma=2$')
plt.show()
Explanation: Frequency Domain Approaches
Once we are comfortable with the spatial domain image enhancement techniques described above, we are ready to jump into a completely different approach towards image processing. Instead of directly manipulating the pixels in an image, we will now manipulate the Fourier Transform of the image. We would utilize the concept of 2D Discrete Fourier Transform (DFT), Fast Fourier Transform (FFT) and the Convolution theorem in 2 dimensions.
The 2D DFT pair for an image function $f(x,y)$ can be expressed as $$F(u, v) = \frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y) \exp\left[-j2\pi\left(\frac{ux}{M}+\frac{vy}{N}\right)\right],$$ for $u=0,1,\cdots,M-1$ and $v=0,1,\cdots,N-1$, and
$$f(x,y) =\sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u,v) \exp\left[j2\pi\left(\frac{ux}{M}+\frac{vy}{N}\right)\right],$$ for $x=0,1,\cdots,M-1$ and $y=0,1,\cdots,N-1$.
The convolution theorem in 2D states that $$h(x,y) * f(x,y) \rightleftharpoons H(u,v)F(u,v),$$ and $$ H(u,v) * F(u,v) \rightleftharpoons h(x,y)f(x,y).$$
In image enhancement problems, $f(x,y)$ is the input image, $g(x,y)$ is the output image and it is obtained by the application of a linear position invariant operator $h(x,y)$ on $f(x,y)$. Thus, $g(x,y)=h(x,y) * f(x,y),$ and $G(u,v)=H(u,v) F(u,v)$. Here, $G(u,v)$, $H(u,v)$, and $F(u,v)$ are the DFTs of $g(x,y)$, $h(x,y)$ and $f(x,y)$ respectively. $H(u,v)$ is often called the process transfer function.
The main goal in frequency domain approach in image enhancement is to select a suitable $H(u,v)$ such that $g(x,y)$ exhibit some highlighted feature of $f(x,y)$.
Image processing in frequency domain usually involves the following steps:
INPUT : $f(x,y)$
1. Preprocess the input image $f(x,y)$
2. Take its DFT or FFT
3. Multiply the result with a suitable process transfer function $H$
4. Take the IDFT or IFFT of the result
5. Do some post-processing
OUTPUT : Enhanced image $g(x,y)$.
As an example,
for Low Pass filtering one can use the Gaussian LPF transfer function
$$H_{GLPF}(u,v)=\exp\left[-\frac{D^2(u,v)}{2\sigma^2}\right],$$
and for High Pass filtering, one can use this:
$$H_{GHPF}(u,v)=1-\exp\left[-\frac{D^2(u,v)}{2\sigma^2}\right].$$
Here, $D(u,v)$ is the distance of the point $(u,v)$ from the origin of the frequency plane, and $\sigma$ (std. deviation) is a measure of the spread of the Gaussian curve.
One can apply this to an image simply by using a built-in module in skimage.
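For completeness, here is a minimal sketch of the recipe above carried out by hand with NumPy's FFT, building $H_{GLPF}$ explicitly on the coins image (sigma is chosen arbitrarily; data is the skimage module imported in the cell above):
img_c = data.coins().astype(np.float64)
m, n = img_c.shape
u = np.arange(m) - m // 2
v = np.arange(n) - n // 2
V, U = np.meshgrid(v, u)                         # centred frequency-plane coordinates
D2 = U**2 + V**2                                 # squared distance from the origin
sigma = 20.0
H = np.exp(-D2 / (2 * sigma**2))                 # Gaussian LPF transfer function
F = np.fft.fftshift(np.fft.fft2(img_c))          # centred spectrum
g = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
plt.figure(figsize=(10, 5))
plt.subplot(121); plt.imshow(img_c, cmap='gray'); plt.title('Input')
plt.subplot(122); plt.imshow(g, cmap='gray'); plt.title('Gaussian LPF via FFT')
plt.show()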
End of explanation
import cv2
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread('images/blurry_moon.jpg',0)
f = np.fft.fft2(img)
fshift = np.fft.fftshift(f)
magnitude_spectrum = 20*np.log(np.abs(fshift))
plt.figure(figsize=(10,5))
plt.subplot(121)
plt.imshow(img, cmap = 'gray')
plt.title('Input Image')
plt.subplot(122)
plt.imshow(magnitude_spectrum, cmap = 'gray')
plt.title('Magnitude Spectrum')
plt.show()
Explanation: Now we will see how to find Fourier Transform using Numpy. Numpy has an FFT package to do this. np.fft.fft2() provides us the frequency transform which will be a complex array. Its first argument is the input image, which is grayscale. Second argument is optional which decides the size of output array. If it is greater than size of input image, input image is padded with zeros before calculation of FFT. If it is less than input image, input image will be cropped. If no arguments passed, Output array size will be same as input.
Now, once you have got the result, the zero frequency component (DC component) will be at the top left corner. If you want to bring it to the center, you need to shift the result by $\frac{N}{2}$ in both directions. This is simply done by the function np.fft.fftshift() (it makes the spectrum easier to analyze). Once you have found the frequency transform, you can find the magnitude spectrum.
End of explanation
rows, cols = img.shape
crow, ccol = rows//2, cols//2
fshift[crow-30:crow+30, ccol-30:ccol+30] = 0
f_ishift = np.fft.ifftshift(fshift)
img_back = np.fft.ifft2(f_ishift)
img_back = np.abs(img_back)
plt.figure(figsize=(15,5))
plt.subplot(131),plt.imshow(img, cmap = 'gray')
plt.title('Input Image')
plt.subplot(132),plt.imshow(img_back, cmap = 'gray')
plt.title('Image after HPF')
plt.subplot(133),plt.imshow(img_back)
plt.title('Result in JET')
plt.show()
Explanation: You can see a whiter region at the center, showing that the low frequency content dominates.
So you have found the frequency transform. Now you can do some operations in the frequency domain, like high pass filtering, and reconstruct the image, i.e. find the inverse DFT. For that you simply remove the low frequencies by masking with a rectangular window of size $60\times 60$. Then apply the inverse shift using np.fft.ifftshift() so that the DC component again comes to the top-left corner. Then find the inverse FFT using the np.fft.ifft2() function. The result, again, will be a complex array; you can take its absolute value.
End of explanation
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('images/blurry_moon.jpg',0)
dft = cv2.dft(np.float32(img),flags = cv2.DFT_COMPLEX_OUTPUT)
dft_shift = np.fft.fftshift(dft)
magnitude_spectrum = 20*np.log(cv2.magnitude(dft_shift[:,:,0],dft_shift[:,:,1]))
plt.figure(figsize=(10,5))
plt.subplot(121)
plt.imshow(img, cmap = 'gray')
plt.title('Input Image')
plt.subplot(122)
plt.imshow(magnitude_spectrum, cmap = 'gray')
plt.title('Magnitude Spectrum')
plt.show()
Explanation: The result shows High Pass Filtering is an edge detection operation. This also shows that most of the image data is present in the Low frequency region of the spectrum. Anyway we have seen how to find DFT, IDFT etc in Numpy. Now let’s see how to do it in OpenCV.
If you closely watch the result, especially the last image in JET color, you can see some artifacts. It shows some ripple-like structures there, called ringing effects. They are caused by the rectangular window we used for masking: in the spatial domain that mask corresponds to a sinc shape, which causes this problem. So rectangular windows are not used for filtering; a better option is a Gaussian window.
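A minimal sketch of that suggestion, replacing the hard rectangular notch with a smooth Gaussian-shaped high pass window in the NumPy pipeline above (sigma is chosen arbitrarily; img and img_back are reused from the earlier cells):
rows_g, cols_g = img.shape
u = np.arange(rows_g) - rows_g // 2
v = np.arange(cols_g) - cols_g // 2
V, U = np.meshgrid(v, u)
sigma = 30.0
gauss_hp = 1.0 - np.exp(-(U**2 + V**2) / (2 * sigma**2))   # smooth high-pass window
fshift_hp = np.fft.fftshift(np.fft.fft2(img)) * gauss_hp
img_hp = np.abs(np.fft.ifft2(np.fft.ifftshift(fshift_hp)))
plt.figure(figsize=(10, 5))
plt.subplot(121); plt.imshow(img_back, cmap='gray'); plt.title('Rectangular-window HPF (ringing)')
plt.subplot(122); plt.imshow(img_hp, cmap='gray'); plt.title('Gaussian-window HPF (less ringing)')
plt.show()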
Fourier Transform in OpenCV
OpenCV provides the functions cv2.dft() and cv2.idft() for this. It returns the same result as previous, but with two channels. First channel will have the real part of the result and second channel will have the imaginary part of the result. The input image should be converted to np.float32 first. We will see how to do it.
End of explanation
rows, cols = img.shape
crow, ccol = rows//2, cols//2
# create a mask first, center square is 1, remaining all zeros
mask = np.zeros((rows,cols,2),np.uint8)
mask[crow-30:crow+30, ccol-30:ccol+30] = 1
# apply mask and inverse DFT
fshift = dft_shift*mask
f_ishift = np.fft.ifftshift(fshift)
img_back = cv2.idft(f_ishift)
img_back = cv2.magnitude(img_back[:,:,0],img_back[:,:,1])
plt.figure(figsize=(10,5))
plt.subplot(121),plt.imshow(img, cmap = 'gray')
plt.title('Input Image')
plt.subplot(122),plt.imshow(img_back, cmap = 'gray')
plt.title('LPF')
plt.show()
Explanation: So, now we have to do the inverse DFT. In the previous section we created an HPF; this time we will see how to remove the high frequency content of the image, i.e. we apply an LPF to the image. It actually blurs the image. For this, we first create a mask with a high value (1) at low frequencies (we pass the LF content) and 0 in the HF region.
End of explanation
import cv2
import numpy as np
from matplotlib import pyplot as plt
# simple averaging filter without scaling parameter
mean_filter = np.ones((3,3))
# creating a gaussian filter
x = cv2.getGaussianKernel(5,10)
gaussian = x*x.T
# different edge detecting filters
# scharr in x-direction
scharr = np.array([[-3, 0, 3],
[-10,0,10],
[-3, 0, 3]])
# sobel in x direction
sobel_x= np.array([[-1, 0, 1],
[-2, 0, 2],
[-1, 0, 1]])
# sobel in y direction
sobel_y= np.array([[-1,-2,-1],
[0, 0, 0],
[1, 2, 1]])
# laplacian
laplacian=np.array([[0, 1, 0],
[1,-4, 1],
[0, 1, 0]])
filters = [mean_filter, gaussian, laplacian, sobel_x, sobel_y, scharr]
filter_name = ['mean_filter', 'gaussian','laplacian', 'sobel_x', \
'sobel_y', 'scharr_x']
fft_filters = [np.fft.fft2(x) for x in filters]
fft_shift = [np.fft.fftshift(y) for y in fft_filters]
mag_spectrum = [np.log(np.abs(z)+1) for z in fft_shift]
plt.figure(figsize=(15,10))
for i in range(6):
plt.subplot(2,3,i+1)
plt.imshow(mag_spectrum[i],cmap = 'gray')
plt.title(filter_name[i]), plt.xticks([]), plt.yticks([])
plt.show()
Explanation: Note : As usual, OpenCV functions cv2.dft() and cv2.idft() are faster than Numpy counterparts. But Numpy functions are more user-friendly.
Why Laplacian is a High Pass Filter?
The question is, why Laplacian is a high pass filter? Why Sobel is a HPF? etc. And the simplest answer that can be given to it is in terms of Fourier Transform. Just take the fourier transform of Laplacian for some higher size of FFT. Analyze it:
End of explanation |
68 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Extract time series from a published figure
Scott Cole
29 July 2016
Summary
Sometimes we might be interested in obtaining a precise estimate of the results published in a figure. Instead of zooming in a ton on the figure and manually taking notes, here we use some simple image processing to extract the data that we're interested in.
Example Case
We're looking at a recent Neuron paper that highlighted a potential top-down projection from motor cortex (M2) to primary somatosensory cortex (S1). This interaction is summarized in the firing rate curves below
Step1: Step 3. Project 2-D binary image to 1-D time series
Step2: Step 4. Rescale in x- and y- variables
Step3: Step 5. Resample at desired sampling rate | Python Code:
# Load image and libraries
%matplotlib inline
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np
from scipy import misc
input_image = misc.imread('figure_processed.png')
# Convert input image from RGBA to binary
input_image = input_image - 255
input_image = np.mean(input_image,2)
binary_image = input_image[::-1,:]
binary_image[binary_image>0] = 1
Npixels_rate,Npixels_time = np.shape(binary_image)
# Visualize binary image
plt.figure(figsize=(8,5))
plt.pcolor(np.arange(Npixels_time),np.arange(Npixels_rate),binary_image, cmap=cm.bone)
plt.xlim((0,Npixels_time))
plt.ylim((0,Npixels_rate))
plt.xlabel('Time (pixels)',size=20)
plt.ylabel('Firing rate (pixels)',size=20)
Explanation: Extract time series from a published figure
Scott Cole
29 July 2016
Summary
Sometimes we might be interested in obtaining a precise estimate of the results published in a figure. Instead of zooming in a ton on the figure and manually taking notes, here we use some simple image processing to extract the data that we're interested in.
Example Case
We're looking at a recent Neuron paper that highlighted a potential top-down projection from motor cortex (M2) to primary somatosensory cortex (S1). This interaction is summarized in the firing rate curves below:
<img src="files/figure_raw.PNG">
If we were interested in modeling this interaction, then we may want to closely replicate the firing rate dynamics of S1. So in this notebook, we extract this time series from the figure above so that we can use it for future model fitting.
Step 1
In our favorite image editing software (or simply MS Paint), we can isolate the curve we are interested in as well as the scale bars (separated by whitespace)
<img src="files/figure_processed.png">
Step 2: Convert image to binary
End of explanation
# Extract the time series (not the scale bars) by starting in the first column
col_in_time_series = True
s1rate_pixels = []
col = 0
while col_in_time_series == True:
if len(np.where(binary_image[:,col]==1)[0]):
s1rate_pixels.append(np.mean(np.where(binary_image[:,col]==1)[0]))
else:
col_in_time_series = False
col += 1
s1rate_pixels = np.array(s1rate_pixels)
# Subtract baseline
s1rate_pixels = s1rate_pixels - np.min(s1rate_pixels)
# Visualize time series
plt.figure(figsize=(5,5))
plt.plot(s1rate_pixels,'k',linewidth=3)
plt.xlabel('Time (pixels)',size=20)
plt.ylabel('Firing rate (pixels)',size=20)
Explanation: Step 3. Project 2-D binary image to 1-D time series
End of explanation
# Convert rate from pixels to Hz
ratescale_col = 395 # Column in image containing the rate scale
rate_scale = 50 # Hz, scale in image
ratescale_Npixels = np.sum(binary_image[:,ratescale_col])
pixels_to_rate = rate_scale/ratescale_Npixels
s1rate = s1rate_pixels*pixels_to_rate
# Convert time from pixels to ms
timescale_row = np.argmax(np.mean(binary_image[:,400:],1)) # Row in image containing time scale
time_scale = 100 # ms, scale in image
timescale_Npixels = np.sum(binary_image[timescale_row,400:])
pixels_to_time = time_scale/timescale_Npixels
pixels = np.arange(len(s1rate_pixels))
t = pixels*pixels_to_time
# Visualize re-scaled time series
plt.figure(figsize=(5,5))
plt.plot(t, s1rate,'k',linewidth=3)
plt.xlabel('Time (ms)',size=20)
plt.ylabel('Firing rate (Hz)',size=20)
Explanation: Step 4. Rescale in x- and y- variables
End of explanation
# Interpolate time series to sample every 1ms
from scipy import interpolate
f = interpolate.interp1d(t, s1rate) # Set up interpolation
tmax = np.floor(t[-1])
t_ms = np.arange(tmax) # Desired time series, in ms
s1rate_ms = f(t_ms) # Perform interpolation
# Visualize re-scaled time series
plt.figure(figsize=(5,5))
plt.plot(t_ms, s1rate_ms,'k',linewidth=3)
plt.xlabel('Time (ms)',size=20)
plt.ylabel('Firing rate (Hz)',size=20)
# Save final time series
np.save('extracted_timeseries',s1rate_ms)
Explanation: Step 5. Resample at desired sampling rate
End of explanation |
69 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Initialize TensorFlow and GPU devices, import modules
Step2: Download raw images and annotation locally
Khartoum city images from the Spacenet Buildings v2 dataset are used. The task is to segment the instances of buildings in the images.
The SpaceNet Dataset by SpaceNet Partners is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Step11: Create a TensorFlow Datasets Builder
It automatically converts raw data into TFRecords and gives easy access through the tf.data.Dataset API.
See more at
Step13: Create an input pipeline
A create_dataset function that batches, shuffles and preprocesses the dataset according to given parameters.
Step14: Define training, test and validation splits
For simplicity the training data is randomly split into 3 parts of
size 70%, 20% and 10% respectively. You would probably need a more
complex splitting for the real data.
Step15: Take a look at the dataset
Are there any problems? One might notice that there are shifted and merged instances.
Step17: Define preprocessing
We're going to use 3 easy preprocessing techniques
Step18: Now taking a look at the preprocessed dataset
This step is sanity checking that our preprocessing does what we expect.
E.g. note the brightness adjustments.
Step19: Define a convolutional model.
Our model is going to consist of
* Feature extractor (a bunch of convolutions and downsamplings)
* Decoder (a bunch of upsamplings and convolutions, followed by a fully connected head for each pixel)
This modular architecture is common
Step20: Do some training!
Now let's create a validation dataset and do some training on GPU.
Step21: Looking at the training performance
Let's see the model predictions on a batch of training data.
As we can see, it is still not perfect and shows some patterns in the
problems it suffers from.
Step22: Looking at the validation performance
The validation performance shows us how good the model is in generalizing beyond
the training set. | Python Code:
@title License text
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License
Explanation: Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import tensorflow as tf
print("Tensorflow version " + tf.__version__)
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print(f'Found GPU at: {device_name}')
!pip install opencv-python
from concurrent import futures
import functools
import io
import os
import re
import tarfile
import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as plt_colors
import pandas as pd
import tensorflow_datasets as tfds
from tensorflow import keras
from tensorflow.keras import layers
from typing import Callable, Dict, Optional, Tuple
Features = Dict[str, tf.Tensor]
Explanation: Initialize TensorFlow and GPU devices, import modules
End of explanation
# Download a small part of the public Spacenet-v2 dataset.
# The dataset structure is documented at https://spacenet.ai/khartoum/
# NOTE: This cell takes a long time to execute. If colab is disconnected from
# a runtime, all data is lost. Consider storing the unpacked gzip archive in
# some external directory you can access (e.g. Google Cloud Storage bucket).
DATASET_TAR = "/tmp/AOI_5_Khartoum_train.tar.gz"
# Using tf.io.gfile allows to access AWS and GCS buckets directly from a colab.
tf.io.gfile.copy("s3://spacenet-dataset/spacenet/SN2_buildings/tarballs/SN2_buildings_train_AOI_5_Khartoum.tar.gz",
DATASET_TAR)
tf.io.gfile.mkdir("/tmp/spacenet")
with tarfile.open(DATASET_TAR) as tar_f:
tar_f.extractall("/tmp/spacenet")
tf.io.gfile.listdir("/tmp/spacenet/AOI_5_Khartoum_Train")
Explanation: Download raw images and annotation locally
Khartoum city images from the Spacenet Buildings v2 dataset are used. The task is to segment the instances of buildings in the images.
The SpaceNet Dataset by SpaceNet Partners is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
End of explanation
_DESCRIPTION = "Spacenet (Khartoum only)"
# The directory were the raw data lives.
_ROOT_DIR = "/tmp/spacenet/AOI_5_Khartoum_Train"
# Min/Max RGB value ranges over data from Khartoum.
# Needed for Spacenet dataset to convert pixel values into [0, 255] range.
# This can be pre-calculated in advance given access to all images or might not
# be needed for your dataset at all.
_GLOBAL_MIN = np.array([1.0, 1.0, 23.0])
_GLOBAL_MAX = np.array([1933.0, 2047.0, 1610.0])
IMAGE_HEIGHT, IMAGE_WIDTH = 650, 650
class SpacenetConfig(tfds.core.BuilderConfig):
"""BuilderConfig for spacenet."""
def __init__(self, **kwargs):
"""Constructs a SpacenetConfig.

Args:
  **kwargs: keyword arguments forwarded to super.
"""
# Version history:
super().__init__(version=tfds.core.Version("0.0.1"), **kwargs)
self.train_path = _ROOT_DIR
self.min_val = _GLOBAL_MIN
self.max_val = _GLOBAL_MAX
class Spacenet(tfds.core.GeneratorBasedBuilder):
"""Spacenet remote sensing dataset (Khartoum only)."""
BUILDER_CONFIGS = [
SpacenetConfig(name="Spacenet-Khartoum",
description=_DESCRIPTION)
]
def __init__(self, data_dir: Optional[str] = None, **kwargs):
# NOTE: use your GCS bucket path here to persist TFRecords across multiple
# runs.
data_dir = data_dir or "/tmp/spacenet/tensorflow_datasets"
super().__init__(data_dir=data_dir, **kwargs)
def _info(self) -> tfds.core.DatasetInfo:
return tfds.core.DatasetInfo(
builder=self,
description=_DESCRIPTION,
features=tfds.features.FeaturesDict({
"image":
tfds.features.Image(
shape=[IMAGE_HEIGHT, IMAGE_WIDTH, 3],
encoding_format="jpeg"),
"segmentation_mask":
tfds.features.Image(
shape=[IMAGE_HEIGHT, IMAGE_WIDTH, 1],
encoding_format="png"),
}))
def _split_generators(self, dl_manager):
"""Returns SplitGenerators."""
train_path = self.builder_config.train_path
return [
tfds.core.SplitGenerator(
name=tfds.Split.TRAIN,
gen_kwargs={"root_path": train_path},
),
]
def _generate_examples(self, root_path: str):
"""Yields examples from raw data."""
max_per_channel = self.builder_config.max_val
min_per_channel = self.builder_config.min_val
path = os.path.join(root_path, "RGB-PanSharpen")
buildings_path = os.path.join(root_path, "summaryData")
# Reading polygons coordinates and label them with respect to the img number
csv_files = tf.io.gfile.glob(buildings_path + "/*.csv")
with tf.io.gfile.GFile(csv_files[0], "r") as fid:
df = pd.read_csv(fid)
df["image"] = [x.split("_img")[-1] for x in df.ImageId]
files = tf.io.gfile.glob(path + "/*.tif")
for filename in files:
# Extract the image ID XXX from "RGB-PanSharpen_AOI_5_Khartoum_imgXXX.tif"
buildings_filename = filename.split("_")[-1].split(".")[0][3:]
yield filename, {
"image": _load_tif(filename, max_per_channel, min_per_channel),
"segmentation_mask": _load_mask(df, buildings_filename),
}
def get_poly_coordinate(poly: str) -> np.ndarray:
"""Returns polygons coordinates as numpy array."""
return np.array([
pp.split(" ") for pp in re.findall(r"[0-9.\-]+ [0-9.\-]+ [0-9.\-]+", poly)
],
dtype=np.float32)
def _load_mask(df: pd.core.series.Series,
buildings_filename: str) -> np.ndarray:
"""Returns a loaded segmentation mask image."""
mask = np.zeros((IMAGE_HEIGHT, IMAGE_WIDTH, 1), dtype=np.uint8)
buildings = df[df.image == buildings_filename]
for _, building in buildings.iterrows():
poly_coord = get_poly_coordinate(building.PolygonWKT_Pix)
if poly_coord.size > 0:
# Subindex polygon coordinate from [x, y, 0] to [x, y]
poly_coord = poly_coord[:, :2]
cv2.fillPoly(mask, [np.array(poly_coord, dtype=np.int32)], 1)
return mask.astype(np.uint8)
def _load_tif(filename: str,
max_per_channel: np.ndarray,
min_per_channel: np.ndarray) -> np.ndarray:
"""Loads a TIF file and returns it as a uint8 image array in [0, 255]."""
with tf.io.gfile.GFile(filename, "rb") as fid:
img = tfds.core.lazy_imports.skimage.external.tifffile.imread(
io.BytesIO(fid.read())).astype(np.float32)
img = (img - min_per_channel) / (max_per_channel - min_per_channel) * 255
img = np.clip(img, 0, 255).astype(np.uint8)
return img
# Convert raw data into TFRecord form and prepare for access.
tfds_builder = Spacenet()
tfds_builder.download_and_prepare()
Explanation: Create a TensorFlow Datasets Builder
It automatically converts raw data into TFRecords and gives easy access through the tf.data.Dataset API.
See more at:
https://www.tensorflow.org/api_docs/python/tf/data/Dataset
https://www.tensorflow.org/datasets
https://www.tensorflow.org/datasets/api_docs/python/tfds/core/GeneratorBasedBuilder
End of explanation
AUTOTUNE = tf.data.experimental.AUTOTUNE
def create_dataset(dataset_builder,
split: str,
preprocess_fn: Callable[[Features], Features],
batch_size: int,
num_epochs: Optional[int] = None,
shuffle: bool = False,
shuffle_buffer_size: int = 1000) -> tf.data.Dataset:
"""Returns a dataset to be used with TensorFlow2.
Args:
dataset_builder: `tfds.DatasetBuilder` object.
split: Name of the split to use. One of {'train', 'validation', 'test'}.
preprocess_fn: Callable for preprocessing.
batch_size: The batch size to use.
num_epochs: Number of epochs. See `tf.data.Dataset.repeat()`.
shuffle: Whether to shuffle examples in memory.
shuffle_buffer_size: Number of examples in the shuffle buffer.
Returns:
A `tf.data.Dataset` with the processed and batched features.
"""
read_config = tfds.ReadConfig(options=tf.data.Options())
ds = dataset_builder.as_dataset(
read_config=read_config,
split=split,
shuffle_files=shuffle)
ds = ds.repeat(num_epochs)
if shuffle:
ds = ds.shuffle(shuffle_buffer_size)
ds = ds.map(preprocess_fn, num_parallel_calls=AUTOTUNE)
ds = ds.batch(batch_size, drop_remainder=True)
return ds.prefetch(AUTOTUNE)
Explanation: Create an input pipeline
A create_dataset function that batches, shuffles and preprocesses the dataset according to given parameters.
End of explanation
TRAIN_SPLIT="train[:70%]"
VAL_SPLIT="train[70%:90%]"
TEST_SPLIT="train[90%:]"
Explanation: Define training, test and validation splits
For simplicity the training data is randomly split into 3 parts of
size 70%, 20% and 10% respectively. You would probably need a more
complex splitting for the real data.
End of explanation
BATCH_SIZE = 16
ds = create_dataset(Spacenet(),
split=TRAIN_SPLIT,
shuffle=False,
preprocess_fn = lambda x: x,
batch_size = BATCH_SIZE)
for batch in ds.take(1):
fig, axs = plt.subplots(nrows=BATCH_SIZE, ncols=2, figsize=(16, 8*BATCH_SIZE))
for i in range(BATCH_SIZE):
axs[i, 0].imshow(batch["image"][i])
axs[i, 1].imshow(batch["image"][i])
axs[i, 1].imshow(tf.squeeze(batch["segmentation_mask"][i]), cmap='gray', alpha=0.3)
Explanation: Take a look at the dataset
Are there any problems? One might notice that there are shifted and merged instances.
End of explanation
def preprocess_fn(features: Dict[str, tf.Tensor], is_training: bool) -> Tuple[tf.Tensor, tf.Tensor]:
"""Runs preprocessing and converts examples into a Keras compatible format."""
image = features["image"]
mask = features["segmentation_mask"]
# Rescale the image to [0..1]
image = tf.cast(image, tf.float32) / 255.0
# Resize the image and mask to (448, 448).
# Round resize mask values to nearest integer.
image = tf.image.resize(image, (448, 448))
mask = tf.cast(tf.image.resize(mask, (448, 448)), tf.int32)
# If training, apply random brightness change.
if is_training:
image = tf.image.random_brightness(image, max_delta=0.2)
return image, mask
train_preprocess_fn = functools.partial(preprocess_fn, is_training=True)
validation_preprocess_fn = functools.partial(preprocess_fn, is_training=False)
test_preprocess_fn = functools.partial(preprocess_fn, is_training=False)
Explanation: Define preprocessing
We're going to use 3 easy preprocessing techniques:
* Scaling pixels to [0, 1] range.
* Resizing an image to a fixed size of (448, 448).
* Randomly adjusting the brightness of the image (as satellite imagery might be taken with different illumination around the world)
We're going to skip the brightness adjustment for preprocessing
validation and test data, but keep scaling and resizing.
The preprocessing is done with a function that takes an example
emitted by our input tf.data.Dataset and returns the same example
preprocessed and in the Keras expected format (see https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit).
Consider the snapshotting API to save the preprocessed dataset on disk if preprocessing is the performance bottleneck:
https://www.tensorflow.org/api_docs/python/tf/data/experimental/snapshot
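As a rough illustration only (the exact API surface differs between TF versions, and the path below is made up), the transformation would be applied inside create_dataset right after the map step:
# Hypothetical sketch: cache the preprocessed examples on disk.
# ds = ds.apply(tf.data.experimental.snapshot("/tmp/spacenet_snapshot"))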
End of explanation
train_ds = create_dataset(Spacenet(),
split=TRAIN_SPLIT,
shuffle=True,
preprocess_fn=train_preprocess_fn,
batch_size=BATCH_SIZE)
for batch in train_ds.take(1):
fig, axs = plt.subplots(nrows=BATCH_SIZE, ncols=2, figsize=(16, 8*BATCH_SIZE))
for i in range(BATCH_SIZE):
axs[i, 0].imshow(batch[0][i])
axs[i, 1].imshow(tf.squeeze(batch[0][i]))
axs[i, 1].imshow(tf.squeeze(batch[1][i]), cmap='gray', alpha=0.3)
Explanation: Now taking a look at the preprocessed dataset
This step is sanity checking that our preprocessing does what we expect.
E.g. note the brightness adjustments.
End of explanation
# Code adapted from: https://keras.io/examples/vision/oxford_pets_image_segmentation/
# (Apache 2.0 License: https://github.com/keras-team/keras-io/blob/master/LICENSE)
# A simple encoder-decoder model for semantic segmentation.
# More on residual networks: https://arxiv.org/abs/1512.03385.
def get_model(img_size, num_classes):
inputs = keras.Input(shape=img_size + (3,))
### === Feature extractor ====
# This can be separately trained with a classification head for pre-training.
# Entry block
x = layers.Conv2D(32, 3, strides=2, padding="same")(inputs)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
previous_block_activation = x # Set aside residual
# Blocks 1, 2, 3 are identical apart from the feature depth.
for filters in [64, 128, 256]:
x = layers.Activation("relu")(x)
x = layers.SeparableConv2D(filters, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.SeparableConv2D(filters, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
# Downscaling
x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
# Project residual
residual = layers.Conv2D(filters, 1, strides=2, padding="same")(
previous_block_activation
)
x = layers.add([x, residual]) # Add back residual
previous_block_activation = x # Set aside next residual
### === Segmentation decoder ====
# Takes features generated by the feature extractor and produces
# Segmentation outputs.
previous_block_activation = x # Set aside residual
for filters in [256, 128, 64, 32]:
x = layers.Activation("relu")(x)
x = layers.Conv2DTranspose(filters, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.Conv2DTranspose(filters, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
# Upsampling
x = layers.UpSampling2D(2)(x)
# Project residual
residual = layers.UpSampling2D(2)(previous_block_activation)
residual = layers.Conv2D(filters, 1, padding="same")(residual)
x = layers.add([x, residual]) # Add back residual
previous_block_activation = x # Set aside next residual
# Add a per-pixel classification layer to assign segmentation classes.
outputs = layers.Conv2D(num_classes, 3, activation="softmax", padding="same")(x)
# Define the model
model = keras.Model(inputs, outputs)
return model
model = get_model( (448, 448), 2)
model.summary()
Explanation: Define a convolutional model.
Our model is going to consist of:
* A feature extractor (a stack of convolutions and downsamplings)
* A decoder (a stack of upsamplings and convolutions, followed by a fully connected head for each pixel)
This modular architecture is common: the feature extractor can easily be swapped for another one.
For classification, only the feature extractor part would be needed (with
a fully connected head for the class predictions).
End of explanation
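To make the "swap in a classification head" remark concrete, here is a hedged sketch (not part of the tutorial; layer sizes are illustrative) of how the same feature-extractor idea could feed a classifier instead of a segmentation decoder.
def get_classifier(img_size, num_classes):
    # Illustrative feature extractor followed by a global-pooling classification head.
    inputs = keras.Input(shape=img_size + (3,))
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
    for filters in [64, 128, 256]:
        x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)
# e.g. classifier = get_classifier((448, 448), num_classes=10)  # 10 is a placeholder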
val_ds = create_dataset(Spacenet(),
split=VAL_SPLIT,
shuffle=False,
preprocess_fn=validation_preprocess_fn,
batch_size=BATCH_SIZE)
with tf.device('/device:GPU:0'):
model = get_model( (448, 448), 2)
model.compile(optimizer='rmsprop', loss="sparse_categorical_crossentropy")
model.fit(train_ds, epochs=10, steps_per_epoch=200, validation_data=val_ds, validation_steps=4)
Explanation: Do some training!
Now let's create a validation dataset and do some training on GPU.
End of explanation
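An optional extension (not in the original): Keras callbacks can persist the best weights seen during training; the file name below is a placeholder.
checkpoint_cb = keras.callbacks.ModelCheckpoint(
    filepath="best_segmentation_model.h5",
    monitor="val_loss",
    save_best_only=True)
# model.fit(train_ds, epochs=10, steps_per_epoch=200,
#           validation_data=val_ds, validation_steps=4,
#           callbacks=[checkpoint_cb])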
for batch in train_ds.take(1):
predictions = model.predict(batch[0])
fig, axs = plt.subplots(nrows=BATCH_SIZE, ncols=4, figsize=(16, 4*BATCH_SIZE))
for i in range(BATCH_SIZE):
axs[i, 0].imshow(batch[0][i])
axs[i, 1].imshow(tf.squeeze(batch[1][i]))
axs[i, 2].imshow(tf.squeeze(predictions[i, :, :, 1] > 0.5))
axs[i, 3].imshow(tf.squeeze(predictions[i, :, :, 1]))
axs[0,0].set_title('Image')
axs[0,1].set_title('Ground truth')
axs[0,2].set_title('Segmentation @0.5')
axs[0,3].set_title('Segmentation score')
Explanation: Looking at the training performance
Let's see the model predictions on a batch of training data.
As we can see, the model is still not perfect, and the problems it
suffers from show some clear patterns.
End of explanation
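To complement the visual inspection with a number, here is a hedged sketch (not in the original) of a batch-level intersection-over-union score; it assumes the `batch` and `predictions` variables from the cell above.
def batch_iou(masks, scores, threshold=0.5):
    # masks: (B, H, W, 1) integer ground truth; scores: (B, H, W, 2) softmax output.
    preds = tf.cast(scores[..., 1] > threshold, tf.int32)
    truth = tf.cast(tf.squeeze(masks, axis=-1), tf.int32)
    intersection = tf.reduce_sum(truth * preds)
    union = tf.reduce_sum(tf.cast((truth + preds) > 0, tf.int32))
    return intersection / tf.maximum(union, 1)
print("batch IoU:", float(batch_iou(batch[1], predictions)))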
for batch in val_ds.take(1):
predictions = model.predict(batch[0])
fig, axs = plt.subplots(nrows=BATCH_SIZE, ncols=4, figsize=(16, 4*BATCH_SIZE))
for i in range(BATCH_SIZE):
axs[i, 0].imshow(batch[0][i])
axs[i, 1].imshow(tf.squeeze(batch[1][i]))
axs[i, 2].imshow(tf.squeeze(predictions[i, :, :, 1] > 0.5))
axs[i, 3].imshow(tf.squeeze(predictions[i, :, :, 1]))
axs[0,0].set_title('Image')
axs[0,1].set_title('Ground truth')
axs[0,2].set_title('Segmentation @0.5')
axs[0,3].set_title('Segmentation score')
Explanation: Looking at the validation performance
The validation performance shows us how well the model generalizes beyond
the training set.
End of explanation |
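As a final hedged step (not shown in the original), the held-out test split defined earlier could be evaluated the same way:
test_ds = create_dataset(Spacenet(),
                         split=TEST_SPLIT,
                         shuffle=False,
                         preprocess_fn=test_preprocess_fn,
                         batch_size=BATCH_SIZE)
model.evaluate(test_ds)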
70 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning the parameters of a prediction function and testing it on the same data is a methodological mistake
Step1: We can now quickly sample a training set while holding out 40% of the data for testing (evaluating) our classifier
Step2: When evaluating different settings (“hyperparameters”) for estimators, such as the C setting that must be manually set for an SVM, there is still a risk of overfitting on the test set because the parameters can be tweaked until the estimator performs optimally. This way, knowledge about the test set can “leak” into the model and evaluation metrics no longer report on generalization performance. To solve this problem, yet another part of the dataset can be held out as a so-called “validation set”
Step3: The mean score and the 95% confidence interval of the score estimate are hence given by
Step4: By default, the score computed at each CV iteration is the score method of the estimator. It is possible to change this by using the scoring parameter
Step5: In the case of the Iris dataset, the samples are balanced across target classes hence the accuracy and the F1-score are almost equal.
When the cv argument is an integer, cross_val_score uses the KFold or StratifiedKFold strategies by default, the latter being used if the estimator derives from ClassifierMixin.
It is also possible to use other cross validation strategies by passing a cross validation iterator instead, for instance
Step6: Data transformation with held out data
Just as it is important to test a predictor on data held-out from training, preprocessing (such as standardization, feature selection, etc.) and similar data transformations similarly should be learnt from a training set and applied to held-out data for prediction
Step7: A Pipeline makes it easier to compose estimators, providing this behavior under cross-validation
Step8: Obtaining predictions by cross-validation
The function cross_val_predict has a similar interface to cross_val_score, but returns, for each element in the input, the prediction that was obtained for that element when it was in the test set. Only cross-validation strategies that assign all elements to a test set exactly once can be used (otherwise, an exception is raised).
These predictions can then be used to evaluate the classifier
import numpy as np
from sklearn import cross_validation
from sklearn import datasets
from sklearn import svm
iris = datasets.load_iris()
iris.data.shape, iris.target.shape
# expected: ((150, 4), (150,))
Explanation: Learning the parameters of a prediction function and testing it on the same data is a methodological mistake: a model that would just repeat the labels of the samples that it has just seen would have a perfect score but would fail to predict anything useful on yet-unseen data. This situation is called overfitting. To avoid it, it is common practice when performing a (supervised) machine learning experiment to hold out part of the available data as a test set X_test, y_test. Note that the word “experiment” is not intended to denote academic use only, because even in commercial settings machine learning usually starts out experimentally.
In scikit-learn a random split into training and test sets can be quickly computed with the train_test_split helper function. Let’s load the iris data set to fit a linear support vector machine on it:
End of explanation
X_train, X_test, y_train, y_test = cross_validation.train_test_split(
    iris.data, iris.target, test_size=0.4, random_state=0)
X_train.shape, y_train.shape
# expected: ((90, 4), (90,))
X_test.shape, y_test.shape
# expected: ((60, 4), (60,))
clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)
clf.score(X_test, y_test)
Explanation: We can now quickly sample a training set while holding out 40% of the data for testing (evaluating) our classifier:
End of explanation
clf = svm.SVC(kernel='linear', C=1)
scores = cross_validation.cross_val_score(
clf, iris.data, iris.target, cv=5)
scores
Explanation: When evaluating different settings (“hyperparameters”) for estimators, such as the C setting that must be manually set for an SVM, there is still a risk of overfitting on the test set because the parameters can be tweaked until the estimator performs optimally. This way, knowledge about the test set can “leak” into the model and evaluation metrics no longer report on generalization performance. To solve this problem, yet another part of the dataset can be held out as a so-called “validation set”: training proceeds on the training set, after which evaluation is done on the validation set, and when the experiment seems to be successful, final evaluation can be done on the test set.
However, by partitioning the available data into three sets, we drastically reduce the number of samples which can be used for learning the model, and the results can depend on a particular random choice for the pair of (train, validation) sets.
A solution to this problem is a procedure called cross-validation (CV for short). A test set should still be held out for final evaluation, but the validation set is no longer needed when doing CV. In the basic approach, called k-fold CV, the training set is split into k smaller sets (other approaches are described below, but generally follow the same principles). The following procedure is followed for each of the k “folds”:
A model is trained using k-1 of the folds as training data;
the resulting model is validated on the remaining part of the data (i.e., it is used as a test set to compute a performance measure such as accuracy).
The performance measure reported by k-fold cross-validation is then the average of the values computed in the loop. This approach can be computationally expensive, but does not waste too much data (as is the case when fixing an arbitrary test set), which is a major advantage in problems such as inverse inference where the number of samples is very small.
Computing cross-validated metrics
The simplest way to use cross-validation is to call the cross_val_score helper function on the estimator and the dataset.
The following example demonstrates how to estimate the accuracy of a linear kernel support vector machine on the iris dataset by splitting the data, fitting a model and computing the score 5 consecutive times (with different splits each time):
End of explanation
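As an illustration (not from the original text), this is what the k folds look like for a toy case of 6 samples and 3 folds, using the same deprecated sklearn.cross_validation API the rest of this notebook uses:
from sklearn.cross_validation import KFold
for train_index, test_index in KFold(6, n_folds=3):
    print("train:", train_index, "test:", test_index)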
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
Explanation: The mean score and the 95% confidence interval of the score estimate are hence given by:
End of explanation
from sklearn import metrics
scores = cross_validation.cross_val_score(clf, iris.data, iris.target, cv=5, scoring='f1_weighted')
scores
Explanation: By default, the score computed at each CV iteration is the score method of the estimator. It is possible to change this by using the scoring parameter:
End of explanation
n_samples = iris.data.shape[0]
cv = cross_validation.ShuffleSplit(n_samples, n_iter=3, test_size=0.3, random_state=0)
cross_validation.cross_val_score(clf, iris.data, iris.target, cv=cv)
Explanation: In the case of the Iris dataset, the samples are balanced across target classes hence the accuracy and the F1-score are almost equal.
When the cv argument is an integer, cross_val_score uses the KFold or StratifiedKFold strategies by default, the latter being used if the estimator derives from ClassifierMixin.
It is also possible to use other cross validation strategies by passing a cross validation iterator instead, for instance:
End of explanation
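A hedged additional example (not in the original): the StratifiedKFold strategy mentioned above can also be passed explicitly, again using the old cross_validation module for consistency with this notebook:
skf = cross_validation.StratifiedKFold(iris.target, n_folds=5)
cross_validation.cross_val_score(clf, iris.data, iris.target, cv=skf)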
from sklearn import preprocessing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(iris.data, iris.target, test_size=0.4, random_state=0)
scaler = preprocessing.StandardScaler().fit(X_train)
X_train_transformed = scaler.transform(X_train)
clf = svm.SVC(C=1).fit(X_train_transformed, y_train)
X_test_transformed = scaler.transform(X_test)
clf.score(X_test_transformed, y_test)
Explanation: Data transformation with held out data
Just as it is important to test a predictor on data held-out from training, preprocessing (such as standardization, feature selection, etc.) and similar data transformations similarly should be learnt from a training set and applied to held-out data for prediction:
End of explanation
from sklearn.pipeline import make_pipeline
clf = make_pipeline(preprocessing.StandardScaler(), svm.SVC(C=1))
cross_validation.cross_val_score(clf, iris.data, iris.target, cv=cv)
Explanation: A Pipeline makes it easier to compose estimators, providing this behavior under cross-validation:
End of explanation
predicted = cross_validation.cross_val_predict(clf, iris.data, iris.target, cv=10)
metrics.accuracy_score(iris.target, predicted)
Explanation: Obtaining predictions by cross-validation
The function cross_val_predict has a similar interface to cross_val_score, but returns, for each element in the input, the prediction that was obtained for that element when it was in the test set. Only cross-validation strategies that assign all elements to a test set exactly once can be used (otherwise, an exception is raised).
These predictions can then be used to evaluate the classifier:
End of explanation |
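As a small hedged extra, the same cross-validated predictions can feed other metrics as well, for example a confusion matrix:
print(metrics.confusion_matrix(iris.target, predicted))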
71 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numerically solving differential equations with python
This is a brief description of what numerical integration is and a practical tutorial on how to do it in Python.
Software required
In order to run this notebook on your own computer, you need to install the following software
Step1: Why use scientific libraries?
The method we just used above is called the Euler method, and is the simplest one available. The problem is that, although it works reasonably well for the differential equation above, in many cases it doesn't perform very well. There are many ways to improve it
Step2: We get a much better approximation now, the two curves superimpose each other!
Now, what if we wanted to integrate a system of differential equations? Let's take the Lotka-Volterra equations
Step3: An interesting thing to do here is take a look at the phase space, that is, plot only the dependent variables, without respect to time
Step4: Congratulations
Step5: Now call the function you created above within interact. The arguments for the sliders that set each parameter of the equations are (min, max, step).
%matplotlib inline
from numpy import *
from matplotlib.pyplot import *
# time intervals
tt = arange(0, 10, 0.5)
# initial condition
xx = [0.1]
def f(x):
return x * (1.-x)
# loop over time
for t in tt[1:]:
xx.append(xx[-1] + 0.5 * f(xx[-1]))
# plotting
plot(tt, xx, '.-')
ta = arange(0, 10, 0.01)
plot(ta, 0.1 * exp(ta)/(1+0.1*(exp(ta)-1.)))
xlabel('t')
ylabel('x')
legend(['approximation', 'analytical solution'], loc='best',)
Explanation: Numerically solving differential equations with python
This is a brief description of what numerical integration is and a practical tutorial on how to do it in Python.
Software required
In order to run this notebook on your own computer, you need to install the following software:
python
numpy and scipy - python scientific libraries
matplotlib - a library for plotting
the ipython notebook (now renamed to Jupyter)
On Windows and Mac, we recommend installing the Anaconda distribution, which includes all of the above in a single package (among several other libraries), available at http://continuum.io/downloads.
On Linux, you can install everything using your distribution's preferred way, e.g.:
Debian/Ubuntu: sudo apt-get install python-numpy python-scipy python-matplotlib python-ipython-notebook
Fedora: sudo yum install python-numpy python-scipy python-matplotlib python-ipython-notebook
Arch: sudo pacman -S python-numpy python-scipy python-matplotlib jupyter
Code snippets shown here can also be copied into a pure text file with .py extension and ran outside the notebook (e.g., in an python or ipython shell).
From the web
Alternatively, you can use a service that runs notebooks on the cloud, e.g. SageMathCloud or wakari. It is possible to visualize publicly-available notebooks using http://nbviewer.ipython.org, but no computation can be performed (it just shows saved pre-calculated results).
How numerical integration works
Let's say we have a differential equation whose (analytical) solution we don't know how (or don't want) to derive. We can still find out what the solutions are through numerical integration. So, how does that work?
The idea is to approximate the solution at successive small time intervals, extrapolating the value of the derivative over each interval. For example, let's take the differential equation
$$ \frac{dx}{dt} = f(x) = x (1 - x) $$
with an initial value $x_0 = 0.1$ at an initial time $t=0$ (that is, $x(0) = 0.1$). At $t=0$, the derivative $\frac{dx}{dt}$ values $f(0.1) = 0.1 \times (1-0.1) = 0.09$. We pick a small interval step, say, $\Delta t = 0.5$, and assume that that value of the derivative is a good approximation over the whole interval from $t=0$ up to $t=0.5$. This means that in this time $x$ is going to increase by $\frac{dx}{dt} \times \Delta t = 0.09 \times 0.5 = 0.045$. So our approximate solution for $x$ at $t=0.5$ is $x(0) + 0.045 = 0.145$. We can then use this value of $x(0.5)$ to calculate the next point in time, $t=1$. We calculate the derivative at each step, multiply by the time step and add to the previous value of the solution, as in the table below:
| $t$ | $x$ | $\frac{dx}{dt}$ |
| ---:|---------:|----------:|
| 0 | 0.1 | 0.09 |
| 0.5 | 0.145 | 0.123975 |
| 1.0 | 0.206987 | 0.164144 |
| 1.5 | 0.289059 | 0.205504 |
| 2.0 | 0.391811 | 0.238295 |
Of course, this is terribly tedious to do by hand, so we can write a simple program to do it and plot the solution. Below we compare it to the known analytical solution of this differential equation (the logistic equation). Don't worry about the code just yet: there are better and simpler ways to do it!
End of explanation
# everything after a '#' is a comment
## we begin importing libraries we are going to use
# import all (*) functions from numpy library, eg array, arange etc.
from numpy import *
# import all (*) interactive plotting functions, eg plot, xlabel etc.
from matplotlib.pyplot import *
# import the numerical integrator we will use, odeint()
from scipy.integrate import odeint
# time steps: an array of values starting from 0 going up to (but
# excluding) 10, in steps of 0.01
t = arange(0, 10., 0.01)
# parameters
r = 2.
K = 10.
# initial condition
x0 = 0.1
# let's define the right-hand side of the differential equation
# It must be a function of the dependent variable (x) and of the
# time (t), even if time does not appear explicitly
# this is how you define a function:
def f(x, t, r, K):
# in python, there are no curling braces '{}' to start or
# end a function, nor any special keyword: the block is defined
# by leading spaces (usually 4)
# arithmetic is done the same as in other languages: + - * /
return r*x*(1-x/K)
# call the function that performs the integration
# the order of the arguments is as below: the derivative function,
# the initial condition, the points where we want the solution, and
# a list of parameters
x = odeint(f, x0, t, (r, K))
# plot the solution
plot(t, x)
xlabel('t') # define label of x-axis
ylabel('x') # and of y-axis
# plot analytical solution
# notice that `t` is an array: when you do any arithmetical operation
# with an array, it is the same as doing it for each element
plot(t, K * x0 * exp(r*t)/(K+x0*(exp(r*t)-1.)))
legend(['approximation', 'analytical solution'], loc='best') # draw legend
Explanation: Why use scientific libraries?
The method we just used above is called the Euler method, and is the simplest one available. The problem is that, although it works reasonably well for the differential equation above, in many cases it doesn't perform very well. There are many ways to improve it: in fact, there are many books entirely dedicated to this. Although many math or physics students do learn how to implement more sophisticated methods, the topic is really deep. Luckily, we can rely on the expertise of lots of people to come up with good algorithms that work well in most situations.
Then, how... ?
We are going to demonstrate how to use scientific libraries to integrate differential equations. Although the specific commands depend on the software, the general procedure is usually the same:
define the derivative function (the right hand side of the differential equation)
choose a time step or a sequence of times where you want the solution
provide the parameters and the initial condition
pass the function, time sequence, parameters and initial conditions to a computer routine that runs the integration.
A single equation
So, let's start with the same equation as above, the logistic equation, now with any parameters for growth rate and carrying capacity:
$$ \frac{dx}{dt} = f(x) = r x \left(1 - \frac{x}{K} \right) $$
with $r=2$, $K=10$ and $x(0) = 0.1$. We show how to integrate it using python below, introducing key language syntax as necessary.
End of explanation
# we didn't need to do this again: if the cell above was run already,
# the libraries are imported, but we repeat it here for convenience
from numpy import *
from matplotlib.pyplot import *
from scipy.integrate import odeint
t = arange(0, 50., 0.01)
# parameters
r = 2.
c = 0.5
e = 0.1
d = 1.
# initial condition: this is an array now!
x0 = array([1., 3.])
# the function still receives only `x`, but it will be an array, not a number
def LV(x, t, r, c, e, d):
# in python, arrays are numbered from 0, so the first element
# is x[0], the second is x[1]. The square brackets `[ ]` define a
# list, that is converted to an array using the function `array()`.
# Notice that the first entry corresponds to dV/dt and the second to dP/dt
return array([ r*x[0] - c * x[0] * x[1],
e * c * x[0] * x[1] - d * x[1] ])
# call the function that performs the integration
# the order of the arguments is as below: the derivative function,
# the initial condition, the points where we want the solution, and
# a list of parameters
x = odeint(LV, x0, t, (r, c, e, d))
# Now `x` is a 2-dimension array of size 5000 x 2 (5000 time steps by 2
# variables). We can check it like this:
print('shape of x:', x.shape)
# plot the solution
plot(t, x)
xlabel('t') # define label of x-axis
ylabel('populations') # and of y-axis
legend(['V', 'P'], loc='upper right')
Explanation: We get a much better approximation now, the two curves superimpose each other!
Now, what if we wanted to integrate a system of differential equations? Let's take the Lotka-Volterra equations:
$$ \begin{aligned}
\frac{dV}{dt} &= r V - c V P\
\frac{dP}{dt} &= ec V P - dP
\end{aligned}$$
In this case, the variable is no longer a number, but an array [V, P]. We do the same as before, but now x is going to be an array:
End of explanation
# `x[0,0]` is the first value (1st line, 1st column), `x[0,1]` is the value of
# the 1st line, 2nd column, which corresponds to the value of P at the initial
# time. We plot just this point first to know where we started:
plot(x[0,0], x[0,1], 'o')
print('Initial condition:', x[0])
# `x[0]` or (equivalently) x[0,:] is the first line, and `x[:,0]` is the first
# column. Notice the colon `:` stands for all the values of that axis. We are
# going to plot the second column (P) against the first (V):
plot(x[:,0], x[:,1])
xlabel('V')
ylabel('P')
# Let's calculate and plot another solution with a different initial condition
x2 = odeint(LV, [10., 4.], t, (r, c, e, d))
plot(x2[:,0], x2[:,1])
plot(x2[0,0], x2[0,1], 'o')
Explanation: An interesting thing to do here is take a look at the phase space, that is, plot only the dependent variables, without respect to time:
End of explanation
def LV_plot(r=2, c=0.5, e=0.5, d=1):
# Time range
t = arange(0, 50., 0.01)
# Initial conditions
x0 = array([1., 3.])
# The function to be integrated
def LV(x, t, r, c, e, d):
return array([ r*x[0] - c * x[0] * x[1],
e * c * x[0] * x[1] - d * x[1] ])
#integrating
y = odeint(LV, x0, t, (r, c, e, d))
# plotting: use the function show
show(plot(t, y))
Explanation: Congratulations: you are now ready to integrate any system of differential equations! (We hope generalizing the above to more than 2 equations won't be very challenging).
Exploring parameters with a simple interface
IPython’s widgets allow you to create user interface (UI) controls for exploring your code interactively. To use this resource you have to run the code below on a computer with all the software required (see first section), plus the library ipywidgets.
The interact function provides a quick way to use widgets to explore the parameter space of ODEs. To do this, first create a new function that integrates your ODEs and returns a plot. The arguments of this function should be the parameters that you want to explore.
End of explanation
#Libraries to use interact
from __future__ import print_function
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
#Now call the function to be integrated within interact
interact(LV_plot, r=(0,5.,0.1), c = (0,1,0.1), e = (0,1,0.1), d = (0,5, 0.1))
Explanation: Now call the function you created above within interact. The arguments for the sliders that set each parameter of the equations are (min, max, step).
End of explanation |
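A hedged variant (not in the original): the same controls can be built with explicit FloatSlider widgets when you want custom initial values or labels:
interact(LV_plot,
         r=widgets.FloatSlider(min=0, max=5, step=0.1, value=2, description='r'),
         c=widgets.FloatSlider(min=0, max=1, step=0.1, value=0.5, description='c'),
         e=widgets.FloatSlider(min=0, max=1, step=0.1, value=0.5, description='e'),
         d=widgets.FloatSlider(min=0, max=5, step=0.1, value=1, description='d'))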
72 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CORDEX ESGF submission form
General Information
Data to be submitted for ESGF data publication must follow the rules outlined in the Cordex Archive Design Document <br /> (https
Step1: Start submission procedure
The submission is based on this interactive document consisting of "cells" you can modify and then evaluate
evaluation of cells is done by selecting the cell and then press the keys "Shift" + "Enter"
<br /> please evaluate the following cell to initialize your form
Step2: please provide information on the contact person for this CORDEX data submission request
Type of submission
please specify the type of this data submission
Step3: Requested general information
Please provide model and institution info as well as an example of a file name
institution
The value of this field has to equal the value of the optional NetCDF attribute 'institution'
(long version) in the data files if the latter is used.
Step4: institute_id
The value of this field has to equal the value of the global NetCDF attribute 'institute_id'
in the data files and must equal the 4th directory level. It is needed before the publication
process is started in order that the value can be added to the relevant CORDEX list of CV1
if not yet there. Note that 'institute_id' has to be the first part of 'model_id'
Step5: model_id
The value of this field has to be the value of the global NetCDF attribute 'model_id'
in the data files. It is needed before the publication process is started in order that
the value can be added to the relevant CORDEX list of CV1 if not yet there.
Note that it must be composed by the 'institute_id' follwed by the RCM CORDEX model name,
separated by a dash. It is part of the file name and the directory structure.
Step6: experiment_id and time_period
Experiment has to equal the value of the global NetCDF attribute 'experiment_id'
in the data files. Time_period gives the period of data for which the publication
request is submitted. If you intend to submit data from multiple experiments you may
add one line for each additional experiment or send in additional publication request sheets.
Step7: Example file name
Please provide an example file name of a file in your data collection,
this name will be used to derive the other
Step8: information on the grid_mapping
the NetCDF/CF name of the data grid ('rotated_latitude_longitude', 'lambert_conformal_conic', etc.),
i.e. either that of the native model grid, or 'latitude_longitude' for the regular -XXi grids
Step9: Does the grid configuration exactly follow the specifications in ADD2 (Table 1)
in case the native grid is 'rotated_pole'? If not, comment on the differences; otherwise write 'yes' or 'N/A'. If the data is not delivered on the computational grid it has to be noted here as well.
Step10: Please provide information on quality check performed on the data you plan to submit
Please answer 'no', 'QC1', 'QC2-all', 'QC2-CORDEX', or 'other'.
'QC1' refers to the compliancy checker that can be downloaded at http
Step11: Terms of use
Please give the terms of use that shall be asigned to the data.
The options are 'unrestricted' and 'non-commercial only'.
For the full text 'Terms of Use' of CORDEX data refer to
http
Step12: Information on directory structure and data access path
(and other information needed for data transport and data publication)
If there is any directory structure deviation from the CORDEX standard please specify here.
Otherwise enter 'compliant'. Please note that deviations MAY imply that data can not be accepted.
Step13: Give the path where the data reside, for example
Step14: Exclude variable list
In each CORDEX file there may be only one variable which shall be published and searchable at the ESGF portal (target variable). In order to facilitate publication, all non-target variables are included in a list used by the publisher to avoid publication. A list of known non-target variables is [time, time_bnds, lon, lat, rlon ,rlat ,x ,y ,z ,height, plev, Lambert_Conformal, rotated_pole]. Please enter other variables into the left field if applicable (e.g. grid description variables), otherwise write 'N/A'.
Step15: Uniqueness of tracking_id and creation_date
In case any of your files is replacing a file already published, it must not have the same tracking_id nor
the same creation_date as the file it replaces.
Did you make sure that this is not the case?
Reply 'yes'; otherwise adapt the new file versions.
Step16: Variable list
list of variables submitted -- please remove the ones you do not provide
Step17: Check your submission form
Please evaluate the following cell to check your submission form.
In case of errors, please go up to the corresponden information cells and update your information accordingly.
Step18: Save your form
your form will be stored (the form name consists of your last name plus your keyword)
Step19: officially submit your form
the form will be submitted to the DKRZ team for processing
you will also receive a confirmation email with a reference to your online form for future modifications
from dkrz_forms import form_widgets
form_widgets.show_status('form-submission')
Explanation: CORDEX ESGF submission form
General Information
Data to be submitted for ESGF data publication must follow the rules outlined in the Cordex Archive Design Document <br /> (https://verc.enes.org/data/projects/documents/cordex-archive-design)
Thus file names have to follow the pattern:<br />
VariableName_Domain_GCMModelName_CMIP5ExperimentName_CMIP5EnsembleMember_RCMModelName_RCMVersionID_Frequency[_StartTime-EndTime].nc <br />
Example: tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc
The directory structure in which these files are stored follow the pattern:<br />
activity/product/Domain/Institution/
GCMModelName/CMIP5ExperimentName/CMIP5EnsembleMember/
RCMModelName/RCMVersionID/Frequency/VariableName <br />
Example: CORDEX/output/AFR-44/MPI-CSC/MPI-M-MPI-ESM-LR/rcp26/r1i1p1/MPI-CSC-REMO2009/v1/mon/tas/tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc
Notice: If your model is not yet registered, please contact [email protected]
specifying: Full institution name, Short institution name (acronym), Contact person and
e-mail, RCM Name (acronym), Terms of Use (unrestricted or non-commercial only) and the CORDEX domains in which you are interested.
At some CORDEX ESGF data centers a 'data submission form' is in use in order to improve initial information exchange between data providers and the data center. The form has to be filled before the publication process can be started. In case you have questions pleas contact the individual data centers:
o at DKRZ: [email protected]
o at SMHI: [email protected]
End of explanation
MY_LAST_NAME = "...."  # e.g. MY_LAST_NAME = "schulz"
#-------------------------------------------------
from dkrz_forms import form_handler, form_widgets, checks
form_info = form_widgets.check_pwd(MY_LAST_NAME)
sfg = form_handler.init_form(form_info)
sf = sfg.sub.entity_out.form_info  # use the object returned by init_form above
Explanation: Start submission procedure
The submission is based on this interactive document consisting of "cells" you can modify and then evaluate
evaluation of cells is done by selecting the cell and then press the keys "Shift" + "Enter"
<br /> please evaluate the following cell to initialize your form
End of explanation
sf.submission_type = "..." # example: sf.submission_type = "initial_version"
Explanation: please provide information on the contact person for this CORDEX data submission request
Type of submission
please specify the type of this data submission:
- "initial_version" for first submission of data
- "new _version" for a re-submission of previousliy submitted data
- "retract" for the request to retract previously submitted data
End of explanation
sf.institution = "..." # example: sf.institution = "Alfred Wegener Institute"
Explanation: Requested general information
Please provide model and institution info as well as an example of a file name
institution
The value of this field has to equal the value of the optional NetCDF attribute 'institution'
(long version) in the data files if the latter is used.
End of explanation
sf.institute_id = "..." # example: sf.institute_id = "AWI"
Explanation: institute_id
The value of this field has to equal the value of the global NetCDF attribute 'institute_id'
in the data files and must equal the 4th directory level. It is needed before the publication
process is started in order that the value can be added to the relevant CORDEX list of CV1
if not yet there. Note that 'institute_id' has to be the first part of 'model_id'
End of explanation
sf.model_id = "..." # example: sf.model_id = "AWI-HIRHAM5"
Explanation: model_id
The value of this field has to be the value of the global NetCDF attribute 'model_id'
in the data files. It is needed before the publication process is started in order that
the value can be added to the relevant CORDEX list of CV1 if not yet there.
Note that it must be composed by the 'institute_id' follwed by the RCM CORDEX model name,
separated by a dash. It is part of the file name and the directory structure.
End of explanation
sf.experiment_id = "..." # example: sf.experiment_id = "evaluation"
# ["value_a","value_b"] in case of multiple experiments
sf.time_period = "..." # example: sf.time_period = "197901-201412"
# ["time_period_a","time_period_b"] in case of multiple values
Explanation: experiment_id and time_period
Experiment has to equal the value of the global NetCDF attribute 'experiment_id'
in the data files. Time_period gives the period of data for which the publication
request is submitted. If you intend to submit data from multiple experiments you may
add one line for each additional experiment or send in additional publication request sheets.
End of explanation
sf.example_file_name = "..." # example: sf.example_file_name = "tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc"
# Please run this cell as it is to check your example file name structure
# to_do: implement submission_form_check_file function - output result (attributes + check_result)
form_handler.cordex_file_info(sf,sf.example_file_name)
Explanation: Example file name
Please provide an example file name of a file in your data collection,
this name will be used to derive the other
End of explanation
sf.grid_mapping_name = "..." # example: sf.grid_mapping_name = "rotated_latitude_longitude"
Explanation: information on the grid_mapping
the NetCDF/CF name of the data grid ('rotated_latitude_longitude', 'lambert_conformal_conic', etc.),
i.e. either that of the native model grid, or 'latitude_longitude' for the regular -XXi grids
End of explanation
sf.grid_as_specified_if_rotated_pole = "..." # example: sf.grid_as_specified_if_rotated_pole = "yes"
Explanation: Does the grid configuration exactly follow the specifications in ADD2 (Table 1)
in case the native grid is 'rotated_pole'? If not, comment on the differences; otherwise write 'yes' or 'N/A'. If the data is not delivered on the computational grid it has to be noted here as well.
End of explanation
sf.data_qc_status = "..." # example: sf.data_qc_status = "QC2-CORDEX"
sf.data_qc_comment = "..." # any comment of quality status of the files
Explanation: Please provide information on quality check performed on the data you plan to submit
Please answer 'no', 'QC1', 'QC2-all', 'QC2-CORDEX', or 'other'.
'QC1' refers to the compliancy checker that can be downloaded at http://cordex.dmi.dk.
'QC2' refers to the quality checker developed at DKRZ.
If your answer is 'other' give some informations.
End of explanation
sf.terms_of_use = "..." # example: sf.terms_of_use = "unrestricted"
Explanation: Terms of use
Please give the terms of use that shall be asigned to the data.
The options are 'unrestricted' and 'non-commercial only'.
For the full text 'Terms of Use' of CORDEX data refer to
http://cordex.dmi.dk/joomla/images/CORDEX/cordex_terms_of_use.pdf
End of explanation
sf.directory_structure = "..." # example: sf.directory_structure = "compliant"
Explanation: Information on directory structure and data access path
(and other information needed for data transport and data publication)
If there is any directory structure deviation from the CORDEX standard please specify here.
Otherwise enter 'compliant'. Please note that deviations MAY imply that data can not be accepted.
End of explanation
sf.data_path = "..." # example: sf.data_path = "mistral.dkrz.de:/mnt/lustre01/work/bm0021/k204016/CORDEX/archive/"
sf.data_information = "..." # ...any info where data can be accessed and transfered to the data center ... "
Explanation: Give the path where the data reside, for example:
blizzard.dkrz.de:/scratch/b/b364034/. If not applicable write N/A and give data access information in the data_information string
End of explanation
sf.exclude_variables_list = "..." # example: sf.exclude_variables_list=["bnds", "vertices"]
Explanation: Exclude variable list
In each CORDEX file there may be only one variable which shall be published and searchable at the ESGF portal (target variable). In order to facilitate publication, all non-target variables are included in a list used by the publisher to avoid publication. A list of known non-target variables is [time, time_bnds, lon, lat, rlon ,rlat ,x ,y ,z ,height, plev, Lambert_Conformal, rotated_pole]. Please enter other variables into the left field if applicable (e.g. grid description variables), otherwise write 'N/A'.
End of explanation
sf.uniqueness_of_tracking_id = "..." # example: sf.uniqueness_of_tracking_id = "yes"
Explanation: Uniqueness of tracking_id and creation_date
In case any of your files is replacing a file already published, it must not have the same tracking_id nor
the same creation_date as the file it replaces.
Did you make sure that this is not the case?
Reply 'yes'; otherwise adapt the new file versions.
End of explanation
sf.variable_list_day = [
"clh","clivi","cll","clm","clt","clwvi",
"evspsbl","evspsblpot",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","prc","prhmax","prsn","prw","ps","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","tauu","tauv","ta200","ta500","ta850","ts",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850","wsgsmax",
"zg200","zg500","zmla"
]
sf.variable_list_mon = [
"clt",
"evspsbl",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","ta200",
"ta500","ta850",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850",
"zg200","zg500"
]
sf.variable_list_sem = [
"clt",
"evspsbl",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","ta200","ta500","ta850",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850",
"zg200","zg500"
]
sf.variable_list_fx = [
"areacella",
"mrsofc",
"orog",
"rootd",
"sftgif","sftlf"
]
Explanation: Variable list
list of variables submitted -- please remove the ones you do not provide:
End of explanation
# simple consistency check report for your submission form
res = form_handler.check_submission(sf)
sf.sub.valid_submission = res['valid_submission']
form_handler.DictTable(res)
Explanation: Check your submission form
Please evaluate the following cell to check your submission form.
In case of errors, please go up to the corresponden information cells and update your information accordingly.
End of explanation
form_handler.save_form(sf,"..my comment..") # edit my comment info
#evaluate this cell if you want a reference to the saved form emailed to you
# (only available if you access this form via the DKRZ form hosting service)
form_handler.email_form_info()
# evaluate this cell if you want a reference (provided by email)
# (only available if you access this form via the DKRZ hosting service)
form_handler.email_form_info(sf)
Explanation: Save your form
your form will be stored (the form name consists of your last name plus your keyword)
End of explanation
form_handler.email_form_info(sf)
form_handler.form_submission(sf)
Explanation: officially submit your form
the form will be submitted to the DKRZ team for processing
you will also receive a confirmation email with a reference to your online form for future modifications
End of explanation |
73 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'mpi-esm-1-2-hr', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: DWD
Source ID: MPI-ESM-1-2-HR
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:57
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different than that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
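# Illustrative example only (not an actual model setting): a BOOLEAN property
# takes a plain Python bool, e.g. DOC.set_value(True)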
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
74 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img style='float: left' width="150px" src="http://secoora.org/sites/default/files/secoora_logo.png">
SECOORA Notebook 2
Sea Surface Temperature time-series model skill
This notebook calculates several skill scores for the SECOORA models weekly time-series saved by 00-fetch_data.ipynb.
Load configuration
Step1: Skill 1
Step2: Skill 2
Step3: Skill 3
Step4: Skill 4
Step5: Skill 4
Step6: Save scores
Step7: Normalized Taylor diagrams
The radius is the model standard deviation divided by the observation standard deviation,
the azimuth is the arc-cosine of the cross correlation (R), and the distance to the point (1, 0) on the
abscissa is the centered RMS. | Python Code:
import os
try:
import cPickle as pickle
except ImportError:
import pickle
run_name = '2014-07-07'
fname = os.path.join(run_name, 'config.pkl')
with open(fname, 'rb') as f:
config = pickle.load(f)
import numpy as np
from pandas import DataFrame, read_csv
from utilities import (load_secoora_ncs, to_html,
save_html, apply_skill)
fname = '{}-all_obs.csv'.format(run_name)
all_obs = read_csv(os.path.join(run_name, fname), index_col='name')
def rename_cols(df):
columns = dict()
for station in df.columns:
mask = all_obs['station'] == station
name = all_obs['station'][mask].index[0]
columns.update({station: name})
return df.rename(columns=columns)
Explanation: <img style='float: left' width="150px" src="http://secoora.org/sites/default/files/secoora_logo.png">
<br><br>
SECOORA Notebook 2
Sea Surface Temperature time-series model skill
This notebook calculates several skill scores for the
SECOORA models weekly time-series saved by 00-fetch_data.ipynb.
Load configuration
End of explanation
from utilities import mean_bias
dfs = load_secoora_ncs(run_name)
df = apply_skill(dfs, mean_bias, remove_mean=False, filter_tides=False)
df = rename_cols(df)
skill_score = dict(mean_bias=df.copy())
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'mean_bias.html'.format(run_name))
save_html(fname, html)
html
Explanation: Skill 1: Model Bias (or Mean Bias)
The bias skill compares the model mean temperature against the observations.
It is possible to introduce a Mean Bias in the model due to a mismatch of the
boundary forcing and the model interior.
$$ \text{MB} = \mathbf{\overline{m}} - \mathbf{\overline{o}}$$
End of explanation
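# Sketch of the mean-bias skill defined above (the real implementation lives in
# utilities.mean_bias and may differ, e.g. in its NaN handling):
def mean_bias_sketch(model, obs):
    return np.nanmean(model) - np.nanmean(obs)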
from utilities import rmse
dfs = load_secoora_ncs(run_name)
df = apply_skill(dfs, rmse, remove_mean=True, filter_tides=False)
df = rename_cols(df)
skill_score['rmse'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'rmse.html'.format(run_name))
save_html(fname, html)
html
Explanation: Skill 2: Central Root Mean Squared Error
Root Mean Squared Error of the deviations from the mean.
$$ \text{CRMS} = \sqrt{\overline{\left(\mathbf{m'} - \mathbf{o'}\right)^2}}$$
where: $\mathbf{m'} = \mathbf{m} - \mathbf{\overline{m}}$ and $\mathbf{o'} = \mathbf{o} - \mathbf{\overline{o}}$
End of explanation
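# Sketch of the centered RMSE defined above (utilities.rmse called with
# remove_mean=True presumably computes something equivalent):
def crmse_sketch(model, obs):
    m_prime = model - np.nanmean(model)
    o_prime = obs - np.nanmean(obs)
    return np.sqrt(np.nanmean((m_prime - o_prime) ** 2))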
from utilities import r2
dfs = load_secoora_ncs(run_name)
df = apply_skill(dfs, r2, remove_mean=True, filter_tides=False)
df = rename_cols(df)
skill_score['r2'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'r2.html'.format(run_name))
save_html(fname, html)
html
Explanation: Skill 3: R$^2$
https://en.wikipedia.org/wiki/Coefficient_of_determination
End of explanation
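# For reference, the standard coefficient of determination; utilities.r2 is
# assumed to compute something along these lines:
def r2_sketch(model, obs):
    ss_res = np.nansum((obs - model) ** 2)
    ss_tot = np.nansum((obs - np.nanmean(obs)) ** 2)
    return 1 - ss_res / ss_tot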
from utilities import r2
dfs = load_secoora_ncs(run_name)
df = apply_skill(dfs, r2, remove_mean=True, filter_tides=True)
df = rename_cols(df)
skill_score['low_pass_r2'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'low_pass_r2.html'.format(run_name))
save_html(fname, html)
html
Explanation: Skill 4: Low passed R$^2$
http://dx.doi.org/10.1175/1520-0450(1979)018%3C1016:LFIOAT%3E2.0.CO;2
https://github.com/ioos/secoora/issues/188
End of explanation
from utilities import r2
dfs = load_secoora_ncs(run_name)
# SABGOM dt = 3 hours.
dfs = dfs.swapaxes('items', 'major').resample('3H').swapaxes('items', 'major')
df = apply_skill(dfs, r2, remove_mean=True, filter_tides=False)
df = rename_cols(df)
skill_score['low_pass_resampled_3H_r2'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'low_pass_resampled_3H_r2.html'.format(run_name))
save_html(fname, html)
html
Explanation: Skill 4: Low passed and re-sampled (3H) R$^2$
https://github.com/ioos/secoora/issues/183
End of explanation
fname = os.path.join(run_name, 'skill_score.pkl')
with open(fname,'wb') as f:
pickle.dump(skill_score, f)
Explanation: Save scores
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from utilities.taylor_diagram import TaylorDiagram
def make_taylor(samples):
fig = plt.figure(figsize=(9, 9))
dia = TaylorDiagram(samples['std']['OBS_DATA'],
fig=fig,
label="Observation")
colors = plt.matplotlib.cm.jet(np.linspace(0, 1, len(samples)))
# Add samples to Taylor diagram.
samples.drop('OBS_DATA', inplace=True)
for model, row in samples.iterrows():
dia.add_sample(row['std'], row['corr'], marker='s', ls='',
label=model)
# Add RMS contours, and label them.
contours = dia.add_contours(colors='0.5')
plt.clabel(contours, inline=1, fontsize=10)
# Add a figure legend.
kw = dict(prop=dict(size='small'), loc='upper right')
leg = fig.legend(dia.samplePoints,
[p.get_label() for p in dia.samplePoints],
numpoints=1, **kw)
return fig
dfs = load_secoora_ncs(run_name)
# Bin and interpolate all series to 1 hour.
freq = '3H'
for station, df in list(dfs.iteritems()):
df = df.resample(freq).interpolate().dropna(axis=1)
if 'OBS_DATA' in df:
samples = DataFrame.from_dict(dict(std=df.std(),
corr=df.corr()['OBS_DATA']))
else:
continue
samples[samples < 0] = np.NaN
samples.dropna(inplace=True)
if len(samples) <= 2: # 1 obs 1 model.
continue
fig = make_taylor(samples)
fig.savefig(os.path.join(run_name, '{}.png'.format(station)))
plt.close(fig)
Explanation: Normalized Taylor diagrams
The radius is the model standard deviation divided by the observation standard deviation,
the azimuth is the arc-cosine of the cross correlation (R), and the distance to the point (1, 0) on the
abscissa is the centered RMS.
End of explanation |
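# Geometry behind the Taylor diagram (illustrative, not used by the plotting code above):
# the centered RMS difference E', the two standard deviations and the correlation R obey
# E'**2 = sigma_m**2 + sigma_o**2 - 2*sigma_m*sigma_o*R,
# so on the normalized diagram the distance from a point to (1, 0) equals E'/sigma_o.
def centered_rms_from_stats(sigma_m, sigma_o, corr):
    return np.sqrt(sigma_m ** 2 + sigma_o ** 2 - 2 * sigma_m * sigma_o * corr)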
75 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A new high performance and optimally cheap multivitamin. Saving the world one mixed integer program at a time
You have been tasked with developing a new superior multivitamin. You have been given free rein to select the ingredients and their relative amounts in the vitamin, but you have been asked to keep the cost of the raw materials as low as possible.
Further complicating things, you must abide by some restrictions in the formula.
The formula should not have more than 20% of any one vitamin
The formula must have at least 10% Iron, Zinc, or Magnesium
The formula must have at least 20% Vitamin A, Vitamin C, or Vitamin D
Each vitamin that is used must account for at least 5% of the total
The formula may contain as few as 5 vitamins but no more than 10 vitamins
If the formula contains Magnesium it must also contain Calcium and Zinc
The formula must have one of the B vitamins either B6 or B12 but not both
The possible Ingredients and their per mg cost
{{HTML(df.to_html(index=False)) }}
You might see a simple greedy strategy to solve this, but the vitamin B constraints and the Magnesium/Zinc/Calcium constraints make things a bit more complicated. Instead of trial and error we can try to create this vitamin by writing a fairly simple mixed integer linear program. First we need to come up with an expression that captures our ultimate goal, in this case, to minimize the cost of raw materials.
We can calculate the total cost of the formula by adding up the individual cost of each vitamin in it.
$$\text{Total Cost} = \text{ sum over all the vitamins (cost of vitamin) * (percent of vitamin) } $$
First let's load the data we have and organize it so we can easily grab the cost of a particular vitamin
Step1: Lets model the percent each vitamin is included as by a variable u. so if Vitamin C is included at 15% then u[Vitamin C] = .15. We can use pulp.LpVariable.dicts to return a dictionary of variables that are indexed by the vitamin names. This will make referring to variables later on very easy.
Step2: The next thing we need to do is create an instance of the pulp.LpProblem class. This creates a problem variable that will hold the cost and constraints and tells PuLP that we want to minimize our cost. We pass in a name and either LpMinimize or LpMaximize for either minimizing our cost or maximizing it.
Step3: Now we can define our cost. We will use pulp.lpSum instead of the regular python sum function for efficiency. In this problem it won't make a difference so feel free to experiment. This returns a pulp.LpAffineExpression which we will discuss in detail later on. To add our cost expression to the problem we literally just add it to the problem.
Step4: We can now start adding our constraints. The first few are very straightforward. Like with the cost, each constraint just gets added to the problem variable. The key difference is that the constraint will be an inequality.
1
Step5: 2
Step6: 3
Step7: These were fairly straightforward, the code for the constraints and the description of the constraint are almost identical. However the next constraint is a bit stranger.
"4. Each vitamin that is used must account for at least 5% of the total"
We basically need the u variables to be at least 5% or 0%. To model this we need to introduce some new variables that will track in simple yes/no manner if the vitamin is included in the final formula. Once we have these variables we will link them with the u variables somehow and satisfy the rest of the constraints. For now lets just look at the code.
Step8: 4
Step9: That looks a bit confusing but its actually a very common modeling technique we will explore in detail later on. These new b variables make the rest of the constraints really easy to model.
5
Step10: 6
Step11: 7
Step12: Finally, while it wasn't stated as a constraint, our u variables are supposed to be percents, so they must add up to 100%
Step13: We can now solve the problem and relax knowing we have made the best possible multivitamin (with our highly customized and formalized definition of "best") | Python Code:
df = pd.read_csv('vitamin_costs.csv')
vitamins = df.vitamin.values
vitamin_cost = df.set_index('vitamin').to_dict()['cost']
df.head()
Explanation: A new high performance and optimally cheap multivitamin. Saving the world one mixed integer program at a time
You have been tasked with developing a new superior multivitamin. You have been given free rein to select the ingredients and their relative amounts in the vitamin, but you have been asked to keep the cost of the raw materials as low as possible.
Further complicating things, you must abide by some restrictions in the formula.
The formula should not have more than 20% of any one vitamin
The formula must have at least 10% Iron, Zinc, or Magnesium
The formula must have at least 20% Vitamin A, Vitamin C, or Vitamin D
Each vitamin that is used must account for at least 5% of the total
The formula may contain as few as 5 vitamins but no more than 10 vitamins
If the formula contains Magnesium it must also contain Calcium and Zinc
The formula must have one of the B vitamins either B6 or B12 but not both
The possible Ingredients and their per mg cost
{{HTML(df.to_html(index=False)) }}
You might see a simple greedy strategy to solve this, but the vitamin B constraints and the Magnesium/Zinc/Calcium constraints make things a bit more complicated. Instead of trial and error we can try to create this vitamin by writing a fairly simple mixed integer linear program. First we need to come up with an expression that captures our ultimate goal, in this case, to minimize the cost of raw materials.
We can calculate the total cost of the formula by adding up the individual cost of each vitamin in it.
$$\text{Total Cost} = \text{ sum over all the vitamins (cost of vitamin) * (percent of vitamin) } $$
First let's load the data we have and organize it so we can easily grab the cost of a particular vitamin
End of explanation
u = LpVariable.dicts('percent', vitamins, 0, 1, LpContinuous)
Explanation: Lets model the percent each vitamin is included as by a variable u. so if Vitamin C is included at 15% then u[Vitamin C] = .15. We can use pulp.LpVariable.dicts to return a dictionary of variables that are indexed by the vitamin names. This will make referring to variables later on very easy.
End of explanation
prob = LpProblem('super-awesome-vitamin', LpMinimize)
Explanation: The next thing we need to do is create an instance of the pulp.LpProblem class. This creates a problem variable that will hold the cost and constraints and tells PuLP that we want to minimize our cost. We pass in a name and either LpMinimize or LpMaximize for either minimizing our cost or maximizing it.
End of explanation
cost = lpSum([ u[v]*vitamin_cost[v] for v in vitamins])
prob += cost
Explanation: Now we can define our cost. We will use pulp.lpSum instead of the regular python sum function for efficiency. In this problem it won't make a difference so feel free to experiment. This returns a pulp.LpAffineExpression which we will discuss in detail later on. To add our cost expression to the problem we literally just add it to the problem.
End of explanation
#no more than 20% of any one vitamin
for v in vitamins:
prob += u[v] <= .2
Explanation: We can now start adding our constraints. The first few are very straightforward. Like with the cost, each constraint just gets added to the problem variable. The key difference is that the constraint will be an inequality.
1: The formula should not have more than 20% of any one vitamin
End of explanation
#The formula must have at least 10% Iron, Zinc, or Magnesium
prob += u['Iron'] + u['Zinc'] + u['Magnesium'] >= .1
Explanation: 2: The formula must have at least 10% Iron, Zinc, or Magnesium
End of explanation
#The formula must have at least 20% Vitamin A, Vitamin C, or Vitamin D
prob += u['Vitamin A'] + u['Vitamin C'] + u['Vitamin D'] >= .2
Explanation: 3: The formula must have at least 20% Vitamin A, Vitamin C, or Vitamin D
End of explanation
#binary variable that captures if this vitamin will be used in the formula
b = LpVariable.dicts('use',vitamins,0,1, LpBinary)
Explanation: These were fairly straightforward, the code for the constraints and the description of the constraint are almost identical. However the next constraint is a bit stranger.
"4. Each vitamin that is used must account for at least 5% of the total"
We basically need the u variables to be at least 5% or 0%. To model this we need to introduce some new variables that will track in simple yes/no manner if the vitamin is included in the final formula. Once we have these variables we will link them with the u variables somehow and satisfy the rest of the constraints. For now lets just look at the code.
End of explanation
#Each vitamin that is used must account for at least 5% of the total
for v in vitamins:
#if we don't use this vitamin then the percent must be zero
prob += u[v] <= b[v]
#likewise if we do use this vitamin, then the percent must not be zero
prob += u[v] >= .05 -100*(1-b[v]) # > .05 or > .05 -100
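# How the two constraints above interact (illustrative walk-through, not extra model code):
#   if b[v] == 1: u[v] <= 1 is implied and u[v] >= .05, so a used vitamin is at least 5%
#   if b[v] == 0: u[v] <= 0 forces the percent to zero, and u[v] >= .05 - 100 is slack
# The -100 plays the role of a "big-M" constant that switches the lower bound off.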
Explanation: 4: Each vitamin that is used must account for at least 5% of the total
End of explanation
#The formula may contain as few as 5 vitamins but no more than 10 vitamins
prob += lpSum([ b[v] for v in vitamins]) >= 5
prob += lpSum([ b[v] for v in vitamins]) <= 10
Explanation: That looks a bit confusing but its actually a very common modeling technique we will explore in detail later on. These new b variables make the rest of the constraints really easy to model.
5: The formula may contain as few as 5 vitamins but no more than 10 vitamins
End of explanation
#If the formula contains Magnesium it must also contain Calcium and Zinc
prob += 2*b['Magnesium'] <= b['Calcium'] + b['Zinc']
Explanation: 6: If the formula contains Magnesium it must also contain Calcium and Zinc
End of explanation
#The formula must have one of the B vitamins either B6 or B12 but not both
prob += b['Vitamin B12'] + b['Vitamin B6'] == 1
Explanation: 7: The formula must have one of the B vitamins either B6 or B12 but not both
End of explanation
#the percentages must add up to 100
prob += lpSum([ u[v] for v in vitamins]) == 1.0
Explanation: Finally, while it wasn't stated as a constraint, our u variables are supposed to be percents, so they must add up to 100%
End of explanation
LpStatus[prob.solve()]
print('total cost: $%.2f'%prob.objective.value())
for v in vitamins:
if value(u[v]) >0:
print( '%s %.0f%% at unit cost of: $%.2f' %(v, 100*value(u[v]), vitamin_cost[v]))
Explanation: We can now solve the problem and relax knowing we have made the best possible multivitamin (with our highly customized and formalized definition of "best")
End of explanation |
76 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
datetime
Python has the datetime module to help deal with timestamps in your code. Time values are represented with the time class. Times have attributes for hour, minute, second, and microsecond. They can also include time zone information. The arguments to initialize a time instance are optional, but the default of 0 is unlikely to be what you want.
time
Let's take a look at how we can extract time information from the datetime module. We can create a timestamp by specifying datetime.time(hour,minute,second,microsecond)
Step1: Note
Step2: The min and max class attributes reflect the valid range of times in a single day.
Dates
datetime (as you might suspect) also allows us to work with date timestamps. Calendar date values are represented with the date class. Instances have attributes for year, month, and day. It is easy to create a date representing today’s date using the today() class method.
Let's see some examples
Step3: As with time, the range of date values supported can be determined using the min and max attributes.
Step4: Another way to create new date instances uses the replace() method of an existing date. For example, you can change the year, leaving the day and month alone.
Step5: Arithmetic
We can perform arithmetic on date objects to check for time differences. For example | Python Code:
import datetime
t = datetime.time(4, 20, 1)
# Let's show the different components
print t
print 'hour :', t.hour
print 'minute:', t.minute
print 'second:', t.second
print 'microsecond:', t.microsecond
print 'tzinfo:', t.tzinfo
Explanation: datetime
Python has the datetime module to help deal with timestamps in your code. Time values are represented with the time class. Times have attributes for hour, minute, second, and microsecond. They can also include time zone information. The arguments to initialize a time instance are optional, but the default of 0 is unlikely to be what you want.
time
Let's take a look at how we can extract time information from the datetime module. We can create a timestamp by specifying datetime.time(hour,minute,second,microsecond)
End of explanation
print 'Earliest :', datetime.time.min
print 'Latest :', datetime.time.max
print 'Resolution:', datetime.time.resolution
Explanation: Note: A time instance only holds values of time, and not a date associated with the time.
We can also check the min and max values a time of day can have in the module:
End of explanation
today = datetime.date.today()
print today
print 'ctime:', today.ctime()
print 'tuple:', today.timetuple()
print 'ordinal:', today.toordinal()
print 'Year:', today.year
print 'Mon :', today.month
print 'Day :', today.day
Explanation: The min and max class attributes reflect the valid range of times in a single day.
Dates
datetime (as you might suspect) also allows us to work with date timestamps. Calendar date values are represented with the date class. Instances have attributes for year, month, and day. It is easy to create a date representing today’s date using the today() class method.
Let's see some examples:
End of explanation
print 'Earliest :', datetime.date.min
print 'Latest :', datetime.date.max
print 'Resolution:', datetime.date.resolution
Explanation: As with time, the range of date values supported can be determined using the min and max attributes.
End of explanation
d1 = datetime.date(2015, 3, 11)
print 'd1:', d1
d2 = d1.replace(year=1990)
print 'd2:', d2
Explanation: Another way to create new date instances uses the replace() method of an existing date. For example, you can change the year, leaving the day and month alone.
End of explanation
d1
d2
d1-d2
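# The subtraction returns a datetime.timedelta object; for example:
delta = d1 - d2
print 'Difference in days:', delta.days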
Explanation: Arithmetic
We can perform arithmetic on date objects to check for time differences. For example:
End of explanation |
77 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facebook Graph API v2.5
This IPython Notebook documents some basic uses of the API provided by Facebook.
Step1: We take the temporary access token created in the Graph API Explorer. If we want to create a permanent one, we can follow the instructions in this StackOverflow question or in this other question. Alternatively, we can create an access token with our app id and secret
Step2: We will start with the simplest case: a GET request to obtain information about ourselves.
GET /me
The access token is sent as a parameter, together with the fields we want the query to return
Step3: Now we will publish a status. This request will return the id of the post, which will be published with "Only me" visibility
POST /me/feed
Step4: Then, we can delete a status directly, but only if we published it using the API | Python Code:
import json
import requests
BASE = "https://graph.facebook.com"
VERSION = "v2.5"
# If we want to print the response JSON
# in a more readable way, we can use
def print_pretty(jsonstring, indent=4, sort_keys=False):
print(json.dumps(jsonstring, indent=indent, sort_keys=sort_keys))
Explanation: Facebook Graph API v2.5
This IPython Notebook documents some basic uses of the API provided by Facebook.
End of explanation
with open("credentials") as f:
access_token = str(f.read().splitlines()[0])
Explanation: We take the temporary access token created in the Graph API Explorer. If we want to create a permanent one, we can follow the instructions in this StackOverflow question or in this other question. Alternatively, we can create an access token with our app id and secret:
End of explanation
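# A sketch of the alternative mentioned above (app id + secret). The endpoint is the
# standard Graph API OAuth one; APP_ID / APP_SECRET are placeholders, so this is left
# commented out:
# app_id, app_secret = "APP_ID", "APP_SECRET"
# token_url = "{}/oauth/access_token".format(BASE)
# resp = requests.get(token_url, params={"client_id": app_id,
#                                        "client_secret": app_secret,
#                                        "grant_type": "client_credentials"})
# app_access_token = resp.json()["access_token"]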
url = "{}/{}/me".format(BASE, VERSION)
params = {
"access_token": access_token,
"fields": ["id", "name"]
}
req = requests.get(url, params=params)
print_pretty(req.json())
my_id = req.json()["id"]
my_name = req.json()["name"]
Explanation: We will start with the simplest case: a GET request to obtain information about ourselves.
GET /me
The access token is sent as a parameter, together with the fields we want the query to return:
End of explanation
url = "{}/{}/me/feed".format(BASE, VERSION)
params = {
"access_token": access_token,
"message": "Este estado lo publiqué usando la API de Facebook :O"
}
req = requests.post(url, params=params)
status_id = req.json()["id"]
print("status_id = {}".format(status_id))
Explanation: Now we will publish a status. This request will return the id of the post, which will be published with "Only me" visibility
POST /me/feed
End of explanation
url = "{}/{}/{}".format(BASE, VERSION, status_id)
params = {
"access_token": access_token
}
req = requests.delete(url, params = params)
print_pretty(req.json())
Explanation: Then, we can delete a status directly, but only if we published it using the API:
DELETE /{status-id}
End of explanation |
78 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dictionary
train_direction = 0 south, 1 north
train_type = 0 Local, 1 Limited, 2 Bullet
train_
Step1: Cleanin' the data
Step2: Let's start getting some more detailed data from the trips as well
Step3: First, a word about the below code.
In the accompanying func.py there is a function called parse_train that returns a pandas.Series object. For some reason, when it's returned from a map or apply, it seems to get cast as a string. When applied to a list or a dataframe, this string gets turned into a single field in the row, OR divided into several rows, throwing the count off.
To get around this, I return the results of the parse_train function and then CAST it back to a series. This adds a weird 0 index, which I delete. I then fill in the plethora of NaNs and recombine it with the primary dataframe.
For context, previous iterations included
df['topic_train'].apply(lambda x:parse_train(x)) | Python Code:
# Import necessary libraries
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sys
import re
import random
import operator
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.cross_validation import KFold
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from func import *
# inline plot
%matplotlib inline
#%%javascript
#IPython.OutputArea.auto_scroll_threshold = 9999;
#%load 'data/raw-twt2016-01-26-14/21/09.csv'
df = pd.read_csv("data/raw-twt2016-01-26-14-21-09.csv",sep='\t',error_bad_lines=False)
# df.head(5)
print len(df.index)
list(df.columns.values)
Explanation: Dictionary
train_direction = 0 south, 1 north
train_type = 0 Local, 1 Limited, 2 Bullet
train_
End of explanation
# Fill in blank hashtags
df = df.where((pd.notnull(df)), np.nan)
df["hashtags"].fillna('')
# Add some date/time things
df["created_at"] = pd.to_datetime(df["created_at"], errors='coerce')
df["day_of_week"] = df["created_at"].apply(lambda x: x.weekday())
df["day_of_month"] = df["created_at"].apply(lambda x: x.day)
df["month"] = df["created_at"].apply(lambda x: x.month)
df["time_of_day"] = df["created_at"].apply(lambda x: get_time_of_day(x))
tod_Dummy = pd.get_dummies(df['time_of_day'])
print(tod_Dummy.head(5))
print tod_Dummy.count()
# del tod_Dummy['shutdown']
# df['in_reply_to_screen_name'].fillna(-1)
# df['in_reply_to_status_id'].fillna(-1)
# df['in_reply_to_user_id'].fillna(-1)
# df['retweeted_status'].fillna(-1)
# df['retweeted'].fillna(-1)
df['retweet_count'].fillna(np.nan)
df['favorite_count'].fillna(np.nan)
df["hashtags"].fillna(np.nan)
df["hashtags"] = df["hashtags"].apply(lambda x: str(x)[1:-1])
df.loc[df["hashtags"]=='a',"hashtags"] = ''
list(df.columns.values)
#Potentially remove, just cleaning for analysis sake
del df['Unnamed: 0']
del df['truncated']
del df['user_mentions']
del df['urls']
del df['source']
del df['lang']
del df['place']
del df['favorited']
del df['media']
del df['user']
# More likely to remove
del df['in_reply_to_status_id']
del df['in_reply_to_user_id']
del df['retweeted']
del df['retweeted_status']
len(df)
df.plot(x='created_at', y='day_of_week', kind='hist')
# fdf = df[["created_at","id","text","hashtags"]]
# str(fdf
Explanation: Cleanin' the data
End of explanation
# df['favorite_count'] = df['favorite_count'].astype(np.int64)
# df['retweet_count'] = df['retweet_count'].astype(np.int64)
# df['text'] = df['text'].astype(str)
# df['id'] = df['id'].astype(np.int64)
# df['day_of_week'] = df['day_of_week'].astype(np.int64)
# df['day_of_month'] = df['day_of_month'].astype(np.int64)
# df['month'] = df['month'].astype(np.int64)
# df['time_of_day'] = df['time_of_day'].astype(np.int64)
df.loc[df["hashtags"]=='on',"hashtags"] = np.nan
df.convert_objects(convert_numeric=True)
df.dtypes
len(df)
# Pull out potential trains from both hashtags and text
df["topic_train"] = df["text"].apply(lambda x: check_train_id(x))
df["topic_train"] = df["topic_train"].apply(lambda x: str(x)[1:-1])
df["topic_train"].fillna(np.nan)
df.head(5)
len(df)
# pd.pivot_table(
# df,values='values',
# index=['month'],
# columns=['day_of_week'])
Explanation: Let's start getting some more detailed data from the trips as well
End of explanation
ret = []
def parse_train(t):
# x should be a list with train codes eg 123
# {"id": "123", "type:" "bullet", direction: "south"}
try:
s = t['topic_train'].split(',')
except:
return t['topic_train']
if s[0] == '':
# print ""
return np.nan
for x in s:
# print "Iter",x[1:-1]
q = {}
# Check train id
# x = parse_train_id(x)
x = str(x)
x = re.sub('[^0-9]','', x)
if len(x)<3: continue
# 1 = north, 0 = south
q["t_northbound"] = 1 if int(x[2]) in [1,3,5,7,9] else 0
q['t_limited'] = 0
q['t_bullet'] = 0
if x[0] == '1':
q['t_limited'] = 0
elif x[0] == '2':
q["t_limited"] = 1 # limited
elif x[0] == '3':
q["t_bullet"] = 1 # bullet
else:
q['t_limited'] = 0
ret.append({'tweet_id': t['id'],
'timestamp': t['created_at'],
'train_id': int(x),
't_northbound':q["t_northbound"],
't_limited': q["t_limited"],
't_bullet': q['t_bullet']})
return s
# Let's then filter those train topics into details
# Btw this is jank as fuck.
# red = df[['id','created_at','topic_train']]
red = df.apply(lambda x:parse_train(x),axis=1)
print "red return:",len(red)
print "ret return,",len(ret)
#red
tf = pd.DataFrame(ret)
tf.head(5)
#events = pd.DataFrame([pd.Series(x) for x in red.apply(parse_train)])
#events
#del new.iloc[0]
#new.fillna('')
#df.combine_first(new)
print df.loc[df['topic_train'] != '',['topic_train','text']]
len(tf)
len(tf)
df = df.merge(tf, left_on='id',right_on='tweet_id',how='right')
df.groupby(['time_of_day','month']).mean()
list(df.columns.values)
df.plot(x='time_of_day',y='day_of_week',kind='hist')
# pd.scatter_matrix(df,alpha=0.1,figsize=(15,15), diagonal='hist');
df.groupby('month').describe()
train = df[df['train_id'] > 0]
train.groupby('day_of_week').count()
train.groupby('month').count()
train.groupby('time_of_day').count()
df.corr()
Explanation: First, a word about the below code.
In the accompanying func.py there is a function called parse_train that returns a pandas.Series object. For some reason, when it's returned from a map or apply, it seems to get cast as a string. When applied to a list or a dataframe, this string gets turned into a single field in the row, OR divided into several rows, throwing the count off.
To get around this, I return the results of the parse_train function and then CAST it back to a series. This adds a weird 0 index, which I delete. I then fill in the plethora of NaNs and recombine it with the primary dataframe.
For context, previous iterations included
df['topic_train'].apply(lambda x:parse_train(x))
which would return a pd.Series object with str versions of the returned pd.Series from parse_train
End of explanation |
79 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sklearn control overfit example
- Use the California housing dataset to show how to control overfitting by tuning the model parameters
Step1: Load data
Step2: Fit the best model
Step3: A better way. Use a model_selection tool | Python Code:
from __future__ import print_function
from sklearn import __version__ as sklearn_version
print('Sklearn version:', sklearn_version)
Explanation: Sklearn control overfit example
- Use the California housing dataset to show how to control overfitting by tuning the model parameters
End of explanation
from sklearn import datasets
all_data = datasets.california_housing.fetch_california_housing()
print(all_data.DESCR)
# Randomize, separate train & test and normalize
from sklearn.utils import shuffle
X, y = shuffle(all_data.data, all_data.target, random_state=0)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
# Normalize the data
from sklearn.preprocessing import Normalizer
normal = Normalizer()
X_train = normal.fit_transform(X_train)
X_test = normal.transform(X_test)
# Create a basic decision tree
from sklearn import tree
from sklearn.metrics import mean_absolute_error
clf = tree.DecisionTreeRegressor()
clf.fit(X_train, y_train)
mean_absolute_error(y_test, clf.predict(X_test))
# Define a function to evaluate the error over models with different max_depth
def acc(md):
'''
    Calculate error of a tree with a specific max_depth
    Parameters:
md: max depth of the tree
Returns:
Mean absolute error of the fitted tree
'''
clf = tree.DecisionTreeRegressor(max_depth=md)
clf.fit(X_train, y_train)
return mean_absolute_error(y_test, clf.predict(X_test))
# Evaluate from max_depth=1 to max_depth=30
index = []
accuracy = []
for i in range(1,30):
accuracy_step = acc(i)
index += [i]
accuracy += [accuracy_step]
print('Max depth - Error:', i, accuracy_step)
# Plot the error vs max_depth
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(index,accuracy)
Explanation: Load data
End of explanation
clf = tree.DecisionTreeRegressor(max_depth=9)
clf.fit(X_train, y_train)
mean_absolute_error(y_test, clf.predict(X_test))
# Plot the sctterplot
plt.scatter(y_test, clf.predict(X_test))
Explanation: Fit the best model
End of explanation
import numpy as np
from time import time
from scipy.stats import randint
from sklearn.model_selection import RandomizedSearchCV
# Define estimator. No parameters
clf = tree.DecisionTreeRegressor()
# specify parameters and distributions to sample from
param_dist = {"max_depth": randint(3, 20),
"min_samples_leaf": randint(5, 50)}
# Define randomized search
n_iter_search = 30
random_search = RandomizedSearchCV(clf, param_distributions=param_dist, n_iter=n_iter_search)
# Run the randomized search
start = time()
random_search.fit(X_train, y_train)
print("RandomizedSearchCV took %.2f seconds for %d candidates parameter settings." % ((time() - start), n_iter_search))
# Utility function to report best scores
def report(results, n_top=3):
for i in range(1, n_top + 1):
candidate = np.argmax(results['rank_test_score'] == i)
print("Model with rank: ", i)
print("Mean validation score: ", results['mean_test_score'][candidate])
print("Parameters: ", results['params'][candidate], "\n")
report(random_search.cv_results_)
# Build the tree with the optimal parametrization
clf = tree.DecisionTreeRegressor(max_depth=15, min_samples_leaf=28)
clf.fit(X_train, y_train)
print(mean_absolute_error(y_test, clf.predict(X_test)))
plt.scatter(y_test, clf.predict(X_test))
Explanation: A better way. Use a model_selection tool: RandomizedSearchCV
End of explanation |
80 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Regression
In this chapter, you will learn about simple linear regression, multiple linear regression, polynomial regression, and how linear regression models are trained by minimizing a cost function.
Step1: Based on the visualization above, we can see that there is a positive relationship between pizza diameter and price.
Training a Simple Linear Regression Model
We use scikit-learn to train our first model
Step2: The sklearn.linear_model.LinearRegression class is an estimator. Given a new value of the explanatory variable, estimators predict a response value. All estimators have the fit() and predict() methods
fit() is used to learn the parameters of a model, while predict() predicts the value of a response variable given an explanatory variable value.
The mathematical specification of a simple regression model is the following
Step3: Training a model to learn the values of the parameters for simple linear regression to create the best unbiased estimator is called ordinary least squares or linear least squares. To get a better idea of what "best unbiased estimator" is estimating in the first place, let's define what is needed to fit a model to training data.
Evaluating the Fitness of a Model with a Cost Function
How do we know whether the parameter values specified by a particular model are good or bad? In other words, how can we assess which parameters produce the best-fitting regression line?
Cost Function / Loss Function
The cost function or loss function provides a function that measures the error of a model. In order to find the best-fitting regression line, the goal is to minimize the sum of the differences between the predicted prices and the corresponding observed prices of the pizzas in the training set, also known as residuals or training errors.
We can visualize the residuals by drawing a vertical line between each observed price and the corresponding predicted price. Fortunately, matplotlib provides the vlines() function, which takes x, ymin, and ymax arguments to draw a vertical line on a plot. We re-create Figure 2, but with the residuals this time.
Step4: Now that we can clearly see the prediction error (in red) made by our model (in blue), it's important to quantify the overall error through a formal definition of residual sum of squares.
We do this by summing the squared residuals for all of our training examples (we square the residuals because we don't care whether the error is in the positive or negative direction).
$$RSS = \sum_{i=1}^n\big(y_{i} - f(x_{i})\big)^2 $$
Where
Step5: Now that we've defined the cost function, we can find the set of parameters that minimize the RSS or MSE.
Solving Ordinary Least Squares for Simple Linear Regression
Recall the equation for simple linear regression | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('ggplot')
# X is the explanatory variable data structure
X = [[6], [8], [10], [14], [18]]
# Y is the response variable data structure
y = [[7], [9], [13], [17.5], [18]]
# instantiate a pyplot figure object
plt.figure()
plt.title('Figure 1. Pizza price plotted against diameter')
plt.xlabel('Diameter in inches')
plt.ylabel('Price in dollars')
plt.plot(X, y, 'k.')
plt.axis([0, 25, 0, 25])
plt.grid(True)
plt.show()
Explanation: Linear Regression
In this chapter, you will learn:
Simple Linear Regression: A model that maps the relationship from a single explanatory variable to a continuous response variable with a linear model.
Multiple Linear Regression: A generalization of simple linear regression that maps the relationship from more than one explanatory variable to a continuous response variable.
Polynomial Regression: A special case of multiple linear regression that models nonlinear relationships.
Linear Regression Model Training: finding the parameter values for the linear regression model by minimizing a cost function.
Simple Linear Regression
Assumption: A linear relationship exists between the response variable and the explanatory variable. SLR models this relationship with a linear surface called a hyperplane. A hyperplane is a subspace that has one dimension less than the ambient space that contains it.
Task: Predict the price of a pizza
Explanatory Variable: Pizza size
Response Variable: Price
Data
| Training Instance | Diameter (inches) | Price (dollars) |
|-------------------|-------------------|-----------------|
| 1 | 6 | 7 |
| 2 | 8 | 9 |
| 3 | 10 | 13 |
| 4 | 14 | 17.5 |
| 5 | 18 | 18 |
Visualizing the Data
We can use matplotlib to visualize our training data
End of explanation
from sklearn.linear_model import LinearRegression
# Training Data
# X is the explanatory variable data structure
X = [[6], [8], [10], [14], [18]]
# Y is the response variable data structure
y = [[7], [9], [13], [17.5], [18]]
# Create the model
model = LinearRegression()
# Fit the model to the training data
model.fit(X, y)
# Make a prediction about how much a 12 inch pizza should cost
test_X = [12]
prediction = model.predict(test_X)
print 'A 12\" pizza should cost: $%.2f' % prediction[0]
Explanation: Based on the visualization above, we can see that there is a positive relationship between pizza diameter and price.
Training a Simple Linear Regression Model
We use scikit-learn to train our first model
End of explanation
# instantiate a pyplot figure object
plt.figure()
# re-plot a scatter plot
plt.title('Figure 2. Pizza price plotted against diameter')
plt.xlabel('Diameter in inches')
plt.ylabel('Price in dollars')
plt.plot(X, y, 'k.')
plt.axis([0, 25, 0, 25])
plt.grid(True)
# create the line of fit
line_X = [[i] for i in np.arange(0, 25)]
line_y = model.predict(line_X)
plt.plot(line_X, line_y, '-b')
plt.show()
Explanation: The sklearn.linear_model.LinearRegression class is an estimator. Given a new value of the explanatory variable, estimators predict a response value. All estimators have the fit() and predict() methods
fit() is used to learn the parameters of a model, while predict() predicts the value of a response variable given an explanatory variable value.
The mathematical specification of a simple regression model is the following:
$${y} = \alpha+ \beta{x}$$
Where:
- ${y}$: The predicted value of the response variable. In this case, the price of the pizza.
- ${x}$: The explanatory variable. In this case, the diameter of the pizza in inches.
- $\alpha$: The y-intercept term.
- $\beta$: The coefficient term (i.e. the slope of the line).
End of explanation
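To connect the fitted estimator back to the equation above, the learned intercept (alpha) and coefficient (beta) can be read straight off the model. This is a small optional check using standard scikit-learn attributes; the exact values depend on the training data:
# intercept_ holds the learned alpha, coef_ holds the learned beta
print 'alpha (intercept):', model.intercept_
print 'beta (coefficient):', model.coef_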
# instantiate a pyplot figure object
plt.figure()
# re-plot a scatter plot
plt.title('Figure 3. Pizza price plotted against diameter')
plt.xlabel('Diameter in inches')
plt.ylabel('Price in dollars')
plt.plot(X, y, 'k.')
plt.axis([0, 25, 0, 25])
plt.grid(True)
# create the line of fit
line_X = [[i] for i in np.arange(0, 25)]
line_y = model.predict(line_X)
plt.plot(line_X, line_y, '-b')
# create residual lines
for x_i, y_i in zip(X, y):
plt.vlines(x_i[0], y_i[0], model.predict(x_i), colors='r')
plt.show()
Explanation: Training a model to learn the values of the parameters for simple linear regression to create the best unbiased estimator is called ordinary least squares or linear least squares. To get a better idea of what "best unbiased estimator" is estimating in the first place, let's define what is needed to fit a model to training data.
Evaluating the Fitness of a Model with a Cost Function
How do we know whether the parameter values specified by a particular model are doing well or poorly? In other words, how can we assess which parameters produced the best-fitting regression line?
Cost Function / Loss Function
The cost function or loss function provides a function that measures the error of a model. In order to find the best-fitting regression line, the goal is to minimize the sum of the differences between the predicted prices and the corresponding observed prices of the pizzas in the training set, also known as residuals or training errors.
We can visualize the residuals by drawing a vertical line from the observed price to the predicted price. Fortunately, matplotlib provides the vlines() function, which takes x, ymin, and ymax arguments to draw a vertical line on a plot. We re-create Figure 2, but with the residuals this time.
End of explanation
import numpy as np
rss = np.sum((model.predict(X) - y) ** 2)
mse = np.mean((model.predict(X) - y) ** 2)
print 'Residual sum of squares: %.2f' % rss
print 'Mean squared error: %.2f' % mse
Explanation: Now that we can clearly see the prediction error (in red) made by our model (in blue), it's important to quantify the overall error through a formal definition of residual sum of squares.
We do this by summing the squared residuals for all of our training examples (we square the residuals because we don't care whether the error is in the positive or negative direction).
$$RSS = \sum_{i=1}^n\big(y_{i} - f(x_{i})\big)^2 $$
Where:
- $y_{i}$ is the observed value
- $f(x_{i})$ is the predicted value.
A related measure of model error is mean squared error, which is simply the mean of the squared residuals:
$$MSE = \dfrac{1}{n}\sum_{i=1}^n\big(y_{i} - f(x_{i})\big)^2 $$
Let's go ahead and implement RSS and MSE using numpy:
End of explanation
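As an optional cross-check (not part of the original notebook), scikit-learn provides the same metric in sklearn.metrics, and it should agree with the hand-computed MSE above:
from sklearn.metrics import mean_squared_error
# Should match the manually computed mse value
print 'MSE via sklearn: %.2f' % mean_squared_error(y, model.predict(X))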
from __future__ import division
# calculate the mean
n = len(X)
xbar = sum([x[0] for x in X]) / n
# calculate the variance
variance = sum([(x[0] - xbar) ** 2 for x in X]) / (n - 1)
print 'Variance: %.2f' % variance
Explanation: Now that we've defined the cost function, we can find the set of parameters that minimize the RSS or MSE.
Solving Ordinary Least Squares for Simple Linear Regression
Recall the equation for simple linear regression:
$$y = \alpha + \beta{x}$$
Goal:
Solve the values of $\beta$ and $\alpha$ such that they minimize the RSS cost function.
Solving for $\beta$
Step 1: Calculate the variance of $x$
Variance is a summary statistic that represents how spread apart a set of values is. Intuitively, the variance of set A = {0, 5, 10, 15, 20} is greater than the variance of set B = {5, 5, 5, 5, 5}. The formal definition of variance is:
$$var(x) = \dfrac{\sum_{i=1}^{n}\big(x_{i} - \bar{x}\big)^2}{n - 1}$$
Where:
- $\bar{x}$ is the mean of $x$
- $x_{i}$ is the value of $x$ for the $i^{th}$ training instance
- $n$ is the number of training instances
Let's implement variance in Python.
End of explanation |
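As a sanity check on the hand-rolled calculation, NumPy's var() gives the same sample variance when told to use the n - 1 denominator via ddof=1:
# ddof=1 switches numpy from the population variance (n) to the sample variance (n - 1)
print 'Variance via numpy: %.2f' % np.var([x[0] for x in X], ddof=1)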
81 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python 101 - Session 2
Built-in Data Structures
Intro to PythonWin
Control Flow Statements
* Materials
Whirlwind Tour of Python, Jake VanderPlas (2016)
Book
Step1: the range() function
Step2: for loop with the range() function
Step3: while loops
while i < 10 | Python Code:
#ForLoopExample.py
# This example uses a for loop to iterate through each item in
# the "fruit" list, updating the value of the "fruit" variable and
# executing whatever lines are indented under the for statement
#Create a tuple of fruit names
fruitList = ("apples","oranges","kiwi","grapes","blueberries")
# Loop through each item in the tuple and execute
# each line that is indented under the for loop
for fruit in fruitList:
print "I like to eat " + fruit
# Dedented lines are run after the loop completes
print "\nI like ice cream too..."
Explanation: Python 101 - Session 2
Built-in Data Structures
Intro to PythonWin
Control Flow Statements
* Materials
Whirlwind Tour of Python, Jake VanderPlas (2016)
Book: http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp
Free PDF: http://www.oreilly.com/programming/free/files/a-whirlwind-tour-of-python.pdf
Interactive:
http://nbviewer.jupyter.org/github/env859/WhirlwindTourOfPython/blob/master/Index.ipynb
GitHub: https://github.com/env859/WhirlwindTourOfPython
* Prep
Ensure that PythonWin is installed on your virtual machine; See ArcGIS Desktop Post-Install
Update your WhirlwindTourOfPython repository or copy the Scripts files to your local workspace
Built-In Data Structures
Source
Lists
Tuples
Dictionaries
Sets
Lists
Ordered and mutable collection
Resembles a vector
Items in a list can be different data types
Created using square brackets <br>
myList = [1,2,"Apple"]
List arithmetic
Indexing a slicing to get items - zero based
Functions to manipulate: use tab-complete or help
Tuples
Like a list, but immutable
Created using parentheses, or not:<br> myTuple = (1, 2, "Apple") or myTuple = 1,2,"Apple"
Cannot add, remove, or rearrange items.
"Modify" by creating a new tuple: <br> myTuple += (4, 5, True)
Dictionaries
A collection of unordered objects, like a list, but items referred to by a 'key', not an index
Created using curly braces with key/value pairs: playerCount = {'Volleyball': 6, 'Baseball': 9}
Items retrieved by its key:<br> x = playerCount['Volleyball']
Items can be updated:<br> playerCount['Volleyball'] = 2
New items can be added:<br> playerCount['Soccer'] = 11
Dictionaries have functions...
Sets
Collection of unordered, unique objects
Created using curly braces<br> primes = {2, 3, 5, 7}
Can perform set functions: union, intersection, difference, symmetric difference...
Introduction to PythonWin
See Getting-Started-with-PythonWin.html
What is an Integrated Development Environment (IDE)?
What's what in PythonWin
Customizing PythonWin
Editing, debugging, and saving scripts
Control Flow Statements
Source
Conditional Statements
for loops
while loops
break and continue
for loops
for x in myList:
Repeats indented code block for each item in a collection (list, tuple, set)
The variable after for is available in the code block and changes with each iteration of the loop
Loop ends when the last item is processed, and code resumes to next dedented line
Use the range() function to create a simple list to loop a set number of times
End of explanation
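Before moving on to range(), here is a short recap script covering the four built-in data structures described above; the values are arbitrary and it follows the same Python 2 print style as the other example scripts:
#DataStructuresExample.py (hypothetical file name)
# list: ordered and mutable
myList = [1, 2, "Apple"]
myList.append(3.5)
# tuple: ordered but immutable; "modify" it by building a new tuple
myTuple = (1, 2, "Apple")
myTuple += (4, 5, True)
# dictionary: items are looked up by key, not by position
playerCount = {'Volleyball': 6, 'Baseball': 9}
playerCount['Soccer'] = 11
# set: unordered collection of unique items
primes = {2, 3, 5, 7}
odds = {1, 3, 5, 7, 9}
print primes & odds   # intersection of the two sets: set([3, 5, 7])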
print range(10)
print range(10,100)
print range(10,100,20)
Explanation: the range() function
End of explanation
#RangeFunctionExample.py
# This example demonstrates the range function for generating
# a sequence of values that can be used in a for loop.
pi = 3.1415
for r in range(0,100,20):
area = (r ** 2) * pi
print "The area of a circle with radius ", r, "is ", area
Explanation: for loop with the range() function
End of explanation
#WhileLoopExample.py
# This example demonstrates how a while loop is used. Here, we
# calculate the area of several circle with a radius 'r'. We loop
# through gradually larger values of r until the area of the circle
# exceeds 1000.
pi = 3.1415
r = 1
area = (r ** 2) * pi
while area < 1000:
print r, area # Indentation indicates what's run in the loop
r = r + 1 # The variable that gets evaluated must change in the
area = (r ** 2) * pi # loop otherwise you'll create an infinite loop!
print "The while loop is done" # Dedented lines run after the loop completes
Explanation: while loops
while i < 10:
Repeats indented code until the expression is no longer true
Value in the expression must change; possibility for an infinite loop
End of explanation |
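The session outline also lists break and continue, which the excerpted scripts do not demonstrate; a minimal Python 2 sketch (the file name is hypothetical):
#BreakContinueExample.py
# break exits a loop immediately; continue jumps to the next iteration
for n in range(10):
    if n == 5:
        break       # stop the loop entirely once n reaches 5
    if n % 2 == 0:
        continue    # skip even numbers and move on to the next n
    print n         # prints 1 then 3
print "The loop is done"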
82 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting two variables as lines on the same graph
So you've got 2 variables and you want to plot them on the same chart? How do you do it in ggplot? Well, good news: it's super easy to do with ggplot!
We're going to use a subset of the meat dataset for this example. We're going to use pandas to switch our data from "wide" to "long" format.
Step1: Now we'll setup our aesthetics so date is the x-axis value, variable is the color of each line and value is the y-axis value. | Python Code:
meat_subset = meat[['date', 'beef', 'pork']]
df = pd.melt(meat_subset, id_vars=['date'])
df.head()
Explanation: Plotting two variables as lines on the same graph
So you've got 2 variables and you want to plot them on the same chart? How do you do it in ggplot? Well good news is it's super easy to do with ggplot!
We're going to use a subset of the meat dataset for this example. We're going to use pandas to switch our data from "wide" to "long" format.
End of explanation
ggplot(df, aes(x='date', y='value', color='variable')) + geom_line()
Explanation: Now we'll setup our aesthetics so date is the x-axis value, variable is the color of each line and value is the y-axis value.
End of explanation |
83 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-2', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: UHH
Source ID: SANDBOX-2
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
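For reference, a filled-in version of the author cell would look like the call below; the name and email are placeholders only, not actual document authors:
# Placeholder values - replace with the real author before publishing
DOC.set_author("Jane Doe", "jane.doe@example.org")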
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but fluxes are computed according to an assumed distribution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
84 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning from Data
Decision Trees are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.
Paper Exercise
Let us start with a simple exercise in classifying credit risk.
We have the following features in our dataset.
- Risk - ordinal (label)
- Income - continuous
- Credit History - ordinal
We want to find out the rules that would help us classify the three risk type - This is a paper and pen exercise first!!
Step1: Plotting the Data
Step2: Preparing Data
We have one ordinal variable (Risk) and one nominal variable (Credit History)
Let's use a dictionary to encode each of these variables
Step3: Decision Tree Classifier
Step4: Visualise the Tree
Step5: Understanding how the Decision Tree works
Terminology
- Each root node represents a single input variable (x) and a split point on that variable.
- The leaf nodes of the tree contain an output variable (y) which is used to make a prediction.
Growing the tree
The first choice we have is how many branches to create at each split. We choose a Binary Tree because allowing more branches per split quickly leads to a combinatorial explosion of possible trees, so BINARY TREES are a practical consideration.
The second decision is which variable to split on and where to split it. We need an objective function to make this choice
One objective function is to maximize the information gain (IG) at each split | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
df = pd.read_csv("data/creditRisk.csv")
df.head()
df.dtypes
Explanation: Learning from Data
Decision Trees are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.
Paper Exercise
Let us start with a simple exercise in classifying credit risk.
We have the following features in our dataset.
- Risk - ordinal (label)
- Income - continuous
- Credit History - ordinal
We want to find out the rules that would help us classify the three risk types - This is a paper and pen exercise first!!
End of explanation
import seaborn as sns
sns.stripplot(data = df, x = "Income", y = "Credit History", hue = "Risk", size = 10)
Explanation: Plotting the Data
End of explanation
df.Risk.unique()
Risk_mapping = {
'High': 2,
'Moderate': 1,
'Low': 0}
df['Risk'] = df['Risk'].map(Risk_mapping)
df['Credit History'].unique()
Credit_mapping = {
'Unknown': 0,
'Bad': -1,
'Good': 1}
df['Credit History'] = df['Credit History'].map(Credit_mapping)
df.head()
sns.stripplot(data = df, x = "Income", y = "Credit History", hue = "Risk", size = 10)
Explanation: Preparing Data
We have one ordinal variable (Risk) and one nominal variable (Credit History)
Let's use a dictionary to encode each of these variables
End of explanation
data = df.iloc[:,0:2]
target = df.iloc[:,2:3]
from sklearn import tree
clf = tree.DecisionTreeClassifier()
clf
clf = clf.fit(data, target)
Explanation: Decision Tree Classifier
End of explanation
import pydotplus
from IPython.display import Image
data.columns
target.columns
dot_data = tree.export_graphviz(clf, out_file='tree.dot', feature_names=data.columns,
class_names=['Low', 'Moderate', 'High'], filled=True,
rounded=True, special_characters=True)
graph = pydotplus.graph_from_dot_file('tree.dot')
Image(graph.create_png())
Explanation: Visualise the Tree
End of explanation
x_min, x_max = data.iloc[:, 0].min() - 2000, data.iloc[:, 0].max() + 2000
y_min, y_max = data.iloc[:, 1].min() - 1, data.iloc[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, (x_max - x_min)/100), np.arange(y_min, y_max, (y_max - y_min)/100))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.viridis, alpha = 0.5)
plt.scatter(x = data.iloc[:,0], y = data.iloc[:,1], c = target.values.ravel(), s = 100, cmap=plt.cm.magma)
Explanation: Understanding how the Decision Tree works
Terminology
- Each root node represents a single input variable (x) and a split point on that variable.
- The leaf nodes of the tree contain an output variable (y) which is used to make a prediction.
Growing the tree
The first choice we have is how many branches to create at each split. We choose a Binary Tree because allowing more branches per split quickly leads to a combinatorial explosion of possible trees, so BINARY TREES are a practical consideration.
The second decision is which variable to split on and where to split it. We need an objective function to make this choice
One objective function is to maximize the information gain (IG) at each split:
$$ IG(D_p,f)= I(D_p) - \frac{N_{right}}{N} I(D_{right}) - \frac{N_{left}}{N} I(D_{left}) $$
where:
- f is the feature to perform the split
- $D_p$, $D_{left}$, and $D_{right}$ are the datasets of the parent, left and right child node, respectively
- I is the impurity measure
- N is the total number of samples
- $N_{left}$ and $N_{right}$ is the number of samples in the left and right child node.
Now we need to first define an Impurity measure. The three popular impurity measures are:
- Gini Impurity
- Entropy
- Classification Error
Gini Impurity and Entropy lead to similar results when growing the tree, while Classification Error is not as useful for growing the tree (but is useful for pruning it) - See example here http://sebastianraschka.com/faq/docs/decision-tree-binary.html
Let's understand Gini Impurity a little better. Gini impurity is a measure of how often a randomly chosen element from the set would be incorrectly labeled if it was randomly labeled according to the distribution of labels in the subset. It can be computed by summing the probability $t_{i}$ of an item with label $i$ being chosen times the probability $1-t_{i}$ of a mistake in categorizing that item.
$$ I_{G}(f)=\sum_{i=1}^{J}t_{i}(1-t_{i})=\sum_{i=1}^{J}(t_{i}-t_{i}^{2})=\sum_{i=1}^{J}t_{i}-\sum_{i=1}^{J}t_{i}^{2}=1-\sum_{i=1}^{J}t_{i}^{2} $$
Let's calculate the Gini for the overall data set:
Low - 4, Moderate - 6, High - 8, with 18 total observations:
$$ I_G(t) = 1 - \left(\frac{6}{18}\right)^2 - \left(\frac{4}{18}\right)^2 - \left(\frac{8}{18}\right)^2 = 1 - \frac{116}{324} \approx 0.642 $$
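As a quick numerical check, here is a minimal sketch (not part of the original notebook) that reproduces the Gini value above and scores a candidate split with the information gain formula; the class counts 4 / 6 / 8 come from the text, and the example split is purely hypothetical:
import numpy as np
def gini(counts):
    # Gini impurity from raw class counts
    p = np.asarray(counts, dtype=float) / float(np.sum(counts))
    return 1.0 - np.sum(p ** 2)
def information_gain(parent, left, right):
    # IG(D_p, f) = I(D_p) - N_left/N * I(D_left) - N_right/N * I(D_right)
    n = float(np.sum(parent))
    return gini(parent) - (np.sum(left) / n) * gini(left) - (np.sum(right) / n) * gini(right)
print(gini([4, 6, 8]))                                    # ~0.642, matching the calculation above
print(information_gain([4, 6, 8], [4, 2, 0], [0, 4, 8]))  # hypothetical split, for illustration only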
scikit-learn uses an optimized version of the CART algorithm, which takes a greedy approach known as recursive binary splitting to divide the space. This is a numerical procedure where all the values are lined up and different split points are tried and tested using an objective cost function. The split with the best cost (lowest cost, because we minimize cost) is selected.
Another way to think of this is that a learned binary tree is actually a partitioning of the input space. You can think of each input variable as a dimension in a p-dimensional space. The decision tree splits this space up into rectangles (when p=2 input variables) or some kind of hyper-rectangles with more inputs.
We can draw these partitions for our dataset
End of explanation |
85 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The PGM
For an introduction to PGMs, see Daphne Koller's Probabilistic Graphical Models. Below is the PGM that we will explore in this notebook.
Step1: We have sets of foregrounds and backgrounds along with the variables
$\alpha$
Step2: Results | Python Code:
%matplotlib inline
from matplotlib import rc
rc("font", family="serif", size=14)
rc("text", usetex=True)
import daft
pgm = daft.PGM([7, 6], origin=[0, 0])
#background nodes
pgm.add_plate(daft.Plate([0.5, 3.0, 5, 2], label=r"foreground galaxy $i$",
shift=-0.1))
pgm.add_node(daft.Node("theta", r"$\theta$", 3.5, 5.5, fixed=True))
pgm.add_node(daft.Node("alpha", r"$\alpha$", 1.5, 5.5, fixed=True))
pgm.add_node(daft.Node("halo_mass", r"$M_i$", 3.5, 4, scale=2))
pgm.add_node(daft.Node("background_z", r"$z_i$", 2, 4, fixed=True))
pgm.add_node(daft.Node("concentration", r"$c_i$", 1.5, 3.5, fixed=True))
pgm.add_node(daft.Node("background_x", r"$x_i$", 1.0, 3.5, fixed=True))
#foreground nodes
pgm.add_plate(daft.Plate([0.5, 0.5, 5, 2], label=r"background galaxy $j$",
shift=-0.1))
pgm.add_node(daft.Node("reduced_shear", r"$g_j$", 2.0, 1.5, fixed=True))
pgm.add_node(daft.Node("foreground_z", r"$z_j$", 1.0, 1.5, fixed=True))
pgm.add_node(daft.Node("foreground_x", r"$x_j$", 1.0, 1.0, fixed=True))
pgm.add_node(daft.Node("ellipticities", r"$\epsilon_j^{obs}$", 4.5, 1.5, observed=True, scale=2))
#outer nodes
pgm.add_node(daft.Node("sigma_obs", r"$\sigma_{\epsilon_j}^{obs}$", 3.0, 2.0, fixed=True))
pgm.add_node(daft.Node("sigma_int", r"$\sigma_{\epsilon}^{int}$", 6.0, 1.5, fixed=True))
#edges
pgm.add_edge("foreground_z", "reduced_shear")
pgm.add_edge("foreground_x", "reduced_shear")
pgm.add_edge("reduced_shear", "ellipticities")
pgm.add_edge("sigma_obs", "ellipticities")
pgm.add_edge("sigma_int", "ellipticities")
pgm.add_edge("concentration", "reduced_shear")
pgm.add_edge("halo_mass", "concentration")
pgm.add_edge("background_z", "concentration")
pgm.add_edge("background_x", "reduced_shear")
pgm.add_edge("alpha", "concentration")
pgm.add_edge("theta", "halo_mass")
pgm.render()
Explanation: The PGM
For an introduction to PGMs, see Daphne Koller's Probabilistic Graphical Models. Below is the PGM that we will explore in this notebook.
End of explanation
from pandas import read_table
from pangloss import GUO_FILE
m_h = 'M_Subhalo[M_sol/h]'
m_s = 'M_Stellar[M_sol/h]'
guo_data = read_table(GUO_FILE)
nonzero_guo_data= guo_data[guo_data[m_h] > 0]
import matplotlib.pyplot as plt
stellar_mass_threshold = 5.883920e+10
plt.scatter(nonzero_guo_data[m_h], nonzero_guo_data[m_s], alpha=0.05)
plt.axhline(y=stellar_mass_threshold, color='red')
plt.xlabel('Halo Mass')
plt.ylabel('Stellar Mass')
plt.title('SMHM Scatter')
plt.xscale('log')
plt.yscale('log')
from math import log
import numpy as np
start = log(nonzero_guo_data[m_s].min(), 10)
stop = log(nonzero_guo_data[m_s].max(), 10)
m_logspace = np.logspace(start, stop, num=20, base=10)[:-1]
m_corrs = []
thin_data = nonzero_guo_data[[m_s, m_h]]
for cutoff in m_logspace:
tmp = thin_data[nonzero_guo_data[m_s] > cutoff]
m_corrs.append(tmp.corr()[m_s][1])
plt.plot(m_logspace, m_corrs, label='correlation')
plt.axvline(x=stellar_mass_threshold, color='red', label='threshold')
plt.xscale('log')
plt.legend(loc=2)
plt.xlabel('Stellar Mass')
plt.ylabel('Stellar Mass - Halo Mass Correlation')
plt.title('SMHM Correlation')
plt.rcParams['figure.figsize'] = (10, 6)
# plt.plot(hist[1][:-1], hist[0], label='correlation')
plt.hist(nonzero_guo_data[m_s], bins=m_logspace, alpha=0.4, normed=False, label='dataset')
plt.axvline(x=stellar_mass_threshold, color='red', label='threshold')
plt.xscale('log')
plt.legend(loc=2)
plt.xlabel('Stellar Mass')
plt.ylabel('Number of Samples')
plt.title('Stellar Mass Distribution')
Explanation: We have sets of foregrounds and backgrounds along with the variables
$\alpha$: parameters in the concentration function (which is a function of $z_i,M_i$)
$\theta$: prior distribution of halo masses
$z_i$: foreground galaxy redshift
$x_i$: foreground galaxy angular coordinates
$z_j$: background galaxy redshift
$x_j$: background galaxy angular coordinates
$g_j$: reduced shear
$\sigma_{\epsilon_j}^{obs}$: noise from our ellipticity measurement process
$\sigma_{\epsilon}^{int}$: intrinsic variance in ellipticities
$\epsilon_j^{obs}$: observed ellipticity of background galaxy $j$
Stellar Mass Threshold
End of explanation
from pandas import read_csv
res = read_csv('data3.csv')
tru = read_csv('true3.csv')
start = min([res[res[c] > 0][c].min() for c in res.columns[1:-1]])
stop = res.max().max()
base = 10
start = log(start, base)
end = log(stop, base)
res_logspace = np.logspace(start, end, num=10, base=base)
plt.rcParams['figure.figsize'] = (20, 12)
for i,val in enumerate(tru.columns[1:]):
plt.subplot(int('91' + str(i+1)))
x = res[val][res[val] > 0]
weights = np.exp(res['log-likelihood'][res[val] > 0])
t = tru[val].loc[0]
plt.hist(x, bins=res_logspace, alpha=0.4, normed=True, label='prior')
plt.hist(x, bins=res_logspace, weights=weights, alpha=0.4, normed=True, label='posterior')
plt.axvline(x=t, color='red', label='truth', linewidth=1)
plt.xscale('log')
plt.legend()
plt.ylabel('PDF')
plt.xlabel('Halo Mass (log-scale)')
plt.title('Halo ID ' + val)
plt.show()
res.columns
res[['112009306000027', 'log-likelihood']].sort_values('log-likelihood')
Explanation: Results
End of explanation |
86 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 17
Step1: Recalling the mechanics of file I/O, you'll see we opened up a file descriptor to alice.txt and read the whole file in a single go, storing all the text as a single string book. We then closed the file descriptor and printed out the first line (or first 71 characters), while wrapping the entire operation in a try / except block.
But as we saw before, it's also pretty convenient to split up a large text file by lines. You could use the readlines() method instead, but you can take a string and split it up into a list of strings as well.
Step2: voilà! lines is now a list of strings.
Step3: ...a list of over 3,700 lines of text, no less o_O
Newline characters
Let's go over this point in a little more detail.
A "newline" character is an actual character--like "a" or "b" or "1" or "
Step4: You can already see some problems with this approach
Step5: This is fine for 99% of cases, except when the string already happens to have a newline at the end.
Step6: "But wait!" you say again, "You read in the text file and split it on newlines a few slides ago, but when you printed out the first line, there was no extra blank line underneath! Why did that work today but not in previous lectures?"
An excellent question. It has to do with the approach we took. Previously, we used the readline() method, which hands you back one line of text at a time with the trailing newline intact
Step7: On the other hand, when you call split() on a string, it not only identifies all the instances of the character you specify as the endpoints of each successive substring, but it also removes those characters from the resulting strings.
Step8: Is this getting confusing? If so, just remember the following
Step9: All the pesky spaces, tabs, and newlines have been stripped off the string. This is extremely useful and pretty much a must when you're preprocessing text.
Capitalization
This is one of those insidious issues that seems like such a tiny detail but can radically alter your analysis if left unnoticed
Step10: You'll notice the word "and" appears twice
Step11: Now everything is, in some sense, "equivalent."
Part 2
Step12: It otherwise behaves exactly like a regular Python dictionary, except we won't get a KeyError if we reference a key that doesn't exist; instead, a new key will be automatically created and a default value set. For the int type, this default value is 0.
Next, we'll iterate through the lines of the book. There are a couple things we need to do here
Step13: Let's take a look at what we have! First, we'll count how many unique words there are.
Step14: Next, we'll count the total number of words in the book.
Step15: Now we'll find the word that occurred most often
Step16: Well, there's a shocker. /sarcasm
Python has another incredibly useful utility class for whenever we're counting things
Step17: Pretty boring, right? Most of these words are referred to as stop words, or words that are used pretty much in every context and therefore don't tell you anything particularly interesting. They're usually filtered out, but because of some interesting corner cases, there's no universal "stop word list"; it's generally up to you to decide what words to remove (though pretty much all of the above top 20, with the exception of "alice", can be removed).
So, in addition to stripping out and splitting on whitespace, and lowercasing all the words, we also check if the word is part of some pre-built stop-word list. If it is, just throw it out; if not, then we'll count it.
Part 3
Step18: By using the curly braces {} inside the string, I've effectively created a placeholder for another string to be inserted. That other string is the argument(s) to the format() function.
But there's a lot more to the curly braces than just {}.
The simplest is just using the curly braces and nothing else. If you specify multiple pairs of curly braces, you'll need to specify an equal number of arguments to format(), and they'll be inserted into the string in the order you gave them to format().
Step19: Alternatively, you can specify the indices of the format() arguments inside the curly braces
Step20: Notice the 2nd and 3rd arguments were flipped in their final ordering!
You can even provide arbitrary named arguments inside the curly braces, which format() will then expect.
Step21: Leading zeros and decimal precision
You can also use this same syntax to specify leading zeros and decimal precision, but the notation gets a little more complicated.
You'll need to first enter a colon "
Step22: Decimal precision is very similar, but instead of a 0, you'll specify a decimal point "." followed by the level of precision you want (a number), followed by the letter "f" to signify that it's a floating-point
Step23: Finally, you can also include the comma in large numbers so you can actually read them more easily
Step24: Additional string functions
There is an entire ecosystem of Python string functions that I highly encourage you to investigate, but I'll go over a few of the most common here.
upper() and lower()
Step25: What if you need to find the actual location in a string of that substring? As in, where is "Wonderland" first mentioned in the book? find() to the rescue!
Step26: ...well, that's embarrassing; that's probably the "Wonderland" that's in the book title. How about the second occurrence, then? We can use the index of the first one to tell find() that we want to start looking from there.
Step27: Now, I've decided I don't want this book to be Alice in Wonderland, but rather Alice in Las Vegas! How can I make this happen? replace()!
Step28: Two more very useful string functions are startswith() and endswith(). These are great if you're testing for leading or trailing characters or words.
Step29: Finally, the join() method. This is a little tricky to use, but insanely useful. It's cropped up on a couple previous assignments.
You'll want to use this method whenever you have a list of strings that you want to "glue" together into a single string. Perhaps you have a list of words and want to put them back together into a sentence!
Step30: We can do this by specifying first the character we want to put in between all the words we're joining--in this case, just a space character--then calling join() on that character, and passing in the list of words we want to glue together as the argument to the function. | Python Code:
book = None
try: # Good coding practices!
f = open("Lecture17/alice.txt", "r")
book = f.read()
except FileNotFoundError:
print("Could not find alice.txt.")
else:
f.close()
print(book[:71]) # Print the first 71 characters.
Explanation: Lecture 17: Natural Language Processing I
CSCI 1360E: Foundations for Informatics and Analytics
Overview and Objectives
We've covered about all the core basics of Python and are now solidly into how we wield these tools in the realm of data science. One extremely common, almost unavoidable application is text processing. It's a messy, complex, but very rewarding subarea that has reams of literature devoted to it, whereas we have this single lecture. By the end of this lecture, you should be able to:
Differentiate structured from unstructured data
Understand the different string parsing tools available through Python
Grasp some of the basic preprocessing steps required when text is involved
Define the "bag of words" text representation
Part 1: Text Preprocessing
"Preprocessing" is something of a recursively ambiguous: it's the processing before the processing (what?).
More colloquially, it's the processing that you do in order to put your data in a useful format for the actual analysis you intend to perform. As we saw in the previous lecture, this is what data scientists spend the majority of their time doing, so it's important to know and understand the basic steps.
The vast majority of interesting data is in unstructured format. You can think of this kind of like data in its natural habitat. Like wild animals, though, data in unstructured form requires significantly more effort to study effectively.
Our goal in preprocessing is, in a sense, to turn unstructured data into structured data, or data that has a logical flow and format.
To start, let's go back to the Alice in Wonderland example from the previous lecture (you can download the text version of the book here).
End of explanation
print(type(book))
lines = book.split("\n") # Split the string. Where should the splits happen? On newline characters, of course.
print(type(lines))
Explanation: Recalling the mechanics of file I/O, you'll see we opened up a file descriptor to alice.txt and read the whole file in a single go, storing all the text as a single string book. We then closed the file descriptor and printed out the first line (or first 71 characters), while wrapping the entire operation in a try / except block.
But as we saw before, it's also pretty convenient to split up a large text file by lines. You could use the readlines() method instead, but you can take a string and split it up into a list of strings as well.
End of explanation
print(len(lines))
Explanation: voilà! lines is now a list of strings.
End of explanation
sentences = book.split(".")
print(sentences[0])
Explanation: ...a list of over 3,700 lines of text, no less o_O
Newline characters
Let's go over this point in a little more detail.
A "newline" character is an actual character--like "a" or "b" or "1" or ":"--that represents pressing the "enter" key. However, like tabs and spaces, this character falls under the category of a "whitespace" character, meaning that in print you can't actually see it; the computer hides it.
But programming languages like Python (and Java, and C, and Matlab, and R, and and and...) need a way to explicitly represent these whitespace characters, specifically when processing text like we're doing right now.
So, even though you can't see tabs or newlines in the actual text--go ahead and open up Alice in Wonderland and tell me if you can see the actual characters representing newlines and tabs--you can see these characters in Python.
Tabs are represented by a backslash followed by the letter "t", the whole thing in quotes: "\t"
Newlines are represented by a backslash followed by the letter "n", the whole thing in quotes: "\n"
"But wait!" you say, "Slash-t and slash-n are two characters each, not one! What kind of shenanigans are you trying to pull?"
Yes, it's weird. If you build a career in text processing, you'll find the backslash has a long and storied history as a kind of "meta"-character, in that it tells whatever programming language that the character after it is a super-special snowflake. So in some sense, the backslash-t and backslash-n constructs are actually one character, because the backslash is the text equivalent of a formal introduction.
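(As a quick aside that isn't in the original notes, Python's built-in repr() and len() make these "invisible" characters easy to see; the example string here is made up purely for illustration.)
made_up_line = "down the rabbit-hole\n"
print(repr(made_up_line))  # the trailing \n shows up explicitly in the repr
print(len("\n"))           # prints 1 -- backslash-n is a single character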
Back to text parsing
When we called split() on the string holding the entire Alice in Wonderland book, we passed in the argument "\n", which is the newline character. In doing so, we instructed Python to
Split up the original string (hence, the name of the function) into a list of strings
The end of one string and the beginning of the next would be delimited by the occurrence of a newline character "\n" in the original string. In a sense, we're treating the book as a "newline-delimited" format
Return a list of strings, where each string is one line of the book
An important distinction for text processing neophytes: this splits the book up on a line by line basis, NOT a sentence by sentence basis. There are a lot of implicit semantic assumptions we hold from a lifetime of taking our native language for granted, but which Python has absolutely no understanding of beyond what we tell it to do.
You certainly could, in theory, split the book on punctuation, rather than newlines. This is a bit trickier to do without regular expressions (see Part 3), but to give an example of splitting by period:
End of explanation
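For what it's worth, here is a minimal sketch (not from the original lecture) of the regular-expression approach hinted at above; re.split() is part of Python's standard library, and the pattern is just one reasonable guess at "sentence-ending punctuation":
import re
rough_sentences = re.split(r"[.!?]+", book)  # split on runs of periods, exclamation points, question marks
print(len(rough_sentences))
print(rough_sentences[0])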
print("Even though there's no newline in the string I wrote, Python's print function still adds one.")
print() # Blank line!
print("There's a blank line above.")
Explanation: You can already see some problems with this approach: not all sentences end with periods. Sure, you could split things again on question marks and exclamation points, but this still wouldn't tease out the case of the title--which has NO punctuation to speak of!--and doesn't account for important literary devices like semicolons and parentheses. These are valid punctuation characters in English! But how would you handle them?
Cleaning up trailing whitespace
You may have noticed that, whenever you invoke the print() statement, you automatically get a new line even though I doubt you've ever added a "\n" to the end of the string you're printing.
End of explanation
print("Here's a string with an explicit newline --> \n")
print()
print("Now there are TWO blank lines above!")
Explanation: This is fine for 99% of cases, except when the string already happens to have a newline at the end.
End of explanation
readlines = None
try:
with open("Lecture17/alice.txt", "r") as f:
readlines = f.readlines()
except:
print("Something went wrong.")
print(readlines[0])
print(readlines[2])
print("There are blank lines because of the trailing newline characters.")
Explanation: "But wait!" you say again, "You read in the text file and split it on newlines a few slides ago, but when you printed out the first line, there was no extra blank line underneath! Why did that work today but not in previous lectures?"
An excellent question. It has to do with the approach we took. Previously, we used the readline() method, which hands you back one line of text at a time with the trailing newline intact:
End of explanation
print(readlines[0]) # This used readlines(), so it STILL HAS trailing newlines.
print(lines[0]) # This used split(), so the newlines were REMOVED.
print("No trailing newline when using split()!")
Explanation: On the other hand, when you call split() on a string, it not only identifies all the instances of the character you specify as the endpoints of each successive substring, but it also removes those characters from the resulting strings.
End of explanation
trailing_whitespace = " \t this is the important part \n \n \t "
no_whitespace = trailing_whitespace.strip()
print("Border --> |{}| <-- Border".format(no_whitespace))
Explanation: Is this getting confusing? If so, just remember the following:
In general, make liberal use of the strip() function for strings you read in from files.
This function strips (hence, the name) any whitespace off the front AND end of a string. So in the following example:
End of explanation
print(lines[410])
print(lines[411])
Explanation: All the pesky spaces, tabs, and newlines have been stripped off the string. This is extremely useful and pretty much a must when you're preprocessing text.
Capitalization
This is one of those insidious issues that seems like such a tiny detail but can radically alter your analysis if left unnoticed: developing a strategy for how you're going to handle uppercase versus lowercase.
Take the following example from Alice in Wonderland, lines 410 and 411:
End of explanation
print(lines[0])
title = lines[0].lower()
print(title)
Explanation: You'll notice the word "and" appears twice: once at the beginning of the sentence in line 410, and again in the middle of the sentence in line 411. It's the same word, but given their difference in capitalization, it's entirely likely that your analysis framework would treat those as two separate words. After all, "and" != "And". Go ahead and try!
A common strategy is to simply lowercase everything. Yes, you likely lose a little bit of information, as it becomes more difficult to identify proper nouns, but a significant source of confusion--is it a proper noun, or just the start of a sentence? has the meaning of the word changed if it's in lowercase versus ALL CAPS? what if you're comparing multiple styles of writing and the authors use different literary forms of capitalizatoin?--is removed entirely.
You can do this with the Python string's lower() method:
End of explanation
from collections import defaultdict
word_counts = defaultdict(int) # All values are integers.
Explanation: Now everything is, in some sense, "equivalent."
Part 2: The "Bag of Words"
The "bag of words" model is one of the most popular ways of representing a large collection of text, and one of the easiest ways to structure text.
The "bag of words" on display on the 8th floor of the Computer Science building at Carnegie Mellon University:
When using this model, the implicit assumptions behind it are saying
Relative word order and grammar DON'T MATTER to the overall meaning of the text.
Relative word frequencies ABSOLUTELY MATTER to the overall meaning of the text.
Formally, the bag of words is a "multiset", but you can think of it like a Python dictionary. In fact, at its simplest, that's all the bag of words is: a count of how many times each word occurs in your text. But like dictionaries, ordering no longer matters.
To illustrate, let's go ahead and design a word counter for Alice in Wonderland! First, we'll initialize our dictionary of counts. To make our lives easier, we'll use a defaultdict, a special kind of dictionary you can use when you want automatic default values enforced for keys that don't exist.
End of explanation
for line in lines: # Iterate through the lines of the book
words = line.split() # If you don't give split() any arguments, the *default* split character is ANY whitespace.
for word in words:
w = word.lower() # Convert to lowercase.
word_counts[w] += 1 # Add 1 to the count for that word in our word dictionary.
Explanation: It otherwise behaves exactly like a regular Python dictionary, except we won't get a KeyError if we reference a key that doesn't exist; instead, a new key will be automatically created and a default value set. For the int type, this default value is 0.
Next, we'll iterate through the lines of the book. There are a couple things we need to do here:
For each line, split the line into single words. We'll go back yet again to our good friend split().
Now we'll have a list of words, so we'll need to iterate over these words, lowercasing them all and then adding them up.
So the code should look something like this:
End of explanation
print("Unique words: {}".format(len(word_counts.keys())))
Explanation: Let's take a look at what we have! First, we'll count how many unique words there are.
End of explanation
print("Total words: {}".format(sum(word_counts.values())))
Explanation: Next, we'll count the total number of words in the book.
End of explanation
maxcount = -1
maxitem = None
for k, v in word_counts.items():
if v > maxcount:
maxcount = v
maxitem = k
print("'{}' occurred most often ({} times).".format(maxitem, maxcount))
Explanation: Now we'll find the word that occurred most often:
End of explanation
from collections import Counter
counts = Counter(word_counts)
print(counts.most_common(20)) # Find the 20 words with the highest counts!
Explanation: Well, there's a shocker. /sarcasm
Python has another incredibly useful utility class for whenever we're counting things: a Counter! This will let us easily find the n words with the highest counts.
End of explanation
print("Here's the notation --> {}".format("another string"))
Explanation: Pretty boring, right? Most of these words are referred to as stop words, or words that are used pretty much in every context and therefore don't tell you anything particularly interesting. They're usually filtered out, but because of some interesting corner cases, there's no universal "stop word list"; it's generally up to you to decide what words to remove (though pretty much all of the above top 20, with the exception of "alice", can be removed).
So, in addition to stripping out and splitting on whitespace, and lowercasing all the words, we also check if the word is part of some pre-built stop-word list. If it is, just throw it out; if not, then we'll count it.
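Here is a minimal sketch of that filtering step (an illustration only, not the official solution -- the tiny stop-word list is made up, and in practice you'd supply a much longer one of your own):
stop_words = {"the", "and", "to", "a", "of", "she", "said", "in", "it", "was"}  # hypothetical list
filtered_counts = defaultdict(int)
for line in lines:
    for word in line.split():
        w = word.lower().strip('.,;:!?()"\'')  # also trim leading/trailing punctuation
        if w and w not in stop_words:
            filtered_counts[w] += 1
print(Counter(filtered_counts).most_common(5))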
Part 3: String Formatting
We've seen previously how to convert strings and numbers (integers and floating-point values) back and forth; just using the str(), int(), and float() functions. Pretty easy.
Here's a harder question: how do you represent a floating-point number as a string, but to only 2 decimal places?
Another hard question: how do you represent an integer as string, but with 3 leading zeros?
You've probably noticed the bizarre notation I've used when printing out strings.
End of explanation
print("{}, {}, and {}".format("a", "b", "c"))
Explanation: By using the curly braces {} inside the string, I've effectively created a placeholder for another string to be inserted. That other string is the argument(s) to the format() function.
But there's a lot more to the curly braces than just {}.
The simplest is just using the curly braces and nothing else. If you specify multiple pairs of curly braces, you'll need to specify an equal number of arguments to format(), and they'll be inserted into the string in the order you gave them to format().
End of explanation
print("{0}, {2}, and {1}".format("a", "b", "c"))
Explanation: Alternatively, you can specify the indices of the format() arguments inside the curly braces:
End of explanation
print("{first_arg}, {second_arg}, and {third_arg}".format(second_arg = "b", first_arg = "a", third_arg = "c"))
Explanation: Notice the 2nd and 3rd arguments were flipped in their final ordering!
You can even provide arbitrary named arguments inside the curly braces, which format() will then expect.
End of explanation
print("One leading zero: {:02}".format(1))
print("Two leading zeros: {:03}".format(1))
print("One leading zero: {:04}".format(100))
print("Two leading zeros: {:05}".format(100))
Explanation: Leading zeros and decimal precision
You can also use this same syntax to specify leading zeros and decimal precision, but the notation gets a little more complicated.
You'll need to first enter a colon ":", followed by the number 0, followed by the number of places that should be counted:
End of explanation
import numpy as np
print("Unformatted: {}".format(np.pi))
print("Two decimal places: {:.2f}".format(np.pi))
Explanation: Decimal precision is very similar, but instead of a 0, you'll specify a decimal point "." followed by the level of precision you want (a number), followed by the letter "f" to signify that it's a floating-point:
End of explanation
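(Not in the original notes, but handy to know: the width / leading-zero part and the precision part can be combined in a single placeholder.)
print("Padded pi: {:08.2f}".format(np.pi))  # width 8, zero-padded, 2 decimal places -> 00003.14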
big_number = 98483745834
print("Big number: {}".format(big_number))
print("Big number with commas: {:,}".format(big_number))
Explanation: Finally, you can also include the comma in large numbers so you can actually read them more easily:
End of explanation
print("'Wonderland' occurs {} times.".format(book.count("Wonderland")))
Explanation: Additional string functions
There is an entire ecosystem of Python string functions that I highly encourage you to investigate, but I'll go over a few of the most common here.
upper() and lower(): we've seen the latter already, but the former can be just as useful.
count() will give you the number of times a substring occurs in the actual string. If you're interested in one word in particular, this can be a very efficient way of finding it:
End of explanation
print("'Wonderland' is first found {} characters in.".format(book.find("Wonderland")))
Explanation: What if you need to find the actual location in a string of that substring? As in, where is "Wonderland" first mentioned in the book? find() to the rescue!
End of explanation
print("'Wonderland' is first found {} characters in.".format(book.find("Wonderland", 43 + 1)))
Explanation: ...well, that's embarrassing; that's probably the "Wonderland" that's in the book title. How about the second occurrence, then? We can use the index of the first one to tell find() that we want to start looking from there.
End of explanation
my_book = book.replace("Wonderland", "Las Vegas") # Replace the 1st thing with the 2nd thing
print(my_book[:71])
Explanation: Now, I've decided I don't want this book to be Alice in Wonderland, but rather Alice in Las Vegas! How can I make this happen? replace()!
End of explanation
print(lines[8])
print(lines[8].startswith("Title"))
print(lines[8].endswith("Wonderland"))
Explanation: Two more very useful string functions are startswith() and endswith(). These are great if you're testing for leading or trailing characters or words.
End of explanation
words = lines[8].split(" ")
print(words)
Explanation: Finally, the join() method. This is a little tricky to use, but insanely useful. It's cropped up on a couple previous assignments.
You'll want to use this method whenever you have a list of strings that you want to "glue" together into a single string. Perhaps you have a list of words and want to put them back together into a sentence!
End of explanation
between_char = " "
sentence = between_char.join(words)
print(sentence)
Explanation: We can do this by specifying first the character we want to put in between all the words we're joining--in this case, just a space character--then calling join() on that character, and passing in the list of words we want to glue together as the argument to the function.
End of explanation |
87 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SETUP
Step1: Autosipper
Step2: Manifold
Step3: Micromanager
Step4: Preset
Step5: ACQUISITION
Step6: MM Get info
Step7: Video
Step8: SNAP CV2
Step9: EXIT | Python Code:
import time
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
Explanation: SETUP
End of explanation
# config directory must have "__init__.py" file
# from the 'config' directory, import the following classes:
from config import Motor, ASI_Controller, Autosipper
from config import utils as ut
autosipper = Autosipper(Motor('config/motor.yaml'), ASI_Controller('config/asi_controller.yaml'))
autosipper.coord_frames
from config import gui
gui.stage_control(autosipper.XY, autosipper.Z)
# add/determine deck info
autosipper.coord_frames.deck.position_table = ut.read_delim_pd('config/position_tables/deck')
# check deck alignment
# CLEAR DECK OF OBSTRUCTIONS!!
autosipper.go_to('deck', ['name'],'align')
# add plate
from config import utils as ut
platemap = ut.generate_position_table((8,8),(9,9),93.5)
platemap  # inspect the generated position table
ut.lookup(platemap)
Explanation: Autosipper
End of explanation
from config import Manifold
manifold = Manifold('192.168.1.3', 'config/valvemaps/valvemap.csv', 512)
manifold.valvemap[manifold.valvemap.name>0]
def valve_states():
tmp = []
for i in [2,0,14,8]:
status = 'x'
if manifold.read_valve(i):
status = 'o'
tmp.append([status, manifold.valvemap.name.iloc[i]])
return pd.DataFrame(tmp)
tmp = []
for i in range(16):
status = 'x'
if manifold.read_valve(i):
status = 'o'
name = manifold.valvemap.name.iloc[i]
tmp.append([status, name])
pd.DataFrame(tmp).replace(np.nan, '')
name = 'inlet_in'
v = manifold.valvemap['valve'][manifold.valvemap.name==name]
v=14
manifold.depressurize(v)
manifold.pressurize(v)
manifold.exit()
Explanation: Manifold
End of explanation
# !!!! Also must have MM folder on system PATH
# mm_version = 'C:\Micro-Manager-1.4'
# cfg = 'C:\Micro-Manager-1.4\SetupNumber2_05102016.cfg'
mm_version = 'C:\Program Files\Micro-Manager-2.0beta'
cfg = 'C:\Program Files\Micro-Manager-2.0beta\Setup2_20170413.cfg'
import sys
sys.path.insert(0, mm_version) # make it so python can find MMCorePy
import MMCorePy
from PIL import Image
core = MMCorePy.CMMCore()
core.loadSystemConfiguration(cfg)
core.setProperty("Spectra", "White_Enable", "1")
core.waitForDevice("Spectra")
core.setProperty("Cam Andor_Zyla4.2", "Sensitivity/DynamicRange", "16-bit (low noise & high well capacity)") # NEED TO SET CAMERA TO 16 BIT (ceiling 12 BIT = 4096)
core.setProperty("Spectra", "White_Enable", "0")
Explanation: Micromanager
End of explanation
log = []
autosipper.Z.move(93.5)
manifold.depressurize(2)
manifold.depressurize(0)
log.append([time.ctime(time.time()), 'open inlet_in, inlet_out'])
valve_states()
text = 'fluorescence observed'
log.append([time.ctime(time.time()), text])
text = 'CLOSE inlet_out'
manifold.pressurize(0)
log.append([time.ctime(time.time()), text])
text = 'OPEN chip_in, chip_out'
manifold.depressurize(14)
manifold.depressurize(8)
log.append([time.ctime(time.time()), text])
valve_states()
text = 'fill'
log.append([time.ctime(time.time()), text])
manifold.pressurize(8)
#closed all
autosipper.Z.move(93.5)
manifold.depressurize(2)
manifold.depressurize(0)
log.append([time.ctime(time.time()), 'open inlet_in, inlet_out'])
valve_states()
text = 'fluorescence removed'
log.append([time.ctime(time.time()), text])
text = 'CLOSE inlet_out'
manifold.pressurize(0)
log.append([time.ctime(time.time()), text])
text = 'OPEN chip_in, chip_out'
manifold.depressurize(14)
manifold.depressurize(8)
log.append([time.ctime(time.time()), text])
valve_states()
text = 'flush'
log.append([time.ctime(time.time()), text])
manifold.pressurize(8)
for i in [2,0,14,8]:
manifold.pressurize(i)
Explanation: Preset: 1_PBP
ConfigGroup,Channel,1_PBP,TIFilterBlock1,Label,1-PBP
Preset: 2_BF
ConfigGroup,Channel,2_BF,TIFilterBlock1,Label,2-BF
Preset: 3_DAPI
ConfigGroup,Channel,3_DAPI,TIFilterBlock1,Label,3-DAPI
Preset: 4_eGFP
ConfigGroup,Channel,4_eGFP,TIFilterBlock1,Label,4-GFP
Preset: 5_Cy5
ConfigGroup,Channel,5_Cy5,TIFilterBlock1,Label,5-Cy5
Preset: 6_AttoPhos
ConfigGroup,Channel,6_AttoPhos,TIFilterBlock1,Label,6-AttoPhos
TEST
4.5 psi, 25 psi valves
End of explanation
log
core.setConfig('Channel','2_BF')
core.setProperty(core.getCameraDevice(), "Exposure", 20)
core.snapImage()
img = core.getImage()
plt.imshow(img,cmap='gray')
image = Image.fromarray(img)
# image.save('TESTIMAGE.tif')
position_list = ut.load_mm_positionlist("C:/Users/fordycelab/Desktop/D1_cjm.pos")
position_list
def acquire():
for i in xrange(len(position_list)):
si = str(i)
x,y = position_list[['x','y']].iloc[i]
core.setXYPosition(x,y)
core.waitForDevice(core.getXYStageDevice())
logadd(log, 'moved '+si)
core.snapImage()
# core.waitForDevice(core.getCameraDevice())
logadd(log, 'snapped '+si)
img = core.getImage()
logadd(log, 'got image '+si)
image = Image.fromarray(img)
image.save('images/images_{}.tif'.format(i))
logadd(log, 'saved image '+si)
x,y = position_list[['x','y']].iloc[0]
core.setXYPosition(x,y)
core.waitForDevice(core.getXYStageDevice())
logadd(log, 'moved '+ str(0))
def logadd(log,st):
log.append([time.ctime(time.time()), st])
print log[-1]
# Auto
core.setAutoShutter(True) # default
core.snapImage()
# Manual
core.setAutoShutter(False) # disable auto shutter
core.setProperty("Shutter", "State", "1")
core.waitForDevice("Shutter")
core.snapImage()
core.setProperty("Shutter", "State", "0")
Explanation: ACQUISITION
End of explanation
core.getFocusDevice()
core.getCameraDevice()
core.XYStageDevice()
core.getDevicePropertyNames(core.getCameraDevice())
Explanation: MM Get info
End of explanation
import cv2
from IPython import display
import numpy as np
from ipywidgets import widgets
import time
# core.initializeCircularBuffer()
# core.setCircularBufferMemoryFootprint(4096) # MiB
# video with button (CV2)
live = widgets.Button(description='Live')
close = widgets.Button(description='Close')
display.display(widgets.HBox([live, close]))
def on_live_clicked(b):
display.clear_output(wait=True)
print 'LIVE'
core.startContinuousSequenceAcquisition(1000) # time overridden by exposure
time.sleep(.2)
cv2.namedWindow('Video', cv2.WINDOW_NORMAL)
cv2.setWindowProperty('Video', cv2.WND_PROP_ASPECT_RATIO, cv2.WINDOW_KEEPRATIO)
cv2.resizeWindow('Video', 500,500)
img = np.zeros((500,500))
print 'To stop, click window + press ESC'
while(1):
time.sleep(.015)
if core.getRemainingImageCount() > 0:
img = core.getLastImage()
cv2.imshow('Video',img)
k = cv2.waitKey(30)
if k==27: # ESC key; may need 255 mask?
break
print 'STOPPED'
core.stopSequenceAcquisition()
def on_close_clicked(b):
if core.isSequenceRunning():
core.stopSequenceAcquisition()
cv2.destroyWindow('Video')
live.on_click(on_live_clicked)
close.on_click(on_close_clicked)
# video with button (CV2)
# serial snap image
live = widgets.Button(description='Live')
close = widgets.Button(description='Close')
display.display(widgets.HBox([live, close]))
def on_live_clicked(b):
display.clear_output(wait=True)
print 'LIVE'
cv2.namedWindow('Video', cv2.WINDOW_NORMAL)
cv2.setWindowProperty('Video', cv2.WND_PROP_ASPECT_RATIO, cv2.WINDOW_KEEPRATIO)
cv2.resizeWindow('Video', 500,500)
img = np.zeros((500,500))
print 'To stop, click window + press ESC'
while(1):
core.snapImage()
time.sleep(.05)
img = core.getImage()
cv2.imshow('Video',img)
k = cv2.waitKey(30)
if k==27: # ESC key; may need 255 mask?
break
print 'STOPPED'
def on_close_clicked(b):
if core.isSequenceRunning():
core.stopSequenceAcquisition()
cv2.destroyWindow('Video')
live.on_click(on_live_clicked)
close.on_click(on_close_clicked)
cv2.destroyAllWindows()
Explanation: Video
End of explanation
# snap (CV2)
snap = widgets.Button(description='Snap')
close2 = widgets.Button(description='Close')
display.display(widgets.HBox([snap, close2]))
def on_snap_clicked(b):
cv2.destroyWindow('Snap')
cv2.namedWindow('Snap',cv2.WINDOW_NORMAL)
cv2.resizeWindow('Snap', 500,500)
cv2.setWindowProperty('Snap', cv2.WND_PROP_ASPECT_RATIO, cv2.WINDOW_KEEPRATIO)
core.snapImage()
time.sleep(.1)
img = core.getImage()
cv2.imshow('Snap',img)
k = cv2.waitKey(30)
def on_close2_clicked(b):
cv2.destroyWindow('Snap')
snap.on_click(on_snap_clicked)
close2.on_click(on_close2_clicked)
Explanation: SNAP CV2
End of explanation
autosipper.exit()
manifold.exit()
core.unloadAllDevices()
core.reset()
print 'closed'
Explanation: EXIT
End of explanation |
88 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'access-1-0', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CSIRO-BOM
Source ID: ACCESS-1-0
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:55
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
Which simulations had tuning applied, e.g. all, not historical, only pi-control?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
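For boolean properties such as this one, the template comments indicate the value is passed unquoted; an illustrative, purely hypothetical entry might be:
# Hypothetical example only - the real answer depends on the model being documented
DOC.set_value(False)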
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step, in seconds, in the sea ice model thermodynamic component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step, in seconds, in the sea ice model dynamic component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Multiple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but an assumed distribution from which fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology: what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the methodology for heat diffusion through snow in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
89 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Multi-Armed-Bandits" data-toc-modified-id="Multi-Armed-Bandits-1"><span class="toc-item-num">1 </span>Multi-Armed Bandits</a></span><ul class="toc-item"><li><span><a href="#Differences-Between-A/B-Testing-and-Bandit-Testing" data-toc-modified-id="Differences-Between-A/B-Testing-and-Bandit-Testing-1.1"><span class="toc-item-num">1.1 </span>Differences Between A/B Testing and Bandit Testing</a></span></li><li><span><a href="#Bandit-Algorithms" data-toc-modified-id="Bandit-Algorithms-1.2"><span class="toc-item-num">1.2 </span>Bandit Algorithms</a></span><ul class="toc-item"><li><span><a href="#Algorithm-1---Epsilon-Greedy" data-toc-modified-id="Algorithm-1---Epsilon-Greedy-1.2.1"><span class="toc-item-num">1.2.1 </span>Algorithm 1 - Epsilon Greedy</a></span></li><li><span><a href="#Algorithm-2---Boltzmann-Exploration-(Softmax)" data-toc-modified-id="Algorithm-2---Boltzmann-Exploration-(Softmax)-1.2.2"><span class="toc-item-num">1.2.2 </span>Algorithm 2 - Boltzmann Exploration (Softmax)</a></span></li><li><span><a href="#Algorithm-3---Upper-Confidence-Bounds-(UCB)" data-toc-modified-id="Algorithm-3---Upper-Confidence-Bounds-(UCB)-1.2.3"><span class="toc-item-num">1.2.3 </span>Algorithm 3 - Upper Confidence Bounds (UCB)</a></span></li></ul></li><li><span><a href="#Experimenting-With-Bandit-Algorithms" data-toc-modified-id="Experimenting-With-Bandit-Algorithms-1.3"><span class="toc-item-num">1.3 </span>Experimenting With Bandit Algorithms</a></span></li></ul></li><li><span><a href="#Bayesian-Bandits" data-toc-modified-id="Bayesian-Bandits-2"><span class="toc-item-num">2 </span>Bayesian Bandits</a></span><ul class="toc-item"><li><span><a href="#Beta-Distribution" data-toc-modified-id="Beta-Distribution-2.1"><span class="toc-item-num">2.1 </span>Beta Distribution</a></span></li><li><span><a href="#Thompson-Sampling" data-toc-modified-id="Thompson-Sampling-2.2"><span class="toc-item-num">2.2 </span>Thompson Sampling</a></span></li><li><span><a href="#Notes-On-Bandit-Testings" data-toc-modified-id="Notes-On-Bandit-Testings-2.3"><span class="toc-item-num">2.3 </span>Notes On Bandit Testings</a></span><ul class="toc-item"><li><span><a href="#Short-term-testing" data-toc-modified-id="Short-term-testing-2.3.1"><span class="toc-item-num">2.3.1 </span>Short-term testing</a></span></li><li><span><a href="#Long-term-testing" data-toc-modified-id="Long-term-testing-2.3.2"><span class="toc-item-num">2.3.2 </span>Long-term testing</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2.4"><span class="toc-item-num">2.4 </span>Reference</a></span></li></ul></li></ul></div>
Step2: Multi-Armed Bandits
Imagine this scenario
Step4: Algorithm 1 - Epsilon Greedy
At each round $t = 1, 2, ...$ the Epsilon Greedy algorithm will
Step5: The decrease_const parameter in the function above may look unfamiliar.
For the Epsilon Greedy algorithm, setting the $\epsilon$ can be a bit tricky. If it's too small, exploration will be slow at the beginning, and we will be slow to react to changes. If we happen to sample, say, the second-best arm the first few times, it may take a long time to discover that another arm is actually better. If $\epsilon$ is too big, we'll waste many trials pulling random arms without gaining much.
To accommodate for this situation, we will set the $\epsilon$ value at a higher value in the beginning and anneal (gradually lower) it over time. Intuitively, this simply means that after exploring around for a while, we become more certain about each arms' empirical means. After that, it's better to exploit.
In the function call above, the $\epsilon$ at turn $t$ will become
Step7: Algorithm 2 - Boltzmann Exploration (Softmax)
The Softmax algorithm picks each arm with a probability that is proportional to its average reward.
\begin{align}
p_i(t+1)= \frac{ e^{u_i(t) / \tau} }{ \sum_{j=1}^K e^{u_j(t) / \tau} }
\end{align}
Where $\tau$ is a temperature parameter, controlling the randomness of the choice. When $\tau$ = 0, the algorithm acts like pure greedy. As $\tau$ grows to infinity, the algorithm will pick arms uniformly at random.
Step9: Algorithm 3 - Upper Confidence Bounds (UCB)
In the world of statistics, whenever we estimate some unknown parameter (such as the mean of a distribution) using random samples, there is a way to quantify the uncertainty inherent in our estimate. For example, the true mean of a fair six-sided die is 3.5. But if we only roll it once and get a 2, our best estimate of the mean is just 2. Obviously that estimate is not very good, and we can quantify the confidence we have for our estimate. There are confidence bounds which can be written, for example, as
Step12: Experimenting With Bandit Algorithms
In this section, we'll use our simulated data to experiment with our algorithms. To do this we'll also need a metric to calculate how well we are doing. Recall the absolute best we can do is to always pick the webpage (arm) with the largest click through rate (ctr). Denote this best arm's probability of $w_{opt}$. Our score should be relative to how well we would have done had we chosen the best arm from the beginning. This motivates the total regret of a strategy, defined as
Step13: Section Conclusion
Step19: There are two important things to note about the Beta distribution
Step21: In our simulation, we gave the Bayesian bandit two webpages (arms) - one had a CTR of 0.25, the other had a CTR of 0.35. To start with, both webpages were displayed to the user with roughly equal probability. Over time, evidence accumulated that arm 2 was considerably better than arm 1. At this point the algorithm switched to displaying primarily webpage 2 (the better arm), and the overall CTR of the experiment converged to 0.35 (the optimal CTR).
We can also visualize our Beta distribution for each arms in different turns. | Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style(css_style='custom2.css', plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import beta
from collections import namedtuple
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,matplotlib,scipy
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Multi-Armed-Bandits" data-toc-modified-id="Multi-Armed-Bandits-1"><span class="toc-item-num">1 </span>Multi-Armed Bandits</a></span><ul class="toc-item"><li><span><a href="#Differences-Between-A/B-Testing-and-Bandit-Testing" data-toc-modified-id="Differences-Between-A/B-Testing-and-Bandit-Testing-1.1"><span class="toc-item-num">1.1 </span>Differences Between A/B Testing and Bandit Testing</a></span></li><li><span><a href="#Bandit-Algorithms" data-toc-modified-id="Bandit-Algorithms-1.2"><span class="toc-item-num">1.2 </span>Bandit Algorithms</a></span><ul class="toc-item"><li><span><a href="#Algorithm-1---Epsilon-Greedy" data-toc-modified-id="Algorithm-1---Epsilon-Greedy-1.2.1"><span class="toc-item-num">1.2.1 </span>Algorithm 1 - Epsilon Greedy</a></span></li><li><span><a href="#Algorithm-2---Boltzmann-Exploration-(Softmax)" data-toc-modified-id="Algorithm-2---Boltzmann-Exploration-(Softmax)-1.2.2"><span class="toc-item-num">1.2.2 </span>Algorithm 2 - Boltzmann Exploration (Softmax)</a></span></li><li><span><a href="#Algorithm-3---Upper-Confidence-Bounds-(UCB)" data-toc-modified-id="Algorithm-3---Upper-Confidence-Bounds-(UCB)-1.2.3"><span class="toc-item-num">1.2.3 </span>Algorithm 3 - Upper Confidence Bounds (UCB)</a></span></li></ul></li><li><span><a href="#Experimenting-With-Bandit-Algorithms" data-toc-modified-id="Experimenting-With-Bandit-Algorithms-1.3"><span class="toc-item-num">1.3 </span>Experimenting With Bandit Algorithms</a></span></li></ul></li><li><span><a href="#Bayesian-Bandits" data-toc-modified-id="Bayesian-Bandits-2"><span class="toc-item-num">2 </span>Bayesian Bandits</a></span><ul class="toc-item"><li><span><a href="#Beta-Distribution" data-toc-modified-id="Beta-Distribution-2.1"><span class="toc-item-num">2.1 </span>Beta Distribution</a></span></li><li><span><a href="#Thompson-Sampling" data-toc-modified-id="Thompson-Sampling-2.2"><span class="toc-item-num">2.2 </span>Thompson Sampling</a></span></li><li><span><a href="#Notes-On-Bandit-Testings" data-toc-modified-id="Notes-On-Bandit-Testings-2.3"><span class="toc-item-num">2.3 </span>Notes On Bandit Testings</a></span><ul class="toc-item"><li><span><a href="#Short-term-testing" data-toc-modified-id="Short-term-testing-2.3.1"><span class="toc-item-num">2.3.1 </span>Short-term testing</a></span></li><li><span><a href="#Long-term-testing" data-toc-modified-id="Long-term-testing-2.3.2"><span class="toc-item-num">2.3.2 </span>Long-term testing</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2.4"><span class="toc-item-num">2.4 </span>Reference</a></span></li></ul></li></ul></div>
End of explanation
def generate_bernoulli_bandit_data(n_simulations, K):
Generate simulated data that represents success / trial data
Parameters
----------
n_simulations : int
the total number of turns in a simulation.
K : int
the total number of arms.
Returns
-------
ctr : float 1d ndarray, shape[K,]
the randomly generated empirical click through rate for each arm
rewards : bool 2d ndarray, shape [n_simulations, K]
given the empirical ctr, simulate in each turn of the simulation,
whether the arm gets pulled will obtain the
reward or not (whether the webpage gets clicked)
ctr = np.random.rand(K)
rewards = np.random.rand(n_simulations, K) < np.tile(ctr, (n_simulations, 1))
return ctr, rewards
K = 2
n_simulations = 5
ctr, rewards = generate_bernoulli_bandit_data(n_simulations, K)
print(ctr)
print(rewards)
Explanation: Multi-Armed Bandits
Imagine this scenario: we're in a casino. There are many different slot machines (known as "one-armed bandits", as they're known for robbing people), each with a lever (an arm, if you will). We think that some slot machines pay out more frequently than others do, and our goal is to walk out of the casino with the most money.
The question is, how do we learn which slot machine rewards us with the most money in the shortest amount of time? We could try all the slot machines out to get a sense of the expected return from playing each machine. But remember, each time we play a poor performing machine, we lower our take that we walk out of the casino with that night. In order to maximize how much money we walk out of the casino with, we will have to be efficient with how we collect our data.
Rewriting the scenario above in business language: each time a shopper comes to a webpage, we show them one of the $K$ variations of the webpage. They either click on it or do not, and we log this (binary) reward for the variation shown. Next, we proceed to the next shopper and have to choose one of the $K$ webpage variations again.
Differences Between A/B Testing and Bandit Testing
In both scenarios above, we would normally determine our "winner" (the slot machine that pays the most, or the best webpage variations that gets the most clicks) using the well-known A/B testing approach. The A/B testing approach consists of a period of pure exploration, where we're randomly assigning equal numbers of users to one of the $K$ variations and run the test until it's valid. After that, it jumps into pure exploitation, where you send 100% of your users to the more successful version of your site.
Two possible problems with the classical A/B testing approach are that:
It jumps discretely from exploration to exploitation, when we might be able to transition more smoothly.
During the exploratory phase (the test), it wastes resources exploring inferior options in order to gather as much data as possible.
Given the exploration-exploitation dilemma stated above, the bandit testing approach tries to account for this. The following graph depicts the difference between the two types of testing methods:
<img src="img/ab_vs_bandit.png" width="70%" height="70%">
If we have three variations that we wish to test, then with the A/B testing approach we try out each of the three variations in equal proportions until we are done with our test at week 5, and then select the variation with the highest value.
As for bandit testing, it attempts to use what it knows about each variation from the very beginning, and it continuously updates the probabilities that it will select each variation throughout the optimization process. In the above chart we can see that with each new week, the bandit testing reduces how often it selects the lower performing options and increases how often it selects the highest performing option.
We need to explore in order to figure out what works and what doesn't. On the other hand, if we exploit we take advantage of what we have learned. The bandit testing approach highlights the fact that collecting data also has its cost.
To be specific, bandit testing algorithms will try to minimize what's known as regret, which is the difference between our actual payoff and the payoff we would have collected had we played the optimal (best) options at every opportunity. There are tons of different bandit methods, in the next section we'll look at some of the more common ones.
Bandit Algorithms
Before introducing the algorithms and trying them out through simulations, we'll denote some notations and terminologies to formally define the problem:
Arms is simply the variations that we're testing (webpages that we're testing) and there will be $K$ of them in total.
In a simulation of $t$ turns (how many samples in a simulation), we'll maintain empirical means of the reward for each arm (e.g. after trying out arm A for 10 turns, it got 3 clicks, the empirical means is simply 0.3) that are updated at every turn $t$.
$u_i(t)$ is the empirical mean of arm $i$ after $t$ turns.
$p_i(t)$ is the probability of picking arm $i$ at turn $t$.
Let's look at our simulated data before diving into each algorithm (hopefully the docstrings are self-explanatory).
End of explanation
def epsilon_greedy(counts, epsilon=0.5, decrease_const=1000):
Adaptive epsilon greedy
Parameters
----------
counts : int 2d-array, shape(K, 2), where K = the total number of arms
success and failures for each arm where column 0 represents
success, 1 represents failure
epsilon : float
the initial probability of choosing a random arm;
1 - epsilon is the probability of choosing the current best arm
decrease_const : int
parameter for the adaptive (annealing) epsilon, where the epsilon
parameter will decrease as time goes by.
Returns
-------
(int) the chosen arm
# calculate the empirical means and the total number of simulations that were ran
n_arms = counts.shape[0]
totals = counts.sum(axis=1)
successes = counts[:, 0]
empirical_means = successes / totals
total_counts = counts.sum()
epsilon /= (1 + total_counts / decrease_const)
if np.random.rand() > epsilon:
return np.argmax(empirical_means)
else:
return np.random.randint(0, n_arms)
# counts : stores the counts of success and failures for each arm
# where column 0 represents success, 1 represents failure.
# each arm's count is initialized as 1 to ensure that each arm is
# played at least once, to prevent "cold start" problem and
# 0 division in the beginning
K = 2
counts = np.ones((K, 2))
print(counts)
epsilon_greedy(counts)
Explanation: Algorithm 1 - Epsilon Greedy
At each round $t = 1, 2, ...$ the Epsilon Greedy algorithm will:
Choose a random arm with the probability of $\epsilon$.
Choose the arm with the current best empirical mean with probability of $1-\epsilon$.
In mathematical notations:
\begin{align}
p_i(t+1)=
\begin{cases}
1 - \epsilon + \epsilon \big/ K & \quad \text{if i = } argmax_{j = 1, ..., K} \ u_j(t) \\
\epsilon \big/ K & \quad otherwise
\end{cases}
\end{align}
Or more intuitively:
When a new visitor comes to the site, the algorithm flips a coin that comes up tails with probability $\epsilon$. When it does in fact come up tails, the algorithm is going to explore. The exploration phase is to randomly choose amongst all possible arms with equal (uniform) probability and show the chosen one to the visitor.
On the other hand, the algorithm will exploit the best known solution with the probability of $1- \epsilon$. To exploit, the algorithm looks up the current empirical means and shows the best one to the visitor.
The image below sums up the algorithm pretty well.
<img src="img/epsilon_greedy.png" width="70%" height="70%">
End of explanation
# show adaptive learning rate
epsilon = 0.5
decrease_const = 1000
# the epsilon value after 10 turns
total_counts = 10
print(epsilon / (1 + total_counts / decrease_const))
# after 10000 turns
total_counts = 10000
print(epsilon / (1 + total_counts / decrease_const))
Explanation: The decrease_const parameter in the function above may look unfamiliar.
For the Epsilon Greedy algorithm, setting the $\epsilon$ can be a bit tricky. If it's too small, exploration will be slow at the beginning, and we will be slow to react to changes. If we happen to sample, say, the second-best arm the first few times, it may take a long time to discover that another arm is actually better. If $\epsilon$ is too big, we'll waste many trials pulling random arms without gaining much.
To accommodate for this situation, we will set the $\epsilon$ value at a higher value in the beginning and anneal (gradually lower) it over time. Intuitively, this simply means that after exploring around for a while, we become more certain about each arms' empirical means. After that, it's better to exploit.
In the function call above, the $\epsilon$ at turn $t$ will become:
\begin{align}
\epsilon(t) = \epsilon(0) \Big/ (1 + t/T)
\end{align}
Where $T$ is a new parameter that represents a decreasing constant.
Note that there are different ways of annealing a parameter, but the spirit is the same.
End of explanation
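As an aside (this sketch is not part of the original notebook), other annealing schedules achieve the same effect. For instance, an exponential decay of $\epsilon$ could be used, where the decay_rate value below is an arbitrary assumption chosen purely for illustration:
import numpy as np

def exponential_epsilon(initial_epsilon, turn, decay_rate=0.001):
    # exponentially decay epsilon as the number of turns grows
    return initial_epsilon * np.exp(-decay_rate * turn)

# epsilon after 0, 1000 and 10000 turns, starting from 0.5
for turn in [0, 1000, 10000]:
    print(turn, round(exponential_epsilon(0.5, turn), 4))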
def softmax(counts):
adaptive softmax
Parameters
----------
counts : int 2d-array, shape( K, 2 ), where K = the total number of arms
success and failures for each arm where column 0 represents
success, 1 represents failure
Returns
-------
(int) the chosen arm
# calculate the empirical means and the total number of simulations that were ran
totals = counts.sum(axis=1)
successes = counts[:, 0]
empirical_means = successes / totals
total_counts = counts.sum()
# annealing (adaptive learning rate)
tau = 1 / np.log(total_counts + 0.000001)
probs_n = np.exp(empirical_means / tau)
probs_d = probs_n.sum()
probs = probs_n / probs_d
cum_prob = 0.
z = np.random.rand()
for idx, prob in enumerate(probs):
cum_prob += prob
if cum_prob > z:
return idx
counts = np.ones((K, 2))
softmax(counts)
Explanation: Algorithm 2 - Boltzmann Exploration (Softmax)
The Softmax algorithm picks each arm with a probability that is proportional to its average reward.
\begin{align}
p_i(t+1)= \frac{ e^{u_i(t) / \tau} }{ \sum_{j=1}^K e^{u_j(t) / \tau} }
\end{align}
Where $\tau$ is a temperature parameter, controlling the randomness of the choice. When $\tau$ = 0, the algorithm acts like pure greedy. As $\tau$ grows to infinity, the algorithm will pick arms uniformly at random.
End of explanation
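To make the role of $\tau$ concrete, here is a small illustration (not part of the original notebook, with made-up empirical means of 0.3 and 0.35) of how the selection probabilities change with temperature:
import numpy as np

def softmax_probs(empirical_means, tau):
    # probability of picking each arm, proportional to exp(mean / tau)
    scores = np.exp(empirical_means / tau)
    return scores / scores.sum()

means = np.array([0.3, 0.35])
for tau in [0.01, 0.1, 1.0, 10.0]:
    # small tau -> near-greedy, large tau -> near-uniform
    print(tau, softmax_probs(means, tau).round(3))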
def ucb(counts):
Upper Confidence Bounds
Parameters
----------
counts : int 2d ndarray, shape [K, 2], where K = the total number of arms
success and failures for each arm where column 0 represents
success, 1 represents failure
Returns
-------
(int) the chosen arm
# calculate the empirical means and the total number of simulations that were ran
totals = counts.sum(axis=1)
successes = counts[:, 0]
empirical_means = successes / totals
total_counts = counts.sum()
bonus = np.sqrt(2 * np.log(total_counts) / totals)
return np.argmax(empirical_means + bonus)
counts = np.ones((K, 2))
ucb(counts)
Explanation: Algorithm 3 - Upper Confidence Bounds (UCB)
In the world of statistics, whenever we estimate some unknown parameter (such as the mean of a distribution) using random samples, there is a way to quantify the uncertainty inherent in our estimate. For example, the true mean of a fair six-sided die is 3.5. But if we only roll it once and get a 2, our best estimate of the mean is just 2. Obviously that estimate is not very good, and we can quantify the confidence we have for our estimate. There are confidence bounds which can be written, for example, as: "The mean of this die is 2, with a 95-th percentile lower bound of 1.4 and a 95-th percentile upper bound of 5.2."
The upper confidence bound (UCB) family of algorithms, as its name suggests, selects the arm with the largest upper confidence bound at each turn. The intuition is this: the more times we roll the die, the tighter the confidence bounds, and if we roll the die an infinite number of times then the width of the confidence bound is zero. In short, as the number of rolls increases, the uncertainty decreases, and so does the width of the confidence bound.
Thus, unlike Epsilon Greedy and Softmax algorithm that only keeps track of the empirical means, the UCB algorithm also maintains the number of times that each arm has been played, denoted by $n_i(t)$. Initially, each arm is played once. Afterwards, at round t, the algorithm greedily picks the arm $j(t)$ as follows:
\begin{align}
j(t) = argmax_{i = 1, ..., K} \left( u_i + \sqrt{\frac{2 \cdot ln(t)}{n_i}} \right)
\end{align}
We can see that the UCB algorithm will try to learn about arms that we don't know enough about. The main advantages of these types of algorithms are:
Take uncertainty of sample mean estimate into account in a smart way.
No parameters (e.g. epsilon, annealing) to validate.
End of explanation
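As a quick numerical illustration (not from the original notebook) of how the confidence bound tightens, the exploration bonus $\sqrt{2 \ln(t) / n_i}$ shrinks as an arm accumulates plays; the total play count of 10000 below is an arbitrary choice:
import numpy as np

t = 10000
for n_i in [10, 100, 1000, 10000]:
    # the bonus added to the empirical mean of an arm that has been played n_i times
    bonus = np.sqrt(2 * np.log(t) / n_i)
    print(n_i, round(bonus, 3))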
def run_bandit_algo(rewards, ctr, algo, **kwargs):
Run different types of bandit algorithms
Parameters
----------
rewards, ctr :
Return value of the `generate_bernoulli_bandit_data` function
algo : bandit function
[epsilon_greedy, softmax, ucb]
**kwargs :
additional parameters to pass in to the algo
Returns
-------
cum_regret : 1d ndarray, shape [n_simulations,]
The total regret accumulated over the experiment, where the regret
is measured by the maximum ctr - the chosen arm's ctr
opt_arm_percentage : float
The percentage of plays in which the optimal arm is pulled
n_simulations, K = rewards.shape
# counts : success and failures for each arm where column 0 represents
# success, 1 represents failure. Each arm's count is initialized as 1
# to ensure that each arm is played at least once, to prevent "cold start"
# problem and 0 division in the beginning
counts = np.ones((K, 2), dtype=int)
regret = np.zeros(n_simulations)
max_ctr_count = 0
max_ctr = np.max(ctr)
max_ctr_idx = np.argmax(ctr)
for i in range(n_simulations):
# 1. run the algorithm to obtain the arm that got pulled
# 2. update the success / failure according to the generated rewards
# 3. update the expected regret for each turn of the simulation
# 4. if the arm that got pulled is the one with the opt ctr, increment this count
arm = algo(counts, **kwargs)
if rewards[i, arm] == 1:
counts[arm, 0] += 1
else:
counts[arm, 1] += 1
regret[i] = max_ctr - ctr[arm]
if arm == max_ctr_idx:
max_ctr_count += 1
cum_regret = np.cumsum(regret)
opt_arm_percentage = max_ctr_count / n_simulations
return cum_regret, opt_arm_percentage
def run_experiment(K, n_simulations, algorithms):
Run the bandit algorithm's simulation by the
specified number of samples for simulation, the number of arms
and the different version of algorithm
Parameters
----------
n_simulations : int
the total number of turns in a simulation
K : int
the total number of arms
algorithms : list of functions
the list of bandit algorithms to simulate
Returns
-------
ctr : float 1d-array, shape [K,]
the randomly generated empirical click through rate for each arm
algo_opt_arm_percentage : float list
the percentage of simulations that chose the best arm
algo_cum_regret : float 2d-array, shape [n_simulations, length of the algorithm]
each column stores the cumulative regret for one algorithm
fig : matplotlib figure
the cumulative regret for each bandit algorithm
algo_opt_arm_percentage = []
algo_cum_regret = np.zeros((n_simulations, len(algorithms)))
fig = plt.figure(figsize=(10, 7))
ctr, rewards = generate_bernoulli_bandit_data(n_simulations, K)
for idx, algo in enumerate(algorithms):
cum_regret, opt_arm_percentage = run_bandit_algo(rewards, ctr, algo=algo)
algo_cum_regret[:, idx] = cum_regret
algo_opt_arm_percentage.append(opt_arm_percentage)
plt.semilogy(cum_regret, label=algo.__name__)
plt.title('Simulated Bandit Performance for K = {}'.format(K))
plt.ylabel('Cumulative Expected Regret')
plt.xlabel('Round Index')
plt.legend(loc='lower right')
return ctr, algo_opt_arm_percentage, algo_cum_regret, fig
# change default figure size and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
K = 5
n_simulations = 10000
algorithms = [epsilon_greedy, softmax, ucb]
np.random.seed(2345)
ctr, algo_opt_arm_percentage, algo_cum_regret, fig = run_experiment(K, n_simulations, algorithms)
plt.show()
print(ctr)
print(algo_opt_arm_percentage)
Explanation: Experimenting With Bandit Algorithms
In this section, we'll use our simulated data to experiment with our algorithms. To do this we'll also need a metric to calculate how well we are doing. Recall the absolute best we can do is to always pick the webpage (arm) with the largest click through rate (ctr). Denote this best arm's probability of $w_{opt}$. Our score should be relative to how well we would have done had we chosen the best arm from the beginning. This motivates the total regret of a strategy, defined as:
\begin{align}
R_T & = \sum_{t=1}^{T} \left( w_{opt} - w_{I(t)} \right) \nonumber \\
& = T w_{opt} - \sum_{t=1}^{T} w_{I(t)}
\end{align}
Where $T$ is the total number of samples in the experiment, $w_{I(t)}$ is the probability of obtaining the reward (getting clicked) of the chosen arm in the $t_{th}$ turn. A total regret of 0 means the strategy is attaining the best possible score. This is likely not possible, as initially our algorithm will often make the wrong choice. Ideally, a strategy's total regret should flatten as it learns the best bandit. (Mathematically, we achieve $w_{I(t)} = w_{opt}$ often)
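As a toy illustration of this definition (with made-up numbers, not the values used in the experiment below): if $w_{opt} = 0.35$ and the arms chosen over five rounds had click probabilities 0.25, 0.25, 0.35, 0.35, 0.35, the total regret is $0.1 + 0.1 = 0.2$:
w_opt = 0.35
w_chosen = np.array([0.25, 0.25, 0.35, 0.35, 0.35])  # hypothetical choices
print(np.sum(w_opt - w_chosen))     # total regret, 0.2
print(np.cumsum(w_opt - w_chosen))  # per-round cumulative regret, the quantity plotted below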
We'll run the experiment and plot the cumulative regret of the three algorithms below:
End of explanation
plt.figure(figsize=(12, 5))
x = np.linspace(0.01, .99, 100)
params = [(2, 5), (1, 1), (5, 5), (20, 4)]
for a, b in params:
y = beta.pdf(x, a, b)
lines = plt.plot(x, y, label="(%.1f,%.1f)" % (a, b), lw=2)
plt.fill_between(x, 0, y, alpha=0.2, color=lines[0].get_color())
plt.autoscale(tight=True)
plt.legend(loc='upper left', title="(a,b)-parameters")
plt.show()
Explanation: Section Conclusion: The cumulative expected regret plot from our experiment shows that all three algorithms have converged: the cumulative expected regret flattens out to a roughly steady level, i.e. its growth slows down once the best arm has been learned. UCB also seems to be doing better than the other two algorithms over this limited horizon; the way to read the graph is the lower the better, since the y-axis represents cumulative regret.
Bayesian Bandits
Next, we'll introduce a Bayesian method called Thompson Sampling. Recall that the problem we want to solve is the following. We came up with $K$ different variations of the webpage (e.g. different layouts) and we wish to find the ones with the best click through rate (CTR), e.g. clicking to sign-up for the newsletter. Let's represent each CTR by $\theta_i$ - i.e., $\theta_i$ is the true probability that an individual user will click when shown the $i_{th}$ webpage. It is important to note that we don't actually know what $\theta_i$ is - if we did, we could simply choose the $i$ for which $\theta_i$ is largest and move on. We're simply pretending that we know it in order to simulate the performance of the algorithm.
Using the Bayesian approach we will construct a prior probability distribution which represents our original belief about the actual value of $\theta_i$, the ctr of the $i_{th}$ webpage. The prior we'll use is the Beta distribution. Here's a quick recap of the distribution:
Beta Distribution
The Beta distribution is very useful in Bayesian statistics. A random variable $X$ has a Beta distribution, with parameters $(\alpha, \beta)$, if its density function is:
\begin{align}
f_X(x | \; \alpha, \beta ) = \frac{ x^{(\alpha - 1)}(1-x)^{ (\beta - 1) } }{B(\alpha, \beta) }
\end{align}
where $B$ is the Beta function (hence the name). The random variable $X$ is only allowed in [0,1], making the Beta distribution a popular distribution for decimal values, probabilities and proportions. The values of $\alpha$ and $\beta$, both positive values, provide great flexibility in the shape of the distribution. Below we plot some Beta distributions with different $\alpha$ and $\beta$ values:
End of explanation
class BayesianBandit:
Thompson Sampling
Parameters
----------
K : int
total number of arms
prior_params : list of float length 2 tuple, default None, (optional)
each element of the list is a tuple, where each tuple
contains the alpha and beta parameter that represents the prior
beta distribution for each arm. If not supplied
it will assume that each arm's prior starts with a uniform distribution
Attributes
----------
trials, success : int 1d ndarray, shape [K,]
stores the trials and success for each arm,
e.g. trials = [1, 1] and success = [0, 1] means
that both arms have been pulled once and only the second arm
has generated a reward (a click)
def __init__(self, K, prior_params=None):
if prior_params:
priors = namedtuple("priors", ["alpha", "beta"])
prior = [priors(*p) for p in prior_params]
self.alphas = np.array([p.alpha for p in prior])
self.betas = np.array([p.beta for p in prior])
else:
self.alphas = np.ones(K)
self.betas = np.ones(K)
self.trials = np.zeros(K, dtype=int)
self.success = np.zeros(K, dtype=int)
def get_recommendation(self):
for all arms, construct their beta distribution and
draw a random sample from it, then return the arm
with the maximum value random sample
theta = np.random.beta(self.alphas + self.success,
self.betas + self.trials - self.success)
return np.argmax(theta)
def update_result(self, arm, converted):
override the trials and success array, the success array
will only be updated if it has generated a reward
self.trials[arm] += 1
if converted:
self.success[arm] += 1
return self
def experiment(T, ctr, prior_params=None):
run the experiment for Thompson Sampling;
pass in ctr, the fixed click through rate for each arm
(the total number of arms K is inferred from the length of ctr)
Parameters
----------
T : int
number of simulation in an experiment
ctr : float sequence, len = K (total number of arms)
the empirical click through rate for each arm
prior_params : list of float length 2 tuple, default None, (optional)
each element of the list is a tuple, where each tuple
contains the alpha and beta parameter that represents the prior
beta distribution for each arm. If not supplied
it will assume that each arm's prior starts with a uniform distribution
Returns
-------
ctr : float sequence, len = K
the supplied or the randomly generated ctr
trials, success : int 2d ndarray, shape [T, K]
trials and success recorded for each turn of the experiment
alphas, betas : float 1d ndarray, shape [K,]
the alpha and beta parameters for each arm
K = len(ctr)
trials = np.zeros((T, K), dtype=int)
success = np.zeros((T, K), dtype=int)
bayes_bandit = BayesianBandit(K, prior_params)
for t in range(T):
arm = bayes_bandit.get_recommendation()
converted = np.random.rand() < ctr[arm]
bayes_bandit.update_result(arm, converted)
trials[t] = bayes_bandit.trials
success[t] = bayes_bandit.success
return ctr, trials, success, bayes_bandit.alphas, bayes_bandit.betas
def experiment_plot(ctr, trials, success):
Pass in the ctr, trials and success returned
by the `experiment` function and plot
the Cumulative Number of Turns For Each Arm and
the CTR's Convergence Plot side by side
T, K = trials.shape
n = np.arange(T) + 1
fig = plt.figure(figsize=(14, 7))
plt.subplot(121)
for i in range(K):
plt.loglog(n, trials[:, i], label="arm {}".format(i + 1))
plt.legend(loc="upper left")
plt.xlabel("Number of turns")
plt.ylabel("Number of turns/arm")
plt.title("Cumulative Number of Turns For Each Arm")
plt.subplot(122)
for i in range(K):
plt.semilogx(n, np.zeros(T) + ctr[i], label="arm {}'s CTR".format(i + 1))
plt.semilogx(n, (success[:, 0] + success[:, 1]) / n, label="CTR at turn t")
plt.axis([1, T, 0, 1])
plt.legend(loc="upper left")
plt.xlabel("Number of turns")
plt.ylabel("CTR")
plt.title("CTR's Convergence Plot")
return fig
# number of simulation in an experiment
T = 10000
# the empirical click through rate for each arm
ctr = 0.25, 0.35
ctr, trials, success, alphas, betas = experiment(T=T, ctr=ctr)
trials
fig = experiment_plot(ctr, trials, success)
plt.show()
Explanation: There are two important things to note about the Beta distribution:
The first is the presence of the flat distribution above, specified by parameters $(1,1)$. This is the Uniform distribution. Hence the Beta distribution is a generalization of the Uniform distribution.
The second is that there is an interesting connection between the Beta distribution and the Binomial distribution. Suppose we are interested in some unknown proportion or probability $p$. We assign a $\text{Beta}(\alpha, \beta)$ prior to $p$. We observe some data generated by a Binomial process, say $X \sim \text{Binomial}(N, p)$, with $p$ still unknown. Then our posterior is again a Beta distribution, i.e. $p | X \sim \text{Beta}( \alpha + X, \beta + N -X )$. Succinctly, one can relate the two by "a Beta prior with Binomial observations creates a Beta posterior".
In light of the above two paragraphs, if we start with a $\text{Beta}(1,1)$ prior on $p$ (which is a Uniform), observe data $X \sim \text{Binomial}(N, p)$, then our posterior is $\text{Beta}(1 + X, 1 + N - X)$.
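A small numerical sketch of that update (with hypothetical numbers): starting from the flat $\text{Beta}(1, 1)$ prior and observing, say, 7 clicks out of 20 impressions, the posterior is $\text{Beta}(8, 14)$:
from scipy.stats import beta
clicks, impressions = 7, 20  # hypothetical observations
posterior = beta(1 + clicks, 1 + impressions - clicks)
print(posterior.mean())      # roughly 8 / 22, about 0.36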
Thompson Sampling
So we assume priors on the probability of the ctr for each webpage. To be explicit about the phrase "assuming the priors": we assume that we're completely ignorant of these probabilities, so a very natural prior is the flat prior over 0 to 1, $\text{Beta}(\alpha=1,\beta=1)$. The algorithm then proceeds as follows:
For each turn:
Sample a random variable $X_i$ from the prior of arm $i$, for all $i$ ($K$ in total).
Select the arm with largest sample, i.e. select $i = \text{argmax}\; X_i$.
Observe the result of pulled arm $i$, and update your prior with that arm $i$.
Return to 1.
Like all the algorithms we've introduced before, Thompson Sampling suggests that we should not discard losers, but we should pick them at a decreasing rate as we gather confidence that there exist better webpages (arms). This follows because there is always a non-zero chance that a webpage with a lower ctr will get chosen, but the probability of this event decreases as we play more rounds.
End of explanation
def plot_beta_dist(ctr, trials, success, alphas, betas, turns):
Pass in the ctr, trials and success, alphas, betas returned
by the `experiment` function and the number of turns
and plot the beta distribution for all the arms in that turn
subplot_num = len(turns) // 2  # integer division, plt.subplot expects an int
x = np.linspace(0.001, .999, 200)
fig = plt.figure(figsize=(14, 7))
for idx, turn in enumerate(turns):
plt.subplot(subplot_num, 2, idx + 1)
for i in range(len(ctr)):
y = beta(alphas[i] + success[turn, i],
betas[i] + trials[turn, i] - success[turn, i]).pdf(x)
line = plt.plot(x, y, lw=2, label="arm {}".format(i + 1))
color = line[0].get_color()
plt.fill_between(x, 0, y, alpha=0.2, color=color)
plt.axvline(x=ctr[i], color=color, linestyle="--", lw=2)
plt.title("Posteriors After {} turns".format(turn))
plt.legend(loc="upper right")
return fig
turns = [1, 100, 1000, 9999]
posterior_fig = plot_beta_dist(ctr, trials, success, alphas, betas, turns)
plt.show()
Explanation: In our simulation, we gave the Bayesian bandit two webpages (arms) - one had a CTR of 0.25, the other had a CTR of 0.35. To start with, both webpages were displayed to the user with roughly equal probability. Over time, evidence accumulated that arm 2 was considerably better than arm 1. At this point the algorithm switched to displaying primarily webpage 2, and the overall CTR of the experiment converged to 0.35 (the optimal CTR).
We can also visualize the Beta posterior for each arm at different turns.
End of explanation |
90 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Imports
Step1: Default settings
Step2: Load PMT pulses
Pulse shape
One of the elements of simulated S1s is the single p.e. pulse model. We extract this from the gain calibration dataset.
Step3: What do we need the cumulative fraction for? Well, we input this into the custom_pmt_pulse_current in pax.simulation. Here is a quick check that all is well. There is just a little shift, but the alignment is quite arbitrary anyway.
Step4: Gain variation
Step11: S1 model
Simulation
Step13: Here is what we get out.
wv_matrix is a matrix containing the y-coordinates of the waveforms. The columns are the individual waveforms; to get the first waveform, go wv_matrix[:, 0].
Step14: Statistical errors are negligible if you have more than a few hundred waveforms.
Systematic errors
Step15: Real data waveforms
Here we read the S1 data for three (highfield) datasets
Step17: Here's an example waveform
Step18: Model-data comparison
Plotting
Step19: Fitting
Residuals function
Step20: Statistics of nphotons and stability of fit
Step21: Wait, what? The residuals spread get larger with increasing stats? That does not sound right.
Step22: Fit fit fit
Step23: Fit singlet fraction and TTS
Step24: GOF uncertainty
Need higher stats?
Fit three parameters
Step25: Fit four parameters
ER
Step26: The fit is pushing the singlet livetime to very low values... There is some degeneracy here, and also some mis-modeling, it seems. The sample at 0 is always under-estimated. Why? Maybe because the tts is actually quite low but modeled here as large. The effects may not be symmetric | Python Code:
import numpy as np
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
from scipy import stats
# import warnings
# warnings.filterwarnings('error')
from multihist import Hist1d, Histdd
Explanation: Imports
End of explanation
# Digitizer sample size
dt = 2
# Waveform time labels
spe_ts = np.linspace(0, 639*2, 640) - 340 * 2
# Valid time (because the waveform does not range the full time span)
valid_t_range = (-100, 300)
t_mask = (valid_t_range[0] <= spe_ts) & (spe_ts < valid_t_range[1])
spe_ts = spe_ts[t_mask]
spe_t_edges = np.concatenate([[spe_ts[0] - dt/2], spe_ts + dt/2])
default_params = dict(
t1 = 3.1, # Singlet lifetime, Nest 2014 p2
t3 = 24, # Triplet lifetime, Nest 2014 p2
fs = 0.2, # Singlet fraction
tts = 2., # Transit time spread.
s1_min=50,
s1_max=100,
dset='er',
pulse_model=1, # This is the CHANNEL that is used...
n_photons = int(2e5),
t_min = -15.,
t_max = 125.,
s1_sample = 'data', # 'uniform'
error_offset = 0. ,
error_pct = 0.
)
def get_params(params):
'''
Returns full set of parameters, setting the values given in `params` and setting the values in
`default_params` if not set explicity.
'''
for k, v in default_params.items(): # key, value
params.setdefault(k, v)
if params['tts'] < 0:
params['tts'] = 1e-6
return params
Explanation: Default settings
End of explanation
import pickle
from scipy.interpolate import interp1d
spe_pulses_cum = []
spe_ys = []
for ch, fn in enumerate(['170323_103732', '170323_104831']):
with open('../pulse_shape_single_pe/%s_ch%d.pickle' % (fn, ch) , 'rb') as infile:
ys = pickle.load(infile)[t_mask]
plt.plot(spe_ts, ys/ys.sum(), label='Channel %d' % ch)
spe_ys.append(ys/ys.sum())
# spe_pulses_cum: list of 2 elements: cumulative distribution for two channels
spe_pulses_cum.append(
interp1d(spe_ts, np.cumsum(ys)/ys.sum())
)
plt.ylim(-0.01, 0.3)
plt.xlabel('Time (ns)')
plt.ylabel('Area / (2 ns)')
plt.legend()
plt.title('Relative (normalized) amplitude of single p.e. pulses.')
plt.show()
for ch, p in enumerate(spe_pulses_cum):
plt.plot(spe_ts, p(spe_ts), label='Channel %d' % ch)
plt.grid(alpha=0.2, linestyle='-')
plt.xlabel('Time (ns)')
plt.ylabel('Cumulative fraction of area found')
plt.legend()
plt.show()
Explanation: Load PMT pulses
Pulse shape
One of the elements of simulated S1s is the single p.e. pulse model. We extract this from the gain calibration dataset.
End of explanation
# custom_pmt_pulse_current(pmt_pulse, offset, dt, samples_before, samples_after)
from pax.simulation import custom_pmt_pulse_current
for ch, c in zip([0, 1], ['blue', 'red']):
plt.plot(custom_pmt_pulse_current(spe_pulses_cum[ch], 0.1, 2, 10, 100), color=c)
plt.plot(spe_ts * 0.5 + 10 - 0.5, spe_ys[ch] * 0.5, color=c, ls='--')
plt.xlim(-10, 60)
plt.xlabel('Time sample number')
plt.ylabel('Relative amplitude')
plt.show()
Explanation: What do we need the cumulative fraction for? Well, we input this into the custom_pmt_pulse_current in pax.simulation. Here is a quick check that all is well. There is just a little shift, but the alignment is quite arbitrary anyway.
End of explanation
gain_params = []
for ch, fn in enumerate(['170323_103732', '170323_104831']):
with open('../pulse_shape_single_pe/%s_ch%d_function.pickle' % (fn, ch) , 'rb') as infile:
_norm, _popt, _perr = pickle.load(infile)
gain_params.append(np.concatenate([np.array([_norm]), _popt, _perr]))
gain_params = np.array(gain_params)
import scipy
def area_sample(n_values, gain_params, **params):
params = get_params(params)
channel = params['pulse_model']
norm, mu, sigma, _, _ = gain_params[channel]
lower, upper = (0., 3.)
X = stats.truncnorm((lower - mu) / sigma, (upper - mu) / sigma, loc=mu, scale=sigma)
return X.rvs(n_values)
def gaus_trunc(x, mu, sigma):
return (x > 0) * np.exp( - (x - mu)**2 / (2 * sigma**2))
nbins = 600
ran = (-0.5, 3.5)
for channel in (0, 1):
plt.hist(area_sample(200000, gain_params, pulse_model = channel), bins=nbins, histtype='step', density=True, range=ran)
x_plot = np.linspace(*ran, num=nbins)
y_plot = gaus_trunc(x_plot,gain_params[channel][1], gain_params[channel][2])
norm = 1 / (np.sum(y_plot) * (ran[1] - ran[0])) * nbins
plt.plot(x_plot, norm * y_plot)
plt.title('Channel %d' % channel)
plt.show()
Explanation: Gain variation
End of explanation
import numba
# def split_s1_groups(x, n_x, s1_min, s1_max):
# Splits x into groups with uniform(s1_min, s1_max) elements, then return matrix of histograms per group.
# Returns: integer array (n_x, n_groups)
# n_x: number of possible values in x. Assumed to be from 0 ... n_x - 1
# s1_min: minimum S1 number of hits
# s1_max: maximum S1 number of hits
#
# # We want to exhaust the indices x. Simulate a generous amount of S1 sizes
# n_s1_est = int(1.5 * 2 * len(x) / (s1_min + s1_max))
# if
# hits_per_s1 = np.random.randint(s1_min, s1_max, size=n_s1_est)
# result = np.zeros((n_x, n_s1_est), dtype=np.int)
# s1_i = _split_s1_groups(x, hits_per_s1, result)
# return result[:,:s1_i - 1]
# @numba.jit(nopython=True)
# def _split_s1_groups(x, hits_per_s1, result):
# s1_i = 0
# for i in x:
# if hits_per_s1[s1_i] == 0:
# s1_i += 1
# continue
# result[i, s1_i] += 1
# hits_per_s1[s1_i] -= 1
# return s1_i
def split_s1_groups(x, n_x, areas, **params):
Splits x into groups with uniform (s1_min, s1_max) elements, then return matrix of histograms per group.
Returns: integer array (n_x, n_groups)
n_x: number of possible values in x. Assumed to be from 0 ... n_x - 1
s1_min: minimum S1 number of hits
s1_max: maximum S1 number of hits
params = get_params(params)
# We want to exhaust the indices x. Simulate a generous amount of S1 sizes
n_s1_est = int(1.5 * 2 * len(x) / (params['s1_min'] + params['s1_max']))
if params['s1_sample'] == 'data' and 'xams_data' not in globals():
print('Warning: data-derived s1 area distribution not possible, reverting to uniform...')
params['s1_sample'] = 'uniform'
if params['s1_sample'] == 'uniform':
pe_per_s1 = (params['s1_max'] - params['s1_min']) * np.random.random(size=n_s1_est) + params['s1_min']
elif params['s1_sample'] == 'data':
# Take S1 from the data sample
s1s_data = xams_data[params['dset']]['s1']
s1s_data = s1s_data[(s1s_data >= params['s1_min']) & (s1s_data < params['s1_max'])]
pe_per_s1 = np.random.choice(s1s_data, size=n_s1_est)
else:
raise ValueError('Configuration not understood, got this: ', params['s1_sample'])
result = np.zeros((n_x, n_s1_est), dtype=float)
# s1_i = _split_s1_groups(x, pe_per_s1, result)
s1_i = _split_s1_groups(x, pe_per_s1, result, areas)
return result[:,:s1_i - 1]
@numba.jit(nopython=True)
def _split_s1_groups(x, hits_per_s1, result, areas):
s1_i = 0
for photon_i, i in enumerate(x):
if hits_per_s1[s1_i] < 0:
s1_i += 1
continue
result[i, s1_i] += areas[photon_i]
hits_per_s1[s1_i] -= areas[photon_i]
return s1_i
# %%timeit
# split_s1_groups(np.random.randint(0, 100, size=int(1e6)), 101, 10, 20)
def shift(x, n):
Shift the array x n samples to the right, adding zeros to the left.
if n > 0:
return np.pad(x, (n, 0), mode='constant')[:len(x)]
else:
return np.pad(x, (0, -n), mode='constant')[-len(x):]
def simulate_s1_pulse(**params):
# n_photons=int(2e5),
Return (wv_matrix, time_matrix, t_shift vector) for simulated S1s, consisting of n_photons in total
params = get_params(params)
n_photons = params['n_photons']
##
# Make matrix (n_samples, n_waveforms) of pulse waveforms with various shifts
##
i_noshift = np.searchsorted(spe_t_edges, [0])[0] # Index corresponding to no shift in the waveform
y = spe_ys[params['pulse_model']] # This is the CHANNEL
# This is a matrix filled with waveforms, ordered by their SHIFT.
# So, these are all just model waveforms and will be selected later
wv_matrix = np.vstack([shift(y, i - i_noshift)
for i in range(len(spe_ts))]).T
##
# Simulate S1 pulse times, convert to index
##
times = np.zeros(n_photons)
n_singlets = np.random.binomial(n=n_photons, p=params['fs']) # We randomly select if the photon came from a singlet
# or triplet decay
# Time is distributed according to exponential distribution
# This is the TRUE time of all the photons generated, assuming time=0 is the time of the interaction
times += np.concatenate([
np.random.exponential(params['t1'], n_singlets),
np.random.exponential(params['t3'], n_photons - n_singlets)
])
# Since `times` is now sorted in (singlet, triplet), shuffle them
np.random.shuffle(times)
# Here we start taking into account detector physics: the transit time spread (simulated as normal dist.)
times += np.random.normal(0, params['tts'], size=n_photons)
# Find the bin that the photon would be in if it were sampled.
indices = np.searchsorted(spe_t_edges, times)
# Now, we delete all the photons that are outside of the bin range and re-match to the bin centers
# (Check the searchsorted documentation)
indices = indices[~((indices == 0) | (indices == len(spe_t_edges)))] - 1
# This is the new amount of photons simulated
if len(indices) < n_photons:
# print('Warning: I just threw away %d photons...' % (n_photons - len(indices)))
n_photons = len(indices)
# TODO: gain variation simulation
areas = area_sample(n_photons, gain_params, **params)
# NOTE do we also want to take the difference between the two channels into accont?
##
# Build instruction matrix, simulate waveforms
##
# So far, we've just been simulating a bunch of photons (very many).
# We are now going to split this into S1s: the split will be made at a random point between s1_min and s1_max.
# `index_matrix` is a matrix split into groups forming S1s.
# index_matrix = split_s1_groups(indices, len(spe_t_edges) - 1, params['s1_min'], params['s1_max'])
index_matrix = split_s1_groups(indices, len(spe_t_edges) - 1, areas, **params)
# Now, index_matrix[:, 0] contains a list of number of entries for the shift for each timestamp in bin
n_s1 = index_matrix.shape[1]
# return wv_matrix, index_matrix
# Remember that wv_matrix is a matrix of waveforms, each element at position i of which is shifted i samples
s1_waveforms = np.dot(wv_matrix, index_matrix)
# return s1_waveforms
##
# Alignment based on maximum sample, compute average pulse
##
time_matrix, t_shift = aligned_time_matrix(spe_ts, s1_waveforms)
return s1_waveforms, time_matrix, t_shift
def aligned_time_matrix(ts, wv_matrix, mode = '10p'):
Return time matrix that would align waveforms in wv_matrix
n_s1 = wv_matrix.shape[1]
if mode == 'max':
# Find the position of maximum sample and match its times
t_shift = ts[np.argmax(wv_matrix, axis=0)]
elif mode == '10p':
fraction_reached = np.cumsum(wv_matrix, axis=0) / np.sum(wv_matrix, axis=0)
# Get the sample where 10% is reached by taking the sample closest to the 10% point
# This is as good as you can get without introducing fractional samples (which may be an improvement)
# TODO get interpolation in here
distance_to_10p_point = np.abs(fraction_reached - 0.1)
t_shift = ts[np.argmin(distance_to_10p_point, axis=0)]
time_matrix = np.repeat(ts, n_s1).reshape(wv_matrix.shape)
time_matrix -= t_shift[np.newaxis,:]
return time_matrix, t_shift
def average_pulse(time_matrix, wv_matrix):
Return average pulse, given time and waveform matrices
h, _ = np.histogram(time_matrix, bins=spe_t_edges, weights=wv_matrix)
h /= h.sum()
return h
def s1_average_pulse_model(*args, **kwargs):
wv_matrix, time_matrix, _ = simulate_s1_pulse(*args, **kwargs)
return average_pulse(time_matrix, wv_matrix)
s1_wvs, tmat, _ = simulate_s1_pulse(n_photons=int(2e5), t3=1, t1=50, tts=1, fs=0.5, dset='nr')
for i in range(100):
plt.plot(tmat[:, i], s1_wvs[:, i], alpha=0.1, c='k')
plt.grid(alpha=0.2, linestyle='-')
Explanation: S1 model
Simulation
End of explanation
def s1_models_resample(*args, n_data_s1s=1000, bootstrap_trials=10, **kwargs):
Return bootstrap_trials waveform templates from sampling n_data_s1s s1s
wv_matrix, time_matrix, _ = simulate_s1_pulse(*args, **kwargs)
n_s1s = wv_matrix.shape[1]
waveform_templates = np.zeros((len(spe_ts), bootstrap_trials))
for i in range(bootstrap_trials):
new_indices = np.random.randint(n_s1s, size=n_data_s1s)
waveform_templates[:, i] = average_pulse(time_matrix[:, new_indices],
wv_matrix[:, new_indices])
return waveform_templates
def sigmas_plot(x, q, color='b', **kwargs):
for n_sigma, alpha in [(1,0.5), (2, 0.1)]:
plt.fill_between(x,
np.percentile(q, 100 * stats.norm.cdf(-n_sigma), axis=1),
np.percentile(q, 100 * stats.norm.cdf(n_sigma), axis=1),
alpha=alpha, linewidth=0, color=color, step='mid')
plt.plot(x,
np.percentile(q, 50, axis=1),
color=color, linestyle='-', alpha=0.5, linewidth=1, **kwargs)
waveform_templates = s1_models_resample(n_data_s1s=100, s1_min=50, s1_max=60, bootstrap_trials=100)
sigmas_plot(spe_ts, waveform_templates)
Explanation: Here is what we get out.
wv_matrix is a matrix containing the y-coordinates of the waveforms. The columns are the individual waveforms; to get the first waveform, go wv_matrix[:, 0]. time_matrix is the same thing except that it contains the times. t_shift_vector contains the shift of the waveform in ns (based on pulse times).
Statistical errors
Here we simulate statistical errors by simulating n_data_s1s and then performing bootstrap trials. The conclusion:....
End of explanation
import itertools
def s1_models_error(*args, shifts=None, **kwargs):
'''
Compute the error on the S1 waveform given errors on specific parameters.
This will compute the S1 model for parameter +error, +0, and -error.
All combinations of parameters are tried.
`shifts` is a dict containing the allowed shift (+/-) for each model parameter.
`*args` and `**kwargs` will be passed to `s1_average_pulse_model` to compute the base model.
This function can also be used for getting the difference in pulse model for channel 0 and 1.
'''
if shifts is None:
# Default uncertainty: in pulse model and in TTS
shifts = dict(tts=0.5, pulse_model=[0,1])
base_model = s1_average_pulse_model(*args, **kwargs)
# Allow specifying a single +- amplitude of variation
for p, shift_values in shifts.items():
if isinstance(shift_values, (float, int)):
shifts[p] = kwargs.get(p, default_params[p]) + np.array([-1, 0, 1]) * shift_values
shift_pars = sorted(shifts.keys())
shift_values = [shifts[k] for k in shift_pars]
# shift_value_combs is a list of parameter combinations that will be tried to compute the average pulse.
# Contains all combinations of (+, 0, -) for all the parameters (3^n combinations for n parameters).
shift_value_combs = list(itertools.product(*shift_values))
alt_models = []
for vs in shift_value_combs:
kw = dict()
kw.update(kwargs)
for i, p in enumerate(shift_pars):
kw[p] = vs[i]
alt_models.append(s1_average_pulse_model(*args, **kw))
alt_models = np.vstack(alt_models)
# Hmmm. this seems like an upper estimate of the error, no?
# ask jelle
minus = np.min(alt_models, axis=0)
plus = np.max(alt_models, axis=0)
return minus, base_model, plus
# return [s1_average_pulse_model(*args, **kwargs)
# for q in [-tts_sigma, 0, tts_sigma]]
minus, base, plus = s1_models_error()
plt.fill_between(spe_ts, minus, plus, alpha=0.5, linewidth=0, label='Uncertainty')
plt.plot(spe_ts, base, label='Base model')
plt.xlabel('Time (ns)')
plt.ylabel('Fraction of amplitude')
plt.legend()
plt.show()
Explanation: Statistical errors are negligible if you have more than a few hundred waveforms.
Systematic errors
End of explanation
xams_data = dict()
xams_data['nr'], xams_data['er'], xams_data['bg_nr'] = pickle.load(open('highfield_dataframes.pickle', 'rb'))
xams_s1s = dict()
# Get pulse waveforms to matrix rather than object column
for k, d in xams_data.items():
xams_s1s[k] = np.array([x for x in d['s1_pulse']])
del d['s1_pulse']
Explanation: Real data waveforms
Here we read the S1 data for three (highfield) datasets: NR, ER and BG_NR. We store it in the form of a dict (keys: er, nr, bg_nr). Each dict item is an array containing the waveforms (per row).
End of explanation
plt.plot(spe_ts, xams_s1s['nr'][0])
plt.xlabel('Time (ns)')
plt.ylabel('Amplitude')
plt.show()
def real_s1_wv(**params):
Return average S1 waveform, number of S1s it was constructed from
params = get_params(params)
areas = xams_data[params['dset']]['s1'].values
mask = (params['s1_min'] < areas) & (areas < params['s1_max'])
# Could now derive distribution, I'll just assume uniform for the moment.
# Hist1d(areas[mask],
# bins=np.linspace(params['s1_min'], params['s1_max'], 100)).plot()
n_data_s1s = mask.sum()
wvs = xams_s1s[params['dset']][mask].T
tmat, _ = aligned_time_matrix(spe_ts, wvs)
real_s1_avg = average_pulse(tmat, wvs)
return real_s1_avg, n_data_s1s
s1_range = (10, 20)
dset ='nr'
ydata, n_data_s1s = real_s1_wv(s1_min = s1_range[0], s1_max = s1_range[1])
plt.plot(spe_ts, ydata)
plt.title('Average waveform %.1f - %.1f p.e., %d events.' % (s1_range[0], s1_range[1], n_data_s1s))
s1_bins = np.linspace(0, 100, 11)
for left, right in zip(s1_bins[:-1], s1_bins[1:]):
ydata, n_data_s1s = real_s1_wv(s1_min = left, s1_max = right, dset = 'er')
plt.plot(spe_ts, ydata, label = '%d - %d p.e.' % (left, right))
#plt.title('Average waveform %.1f - %.1f p.e., %d events.' % (left, right, n_data_s1s))
#plt.show()
plt.xlim(-10, 100)
plt.title('ER')
plt.legend()
plt.show()
for left, right in zip(s1_bins[:-1], s1_bins[1:]):
ydata, n_data_s1s = real_s1_wv(s1_min = left, s1_max = right, dset='nr')
plt.plot(spe_ts, ydata, label = '%d - %d p.e.' % (left, right))
#plt.title('Average waveform %.1f - %.1f p.e., %d events.' % (left, right, n_data_s1s))
#plt.show()
plt.xlim(-10, 100)
plt.title('NR')
plt.legend()
plt.show()
Explanation: Here's an example waveform
End of explanation
def residuals(ydata, minus, base, plus, **params):
params = get_params(params)
# CHANGED BY ERIK check for zero
sigma = get_sigma(minus, base, plus, **params)
if 0. in sigma:
zero_positions = np.where(sigma == 0)
print('Warning: found zero in error array at positions: ', zero_positions)
print('Replacing with infinite error instead...')
for pos in zero_positions:
sigma[pos] = np.inf
return (ydata - base) / sigma
def get_sigma(minus, base, plus, **params):
params = get_params(params)
sigma = np.abs(plus - minus)/2 + params['error_offset'] + params['error_pct'] * np.abs(base)
return sigma
def comparison_plot(ydata, minus, base, plus, **params):
params = get_params(params)
sigmas = get_sigma(minus, base, plus, **params)
# large subplot
ax2 = plt.subplot2grid((3,1), (2,0))
ax1 = plt.subplot2grid((3,1), (0,0), rowspan=2, sharex=ax2)
#f, (ax1, ax2) = plt.subplots(2, sharex=True)
plt.sca(ax1)
# plt.fill_between(spe_ts, minus, plus, alpha=0.5, linewidth=0, step='mid')
plt.fill_between(spe_ts, base - sigmas, base + sigmas,
alpha=0.5, linewidth=0, step='mid')
plt.plot(spe_ts, base, linestyle='steps-mid', label='Model')
plt.plot(spe_ts, ydata, marker='.', linestyle='', markersize=3, c='k', label='Observed')
plt.grid(alpha=0.1, linestyle='-', which='both')
plt.setp(ax1.get_xticklabels(), visible=False)
plt.ylabel("Fraction of amplitude")
plt.axhline(0, c='k', alpha=0.5)
leg = plt.legend(loc='upper right', numpoints=1)
leg.get_frame().set_linewidth(0.0)
leg.get_frame().set_alpha(0.5)
plt.ylim(0, None)
#ax1.set_xticklabels([])
# Add residuals
plt.sca(ax2)
plt.subplot2grid((3,1), (2,0), sharex=ax1)
plt.xlim(params['t_min'], params['t_max'])
res = residuals(ydata, minus, base, plus)
plt.plot(spe_ts, res,
linestyle='', marker='x', c='k', markersize=3)
plt.ylim(-3, 3)
plt.grid(which='both', linestyle='-', alpha=0.1)
plt.axhline(0, c='k', alpha=0.5)
plt.ylabel("Residual")
plt.xlabel("Time since alignment point")
plt.text(#plt.xlim()[1] * 0.5, plt.ylim()[1] * 0.6,
60, 2,
'Mean abs. res.: %0.3f' % np.abs(res).mean())
plt.tight_layout()
plt.gcf().subplots_adjust(0,0,1,1,0,0)
def comparison_plot_2(ydata, minus, base, plus, **params):
params = get_params(params)
res = residuals(ydata, minus, base, plus, **params)
sigmas = get_sigma(minus, base, plus, **params)
# plt.fill_between(spe_ts, minus - params['error_offset'], plus + params['error_offset'],
# alpha=0.5, linewidth=0, step='mid')
plt.fill_between(spe_ts, base - sigmas, base + sigmas,
alpha=0.5, linewidth=0, step='mid')
plt.plot(spe_ts, base, linestyle='steps-mid', label='Model')
plt.plot(spe_ts, ydata, marker='.', linestyle='', markersize=3, c='k', label='Observed')
plt.yscale('log')
plt.ylim(2e-5, 1e-1)
plt.ylabel("Fraction of amplitude")
plt.xlabel('Time (ns)')
for _l in (params['t_min'], params['t_max']):
plt.axvline(_l, ls='dotted', color='black')
plt.twinx()
plt.plot(spe_ts, np.abs(res), color='red')
plt.ylabel('Residual / error')
plt.ylim(0)
plt.xlim(params['t_min'] - 20, params['t_max'] + 50)
res = res[(spe_ts >= params['t_min']) & (spe_ts < params['t_max'])]
chi2 = sum(res**2) / len(spe_ts[(spe_ts >= params['t_min']) & (spe_ts < params['t_max'])])
print('chi2 = %f' % chi2)
cust_params = {
's1_min' : 20,
's1_max' : 30,
'dset' : 'nr',
'tts' : .75,
'fs' : 0.2
}
ydata, n_data_s1s = real_s1_wv(**cust_params)
minus, base, plus = s1_models_error(**cust_params)
res = residuals(ydata, minus, base, plus)
comparison_plot(ydata, minus, base, plus)
print('Average waveform %.1f - %.1f p.e., %d events.' % (cust_params['s1_min'], cust_params['s1_max'], n_data_s1s))
comparison_plot_2(ydata, minus, base, plus, error_offset = 0.0002)
Explanation: Model-data comparison
Plotting
End of explanation
def gof(verbose=True, mode = 'chi2_ndf', **params):
'''
Get the mean residuals for given model parameters.
'''
params = get_params(params)
# Do not allow unphysical values
if params['t1'] < 0 or params['t3'] < 0 or not (0 <= params['fs'] <= 1):
result = float('inf')
else:
ydata, _ = real_s1_wv(**params)
# By default, the errors are set to: [0,1] for pulse model, 1.0 for tts
minus, base, plus = s1_models_error(**params)
res = residuals(ydata, minus, base, plus, **params)
assert len(res) == len(spe_ts)
res = res[(spe_ts >= params['t_min']) & (spe_ts < params['t_max'])]
if mode == 'mean':
result = np.abs(res).mean()
elif mode == 'median':
result = np.median(np.abs(res))
elif mode == 'chi2':
result = np.sum(res**2)
elif mode == 'chi2_ndf':
result = 1/len(res) *np.sum(res**2)
elif mode == 'res':
result = res
else:
raise ValueError('Mode unknown, got this: %s' % mode)
if verbose and (mode != 'res'):
print('gof={gof}, fs={fs}, t1={t1}, t3={t3}, tts={tts}'.format(gof=result, **params))
return result
from copy import deepcopy
def gof_simultaneous(fs_er, fs_nr, verbose=True, mode='mean', **params):
params = get_params(params)
params_er = deepcopy(params)
params_nr = deepcopy(params)
params_er['dset'] = 'er'
params_nr['dset'] = 'nr'
params_er['fs'] = fs_er
params_nr['fs'] = fs_nr
gof_er = gof(verbose=False, mode=mode, **params_er)
gof_nr = gof(verbose=False, mode=mode, **params_nr)
if verbose:
print('gof_er={gof_er}, gof_nr={gof_nr}, fs_er={fs_er}, fs_nr={fs_nr} t1={t1}, t3={t3}, tts={tts}'.format(
gof_er=gof_er, gof_nr=gof_nr, fs_er = params_er['fs'], fs_nr = params_nr['fs'], **params))
return gof_er + gof_nr
gof_simultaneous(fs_er = 0.2, fs_nr = 0.16, mode='chi2', error_offset = 2e-4)
Explanation: Fitting
Residuals function
End of explanation
iterations = 100
n_photons_scan = [int(1e4), int(3e4), int(7e4), int(2e5)]
const_gofs = []
for n_photons in n_photons_scan:
print(n_photons)
const_gofs.append([gof(verbose = False, mode='chi2', n_photons = n_photons) for _ in range(iterations)])
for gofs, n_photons, c in zip(const_gofs, n_photons_scan, ['blue', 'orange', 'green', 'red', 'black']):
plt.hist(gofs, label="%d" % n_photons, histtype='step', range=(0, 500), bins=100, color = c)
plt.axvline(np.mean(gofs), color = c)
plt.legend()
plt.show()
Explanation: Statistics of nphotons and stability of fit
End of explanation
for i in range(10):
plt.plot(gof(mode='res', error_offset = 0.))
for i in range(10):
plt.plot((gof(mode='res', error_offset = 0., error_pct = 0.1))**2)
def sigma_from_params(**params):
params = get_params(params)
# ydata, _ = real_s1_wv(**params)
minus, base, plus = s1_models_error(**params)
sigma = get_sigma(minus, base, plus, **params)
sigma = sigma[(spe_ts >= params['t_min']) & (spe_ts < params['t_max'])]
return sigma
plt.plot(1/sigma_from_params(error_pct = 5e-2, error_offset = 1e-3))
plt.ylim(0)
iterations = 250
n_photons_scan = [int(1e4), int(3e4), int(7e4), int(2e5)]
const_gofs = []
for n_photons in n_photons_scan:
print(n_photons)
const_gofs.append([gof(verbose = False, mode='chi2', n_photons = n_photons,
error_pct = 1e-2, error_offset = 1e-4) for _ in range(iterations)])
for gofs, n_photons, c in zip(const_gofs, n_photons_scan, ['blue', 'orange', 'green', 'red', 'black']):
plt.hist(gofs / np.average(gofs), label="%d" % n_photons, histtype='step', range=(0, 2), bins=200, color = c)
plt.axvline(color = c)
plt.legend()
plt.show()
ydata, n_data_s1s = real_s1_wv()
minus, base, plus = s1_models_error()
# res = residuals(ydata, minus, base, plus)
comparison_plot_2(ydata, minus, base, plus, error_pct = 1e-2, error_offset = 1e-4, t_max= 125)
# plt.ylim(0, 2)
Explanation: Wait, what? The spread of the residuals gets larger with increasing statistics? That does not sound right.
End of explanation
from scipy import optimize
optresult = optimize.minimize(
lambda x: gof_simultaneous(fs_er=x[0], fs_nr=x[1], t3=x[2], tts=x[3], s1_min=30, s1_max = 100,
mode='chi2', error_offset = 1e-4),
[0.2, 0.3, 25., 2.],
bounds=[[.01, 1], [.01, 1], [20, 30], [.1, 5]],
options=dict(maxfev=10000),
method='Powell',
)
print('Done')
# mode = mean, s1_min =30, s1_max = 100: [ 0.20968042, 0.28464569, 24.8145522 , 2.42197182]
# array([ 0.17916349, 0.32752012, 24.00000003, 1.03864494])
# array([ 0.18086791, 0.24823393, 24.23984679, 2.3384889 ]) 462.62128366264312
# array([ 0.19454366, 0.3126068 , 25.57424767, 2.38196603]) 484.92280858647905
x = optresult.x
def check_params(plot_type = 0, **params):
params = get_params(params)
ydata, _ = real_s1_wv(**params)
minus, base, plus = s1_models_error(**params)
if plot_type == 1:
comparison_plot(ydata, minus, base, plus, **params)
elif plot_type == 2:
comparison_plot_2(ydata, minus, base, plus, **params)
elif plot_type == 0:
comparison_plot(ydata, minus, base, plus, **params)
plt.show()
comparison_plot_2(ydata, minus, base, plus, **params)
return
x
optresult
check_params(s1_min = 30, s1_max = 100, dset='er', fs=x[0], t3 = x[2], tts=x[3], plot_type=0, error_offset = 1e-4)
plt.title('ER')
plt.show()
check_params(s1_min = 30, s1_max = 100, dset='nr', fs=x[1], t3 = x[2], tts=x[3], plot_type=0, error_offset = 1e-4)
plt.title('NR')
plt.show()
gofs = [gof_simultaneous(fs_er=x[0], fs_nr=x[1], t3=x[2], tts=x[3], s1_min=30, s1_max = 100,
mode='chi2', error_offset = 1e-4)
for _ in range(20)]
plt.hist(gofs)
Explanation: Fit fit fit
End of explanation
from scipy import optimize
optresult = optimize.minimize(
lambda x: gof(fs=x[0], tts=x[1], s1_min=30, s1_max = 100, error_pct = 1e-2, error_offset = 1e-4, mode='chi2_ndf'),
[0.2, 2],
bounds=[[.01, 1], [.1, 5]],
options=dict(maxfev=1000),
method='Powell',
)
print('Done')
optresult
fit = optresult.x
print(fit)
ydata, _ = real_s1_wv()
minus, base, plus = s1_models_error(fs=fit[0], tts=fit[1], s1_min = 30, s1_max = 100,
error_pct = 1e-2, error_offset = 1e-4)
comparison_plot(ydata, minus, base, plus, error_pct = 1e-2, error_offset = 1e-4)
plt.show()
comparison_plot_2(ydata, minus, base, plus, error_pct = 1e-2, error_offset = 1e-4)
plt.show()
Explanation: Fit singlet fraction and TTS
End of explanation
from scipy import optimize
optresult = optimize.minimize(
lambda x: gof(fs=x[0], t3=x[1], tts=x[2], s1_min = 30, s1_max = 100,
error_pct = 0.5e-2, error_offset = 1e-5),
[0.2, 24, 3],
bounds=[[.01, 1], [20, 30], [.1, 5]],
options=dict(maxfev=1000),
method='Powell',
)
fit = optresult.x
print(fit)
ydata, _ = real_s1_wv()
minus, base, plus = s1_models_error(fs=fit[0], t3=fit[1], tts=fit[2], error_pct = 1e-2, error_offset = 1e-4)
comparison_plot(ydata, minus, base, plus, error_pct = 0.5e-2, error_offset = 1e-5)
plt.show()
comparison_plot_2(ydata, minus, base, plus, error_pct = 0.5e-2, error_offset = 1e-5)
plt.show()
def gof_v_parameter(parameter, variation_range, num, **params):
params_to_try = np.linspace(*variation_range, num=num)
gofs = []
for param_value in params_to_try:
params[parameter] = param_value
gofs.append(gof(**params))
return params_to_try, np.array(gofs)
def gof_v_2_paramters(parameter1, parameter2, variation_range1, variation_range2, num1, num2, **params):
import time
start = time.time()
params_to_try1 = np.linspace(*variation_range1, num=num1)
params_to_try2 = np.linspace(*variation_range2, num=num2)
gvd = []
for par1 in params_to_try1:
for par2 in params_to_try2:
params[parameter1] = par1
params[parameter2] = par2
gof_value = gof(**params)
gvd.append([par1, par2, gof_value])
stop = time.time()
print('Computation took %d seconds (%.1f s/it)' % ((stop - start), (stop - start) / len(gvd)))
return np.array(gvd)
nx = 20
ny = 20
ding = gof_v_2_paramters('fs', 't3', (0.16, 0.24), (23., 27.), nx, ny, tts=fit[2],
error_pct = 1e-2, error_offset = 1e-4, verbose=False)
plt.scatter(ding[:,0], ding[:,1], c=ding[:, 2])
plt.colorbar()
x = np.reshape(ding[:, 0], (nx, ny))
y = np.reshape(ding[:, 1], (nx, ny))
z = np.reshape(ding[:, 2], (nx, ny))
plt.pcolormesh(x, y, z/ np.min(z))
plt.colorbar()
# Redraw the same GOF grid through an explicit Axes, with a diverging colormap
edge_x = np.unique(ding[:, 0])
edge_y = np.unique(ding[:, 1])
plt.figure()
ax = plt.gca()
pc = ax.pcolormesh(edge_x, edge_y, z.T / np.min(z), cmap='RdBu')
plt.colorbar(pc)
fss, gofs = gof_v_parameter('fs', (0.14, 0.24), 20, fs=fit[0], t3=fit[1], tts=fit[2], error_pct = 1e-2, error_offset = 1e-4)
plt.plot(fss, gofs, marker='.', markersize=5)
optresult_nr = optimize.minimize(
lambda x: gof(fs=x[0], t3=x[1], tts=x[2], dset = 'nr', error_pct = 1e-2, error_offset = 1e-4),
[0.2, 24, 3],
bounds=[[.01, 1], [20, 30], [.1, 5]],
options=dict(maxfev=1000),
method='Powell',
)
fit = optresult_nr.x
print(fit)
ydata, _ = real_s1_wv(dset='nr')
minus, base, plus = s1_models_error(fs=fit[0], t3=fit[1], tts=fit[2], dset='nr', error_pct = 1e-2, error_offset = 1e-4)
comparison_plot(ydata, minus, base, plus, error_pct = 1e-2, error_offset = 1e-4)
plt.show()
comparison_plot_2(ydata, minus, base, plus, error_pct = 1e-2, error_offset = 1e-4)
for _l in (-15, 125):
plt.axvline(_l)
plt.xlim(-50, 200)
plt.show()
plt.hist(xams_data['er']['s1'], bins=100, histtype='step', range=(50,100))
plt.hist(xams_data['nr']['s1'], bins=100, histtype='step', range=(50,100))
plt.show()
Explanation: GOF uncertainty
Need higher stats?
Fit three parameters
End of explanation
from scipy import optimize
optresult = optimize.minimize(
lambda x: gof(fs=x[0], t1=x[1], t3=x[2], tts=x[3], s1_min=30, s1_max = 100, dset='er'),
[0.2, 3.1, 24, 3],
bounds=[[.01, 1], [.1, 5], [20, 30], [.1, 5]],
options=dict(maxfev=1000),
method='Powell',
)
# fit = optresult.x
# ydata, _ = real_s1_wv()
# minus, base, plus = s1_models_error(fs=fit[0], t1=fit[1], t3=fit[2], tts=fit[3])
# comparison_plot(ydata, minus, base, plus)
fit = optresult.x
print(fit)
ydata, _ = real_s1_wv()
minus, base, plus = s1_models_error(fs=fit[0], t1=fit[1], t3=fit[2], tts=fit[3], s1_min=30, s1_max = 100)
comparison_plot(ydata, minus, base, plus)
plt.show()
comparison_plot_2(ydata, minus, base, plus)
for _l in (-20, 100):
plt.axvline(_l)
plt.xlim(-50, 200)
plt.show()
Explanation: Fit four parameters
ER
End of explanation
from scipy import optimize
optresult = optimize.minimize(
lambda x: gof(fs=x[0], t1=x[1], t3=x[2], tts=x[3], s1_min=30, s1_max = 100, dset='nr'),
[0.2, 3.1, 24, 3],
bounds=[[.01, 1], [.1, 5], [20, 30], [.1, 5]],
options=dict(maxfev=1000),
method='Powell',
)
fit = optresult.x
print(fit)
ydata, _ = real_s1_wv(dset='nr', s1_min=30, s1_max=100)
minus, base, plus = s1_models_error(fs=fit[0], t1=fit[1], t3=fit[2], tts=fit[3], s1_min=30, s1_max = 100, dset='nr')
comparison_plot(ydata, minus, base, plus)
plt.show()
comparison_plot_2(ydata, minus, base, plus)
for _l in (-20, 100):
plt.axvline(_l)
plt.xlim(-50, 200)
plt.show()
Explanation: The fit is pushing the singlet lifetime to very low values... There is some degeneracy here, and also some mis-modeling, it seems. The sample at 0 is always under-estimated. Why? Maybe because the tts is actually quite low but modeled here as large. The effects may not be symmetric: there are many things causing a delay, but not a negative delay.
NR
End of explanation |
91 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h2>3. Approximation of the roots of 230 x^4 + 18 x^3 + 9 x^2 - 221 x - 9</h2>
From graphical methods we can see that there are two real roots, in the intervals (-0.5, 0) and (0.5, 1.5)
<h3>Bisection method
Step1: <h3>False position (regula falsi) method
Step2: <h3>Newton-Raphson method
Step3: <h3>Método de secante | Python Code:
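A minimal sketch (assuming numpy and matplotlib are available) of the graphical inspection behind those intervals - the curve crosses zero once in (-0.5, 0) and once in (0.5, 1.5):
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return 230*x**4 + 18*x**3 + 9*x**2 - 221*x - 9

x = np.linspace(-1.5, 1.5, 400)
plt.plot(x, f(x))
plt.axhline(0, color='black', linewidth=0.5)
plt.show()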
import math
def funcion(x):
return (230*math.pow(x,4))+(18*math.pow(x,3))+(9*math.pow(x,2))-(221*x)-9
def biseccion(intA, intB, errorA, noMaxIter):
if(funcion(intA)*funcion(intB)<0):
noIter = 0
errorTmp = 1
intTmp = 0
oldInt = intA
while(noIter<noMaxIter and errorTmp>errorA and funcion(intTmp)!=0):
intTmp = (intB+intA)/2
if(funcion(intA)*funcion(intTmp)<0):
intB = intTmp
else:
intA = intTmp
noIter+=1
errorTmp=abs((intTmp-oldInt)/intTmp)*100
oldInt = intTmp
#print('Error: ',errorTmp)
print('La raíz es: ',intTmp)
print('F(raiz) es:' ,funcion(intTmp))
print('Error: ',errorTmp)
print('No. de iteraciones realizadas: ',noIter)
else:
print('En el intervalo dado la función no presenta cambio de signo')
print('No hay raices que encontrar')
print('------------------------------------')
print('Valores inicial : -0.5 y 0')
biseccion(-0.5,0,math.pow(10,-6),1000)
print('------------------------------------')
print('Valores iniciales : 0.5,1.5')
biseccion(0.5,1.5,math.pow(10,-6),1000)
print('------------------------------------')
Explanation: <h2>3. Approximation of the roots of 230 x^4 + 18 x^3 + 9 x^2 - 221 x - 9</h2>
From graphical methods we can see that there are two real roots, in the intervals (-0.5, 0) and (0.5, 1.5)
<h3>Bisection method:</h3>
End of explanation
import math
def funcion(x):
return (230*math.pow(x,4))+(18*math.pow(x,3))+(9*math.pow(x,2))-(221*x)-9
def reglaFalsa(intA, intB, errorA, noMaxIter):
if(funcion(intA)*funcion(intB)<0):
noIter = 0
errorTmp = 1
intTmp = 0
oldInt = intA
while(noIter<noMaxIter and errorTmp>errorA and funcion(intTmp)!=0):
intTmp = intB-((funcion(intB)*(intA+intB))/(funcion(intA)-funcion(intB)))
if(funcion(intA)*funcion(intTmp)<0):
intB = intTmp
else:
intA = intTmp
noIter+=1
errorTmp=abs((intTmp-oldInt)/intTmp)*100
oldInt = intTmp
#print('Error: ',errorTmp)
print('La raíz es: ',intTmp)
print('F(raiz) es:' ,funcion(intTmp))
print('Error: ',errorTmp)
print('No. de iteraciones realizadas: ',noIter)
else:
print('En el intervalo dado la función no presenta cambio de signo')
print('No hay raices que encontrar')
print('------------------------------------')
print('Valores inicial : -0.5 y 0')
reglaFalsa(-0.5,0,math.pow(10,-6),1000)
print('------------------------------------')
print('Valores iniciales : 0.5,1.5')
reglaFalsa(0.5,1.5,math.pow(10,-6),1000)
print('------------------------------------')
print('Valores inicial : -0.2 y 0')
reglaFalsa(-0.2,0,math.pow(10,-6),1000)
print('------------------------------------')
print('Valores iniciales : 0.8,1')
reglaFalsa(0.8,1,math.pow(10,-6),1000)
print('------------------------------------')
Explanation: <h3>False position (regula falsi) method:</h3>
<p>Due to the behavior of the function near the roots (it decreases and increases too rapidly), it was not possible to use this method to reach an approximation with 10^-6 precision, even when we shrank the initial intervals based on the results of the bisection method (which do satisfy that precision).</p>
End of explanation
import math
def funcion(x):
return (230*math.pow(x,4))+(18*math.pow(x,3))+(9*math.pow(x,2))-(221*x)-9
def funcionDeriv(x):
return (920*math.pow(x,3))+(54*math.pow(x,2))+(18*x)-221
def newtonRaphson(val, errorA, noMaxIter):
noIter = 0
errorTmp = 1
intTmp = 0
while(noIter<noMaxIter and errorTmp>errorA and funcion(intTmp)!=0):
valTmp = val-((funcion(val))/(funcionDeriv(val)))
errorTmp=abs((valTmp-val)/valTmp)*100
val = valTmp
noIter+=1
print('La raíz es: ',valTmp)
print('F(raiz) es:' ,funcion(valTmp))
print('Error: ',errorTmp)
print('No. de iteraciones realizadas: ',noIter)
print('------------------------------------')
print('Valores inicial : -0.5')
newtonRaphson(-0.5,math.pow(10,-6),1000)
print('------------------------------------')
print('Valores iniciales : 1.5')
newtonRaphson(1.5,math.pow(10,-6),1000)
print('------------------------------------')
Explanation: <h3>Newton-Raphson method:</h3>
End of explanation
import math
def funcion(x):
return (230*math.pow(x,4))+(18*math.pow(x,3))+(9*math.pow(x,2))-(221*x)-9
def secante(primerVal, segundoVal, errorA, noMaxIter):
noIter = 0
errorTmp = 1
intTmp = 0
while(noIter<noMaxIter and errorTmp>errorA and funcion(segundoVal)!=0):
valTmp = segundoVal-((funcion(segundoVal)*(primerVal-segundoVal))/(funcion(primerVal)-funcion(segundoVal)))
primerVal = segundoVal
segundoVal = valTmp
errorTmp=abs((segundoVal-primerVal)/segundoVal)*100
#print('Noiter: ',noIter, ' primVal:', primerVal,' segunVal:',segundoVal,' error:',errorTmp)
noIter+=1
print('La raíz es: ',valTmp)
print('F(raiz) es:' ,funcion(valTmp))
print('Error: ',errorTmp)
print('No. de iteraciones realizadas: ',noIter)
print('------------------------------------')
print('Valores inicial : -0.5 y 0')
secante(-0.5,0,math.pow(10,-6),1000)
print('------------------------------------')
print('Valores iniciales : 0.5,1.5')
secante(0.5,1.5,math.pow(10,-6),1000)
print('------------------------------------')
Explanation: <h3>Secant method:</h3>
End of explanation |
92 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Sparse weights using structural pruning
Step2: Download and normalize image data from the MNIST dataset
Step3: Define structural pruning parameters
Define parameters for pruning and specify the type of structural pruning. Set the parameters for pruning to (2, 4).
These settings mean that in a block of four elements, at least two with the lowest magnitude are set to zero.
You don't have to set the pruning_schedule parameter. By default, the pruning mask is defined at the first step and it is not updated during the training.
Step4: Define parameters for random pruning with the target sparsity of 50%.
Step5: Define the model architecture and specify which layers to prune. Structural pruning is applied based on the layers of the model you select.
In the example below, we prune only some of the layers. We prune the second Conv2D layer and the first Dense layer.
Notice that the first Conv2D layer cannot be pruned structurally. To be pruned structurally, it should have more than one input channel. Instead, we prune the first Conv2D layer with random pruning.
Step6: Train and evaluate the model.
Step7: Remove the pruning wrapper so that it is not included in the model when you convert it to TensorFlow Lite format.
Step8: Convert model to tflite format
Step9: Visualize and check weights
Now visualize the structure of weights in the Dense layer pruned with 2 by 4 sparsity. Extract the weights from the tflite file.
Step10: To verify that we selected the correct layer that has been pruned, print the shape of the weight tensor.
Step11: Now we visualize the structure for a small subset of the weight tensor. The structure of the weight tensor is sparse in the last dimension, using the (2,4) pattern
Step12: Define the auxiliary function to draw separation lines to see the structure clearly.
Step13: Now visualize the subset of the weight tensor.
Step14: Visualize weights for the Conv2D layer. The structural sparsity is applied in the last channel, similar to the Dense layer. Only the second Conv2D layer is structurally pruned as pointed out above.
Step15: Similar to the weights of Dense layer, the last dimension of the kernel has a (2, 4) structure.
Step16: Let's see how those randomly pruned weights look. We extract them and display a subset of the weight tensor.
Step17: The TensorFlow Model Optimization Toolkit includes a Python script that can be used to check which layers of the model in the given tflite file have structurally pruned weights | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
! pip install -q tensorflow
! pip install -q tensorflow-model-optimization
! pip install -q matplotlib
import tensorflow as tf
from tensorflow import keras
import tensorflow_model_optimization as tfmot
prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude
Explanation: Sparse weights using structural pruning
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/pruning/pruning_with_sparsity_2_by_4"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/pruning/pruning_with_sparsity_2_by_4.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/pruning/pruning_with_sparsity_2_by_4.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/model-optimization/tensorflow_model_optimization/g3doc/guide/pruning/pruning_with_sparsity_2_by_4.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Structurally pruning the weights of your model to make it sparse in a specific pattern can accelerate model inference time with appropriate hardware support.
This tutorial shows you how to:
* Define and train a model on the mnist dataset with a specific structural sparsity
* Convert the pruned model to tflite format
* Visualize structure of the pruned weights
For a general overview of the pruning technique for the model optimization, see the pruning overview. For tutorial on general weight pruning, see Pruning in Keras.
Structural pruning of weights
Structural pruning systematically zeroes out model weights at the beginning of the training process. You apply this pruning techniques to regular blocks of weights to speed up inference on supporting HWs, for example: grouping weights in the model by blocks of four and zeroing out two of those weights in each block, known as a 2 by 4 reduction. This technique applies only to the last dimension of the weight tensor for the model that is converted by TensorFlow Lite. For example, Conv2D layer weights in TensorFlow Lite have the structure [channel_out, height, width, channel_in] and Dense layer weights have the structure [channel_out, channel_in]. The sparsity pattern is applied to the weights in the last dimension: channel_in.
Compared to random sparsity, structured sparsity generally has lower accuracy due to the restrictive structure; however, it can reduce inference time significantly on supported hardware.
Pruning can be applied to a model together with other model compression techniques for better compression rate. See quantization and clustering examples in collaborative optimization technique for more details.
Setup
Prepare your development environment and data.
End of explanation
# Load MNIST dataset.
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 and 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
Explanation: Download and normalize image data from the MNIST dataset
End of explanation
pruning_params_2_by_4 = {
'sparsity_m_by_n': (2, 4),
}
Explanation: Define structural pruning parameters
Define parameters for pruning and specify the type of structural pruning. Set the parameters for pruning to (2, 4).
These settings mean that in a block of four elements, at least two with the lowest magnitude are set to zero.
You don't have to set the pruning_schedule parameter. By default, the pruning mask is defined at the first step and it is not updated during the training.
End of explanation
pruning_params_sparsity_0_5 = {
'pruning_schedule': tfmot.sparsity.keras.ConstantSparsity(target_sparsity=0.5,
begin_step=0,
frequency=100)
}
Explanation: Define parameters for random pruning with the target sparsity of 50%.
End of explanation
model = keras.Sequential([
prune_low_magnitude(
keras.layers.Conv2D(
32, 5, padding='same', activation='relu',
input_shape=(28, 28, 1),
name="pruning_sparsity_0_5"),
**pruning_params_sparsity_0_5),
keras.layers.MaxPooling2D((2, 2), (2, 2), padding='same'),
prune_low_magnitude(
keras.layers.Conv2D(
64, 5, padding='same',
name="structural_pruning"),
**pruning_params_2_by_4),
keras.layers.BatchNormalization(),
keras.layers.ReLU(),
keras.layers.MaxPooling2D((2, 2), (2, 2), padding='same'),
keras.layers.Flatten(),
prune_low_magnitude(
keras.layers.Dense(
1024, activation='relu',
name="structural_pruning_dense"),
**pruning_params_2_by_4),
keras.layers.Dropout(0.4),
keras.layers.Dense(10)
])
model.compile(optimizer='adam',
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
Explanation: Define the model architecture and specify which layers to prune. Structural pruning is applied based on the layers of the model you select.
In the example below, we prune only some of the layers. We prune the second Conv2D layer and the first Dense layer.
Notice that the first Conv2D layer cannot be pruned structurally. To be pruned structurally, a layer must have more than one input channel. Instead, we prune the first Conv2D layer with random pruning.
End of explanation
batch_size = 128
epochs = 2
model.fit(
train_images,
train_labels,
batch_size=batch_size,
epochs=epochs,
verbose=0,
callbacks=tfmot.sparsity.keras.UpdatePruningStep(),
validation_split=0.1)
_, pruned_model_accuracy = model.evaluate(test_images, test_labels, verbose=0)
print('Pruned test accuracy:', pruned_model_accuracy)
Explanation: Train and evaluate the model.
End of explanation
model = tfmot.sparsity.keras.strip_pruning(model)
Explanation: Remove the pruning wrapper so that it is not included in the model when you convert it to TensorFlow Lite format.
End of explanation
import tempfile
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
_, tflite_file = tempfile.mkstemp('.tflite')
print('Saved converted pruned model to:', tflite_file)
with open(tflite_file, 'wb') as f:
f.write(tflite_model)
Explanation: Convert model to tflite format
End of explanation
# Load tflite file with the created pruned model
interpreter = tf.lite.Interpreter(model_path=tflite_file)
interpreter.allocate_tensors()
details = interpreter.get_tensor_details()
# Weights of the dense layer that has been pruned.
tensor_name = 'structural_pruning_dense/MatMul'
detail = [x for x in details if tensor_name in x["name"]]
# We need the first layer.
tensor_data = interpreter.tensor(detail[0]["index"])()
Explanation: Visualize and check weights
Now visualize the structure of weights in the Dense layer pruned with 2 by 4 sparsity. Extract the weights from the tflite file.
End of explanation
print(f"Shape of Dense layer is {tensor_data.shape}")
Explanation: To verify that we selected the correct layer that has been pruned, print the shape of the weight tensor.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
# The value 24 is chosen for convenience.
width = height = 24
subset_values_to_display = tensor_data[0:height, 0:width]
val_ones = np.ones([height, width])
val_zeros = np.zeros([height, width])
subset_values_to_display = np.where(abs(subset_values_to_display) > 0, val_ones, val_zeros)
Explanation: Now we visualize the structure for a small subset of the weight tensor. The structure of the weight tensor is sparse in the last dimension, using the (2,4) pattern: two elements out of four are zeros. To make the visualization more clear, we replace all non-zero values with ones.
End of explanation
def plot_separation_lines(height, width):
block_size = [1, 4]
# Add separation lines to the figure.
num_hlines = int((height - 1) / block_size[0])
num_vlines = int((width - 1) / block_size[1])
line_y_pos = [y * block_size[0] for y in range(1, num_hlines + 1)]
line_x_pos = [x * block_size[1] for x in range(1, num_vlines + 1)]
for y_pos in line_y_pos:
plt.plot([-0.5, width], [y_pos - 0.5 , y_pos - 0.5], color='w')
for x_pos in line_x_pos:
plt.plot([x_pos - 0.5, x_pos - 0.5], [-0.5, height], color='w')
Explanation: Define the auxiliary function to draw separation lines to see the structure clearly.
End of explanation
plot_separation_lines(height, width)
plt.axis('off')
plt.imshow(subset_values_to_display)
plt.colorbar()
plt.title("Structural pruning for Dense layer")
plt.show()
Explanation: Now visualize the subset of the weight tensor.
End of explanation
# Get weights of the convolutional layer that has been pruned with 2 by 4 sparsity.
tensor_name = 'structural_pruning/Conv2D'
detail = [x for x in details if tensor_name in x["name"]]
tensor_data = interpreter.tensor(detail[1]["index"])()
print(f"Shape of the weight tensor is {tensor_data.shape}")
Explanation: Visualize weights for the Conv2D layer. The structural sparsity is applied in the last channel, similar to the Dense layer. Only the second Conv2D layer is structurally pruned as pointed out above.
End of explanation
weights_to_display = tf.reshape(tensor_data, [tf.reduce_prod(tensor_data.shape[:-1]), -1])
weights_to_display = weights_to_display[0:width, 0:height]
val_ones = np.ones([height, width])
val_zeros = np.zeros([height, width])
subset_values_to_display = np.where(abs(weights_to_display) > 1e-9, val_ones, val_zeros)
plot_separation_lines(height, width)
plt.axis('off')
plt.imshow(subset_values_to_display)
plt.colorbar()
plt.title("Structurally pruned weights for Conv2D layer")
plt.show()
Explanation: Similar to the weights of the Dense layer, the last dimension of the kernel has a (2, 4) structure.
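As a quick sanity check (a sketch that reuses the subset_values_to_display array computed above), you can count the zeros in every block of four along the last dimension:
python
blocks = subset_values_to_display.reshape(-1, 4)
zeros_per_block = (blocks == 0).sum(axis=1)
print("Every block of four has at least two zeros:", bool((zeros_per_block >= 2).all()))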
End of explanation
# Get weights of the convolutional layer that has been pruned with random pruning.
tensor_name = 'pruning_sparsity_0_5/Conv2D'
detail = [x for x in details if tensor_name in x["name"]]
tensor_data = interpreter.tensor(detail[0]["index"])()
print(f"Shape of the weight tensor is {tensor_data.shape}")
weights_to_display = tf.reshape(tensor_data, [tensor_data.shape[0],tf.reduce_prod(tensor_data.shape[1:])])
weights_to_display = weights_to_display[0:width, 0:height]
val_ones = np.ones([height, width])
val_zeros = np.zeros([height, width])
subset_values_to_display = np.where(abs(weights_to_display) > 0, val_ones, val_zeros)
plot_separation_lines(height, width)
plt.axis('off')
plt.imshow(subset_values_to_display)
plt.colorbar()
plt.title("Unstructured pruned weights for Conv2D layer")
plt.show()
Explanation: Let's see how those randomly pruned weights look. We extract them and display a subset of the weight tensor.
End of explanation
! python3 ./tensorflow_model_optimization/python/core/sparsity/keras/tools/check_sparsity_m_by_n.py --model_tflite={tflite_file} --m_by_n=2,4
Explanation: The TensorFlow Model Optimization Toolkit includes a Python script that can be used to check which layers in the model from the given tflite file have structurally pruned weights: check_sparsity_m_by_n.py. The following command demonstrates how to use this tool to check a specific model for 2 by 4 sparsity.
End of explanation |
93 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
[PUBLIC] CLBlast vs ARM Compute Library on representative matrix sizes
Overview
Data [for developers]
Code [for developers]
Table
Plot
<a id="data"></a>
Get the experimental data
Step1: NB
Step2: Scientific
If some of the scientific packages are missing, please install them using
Step3: Collective Knowledge
If CK is not installed, please install it using
Step4: Define helper functions
Step5: Plot experimental data
Step6: Access experimental data
Step7: Print
Step8: <a id="table"></a>
Table
Step9: <a id="plot"></a>
Plot | Python Code:
repo_uoa = 'explore-matrix-size-gemm-libs-dvdt-prof-firefly-rk3399-001'
Explanation: [PUBLIC] CLBlast vs ARM Compute Library on representative matrix sizes
Overview
Data [for developers]
Code [for developers]
Table
Plot
<a id="data"></a>
Get the experimental data
End of explanation
import os
import sys
import json
import re
Explanation: NB: Please ignore this section if you are not interested in re-running or modifying this notebook.
The experimental data was collected on the experimental platform and archived as follows:
$ cd `ck find ck-math:script:<...>`
$ python <...>.py
$ ck zip local:experiment:* --archive_name=<...>.zip
It can be downloaded and extracted as follows:
$ wget <...>.zip
$ ck add repo:<repo_uoa> --zip=<....>.zip --quiet
<a id="code"></a>
Data wrangling code
NB: Please ignore this section if you are not interested in re-running or modifying this notebook.
Includes
Standard
End of explanation
import IPython as ip
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib as mp
print ('IPython version: %s' % ip.__version__)
print ('Pandas version: %s' % pd.__version__)
print ('NumPy version: %s' % np.__version__)
print ('Seaborn version: %s' % sns.__version__) # apt install python-tk
print ('Matplotlib version: %s' % mp.__version__)
import matplotlib.pyplot as plt
from matplotlib import cm
%matplotlib inline
from IPython.display import Image, display
def display_in_full(df):
pd.options.display.max_columns = len(df.columns)
pd.options.display.max_rows = len(df.index)
display(df)
Explanation: Scientific
If some of the scientific packages are missing, please install them using:
```
pip install jupyter pandas numpy matplotlib
```
End of explanation
import ck.kernel as ck
print ('CK version: %s' % ck.__version__)
Explanation: Collective Knowledge
If CK is not installed, please install it using:
```
pip install ck
```
End of explanation
# client: 'acl-sgemm-opencl-example' or 'clblast-tune'
def get_mnk(characteristics, client):
# dim: 'm', 'n', 'k'
def get_dim_int(characteristics, client, dim):
if client == 'clblast-tune':
dim_str = characteristics['run'][dim][0]
dim_int = np.int64(dim_str)
else:
dim_str = characteristics['run'][dim]
dim_int = np.int64(dim_str)
return dim_int
m = get_dim_int(characteristics, client, 'm')
n = get_dim_int(characteristics, client, 'n')
k = get_dim_int(characteristics, client, 'k')
return ('(%d, %d, %d)' % (m, n, k))
def get_GFLOPS(characteristics, client):
if client == 'acl-sgemm-opencl-example':
GFLOPS_str = characteristics['run']['GFLOPS_1']
else:
GFLOPS_str = characteristics['run']['GFLOPS_1'][0]
GFLOPS = np.float(GFLOPS_str)
return GFLOPS
def get_TimeMS(characteristics, client):
    time_execution = characteristics['run'].get('ms_1')
    return time_execution

# Helper to compute the elapsed wall-clock time (in ms) between the 'start'
# and 'end' timestamps of a dvdt-prof profiling record.
from datetime import datetime

def get_elapsed_ms(profiling):
    start = datetime.strptime(profiling['timestamp']['start'], '%Y-%m-%dT%H:%M:%S.%f')
    end = datetime.strptime(profiling['timestamp']['end'], '%Y-%m-%dT%H:%M:%S.%f')
    elapsed = (end.timestamp() * 1000) - (start.timestamp() * 1000)
    return elapsed
Explanation: Define helper functions
End of explanation
default_colormap = cm.autumn
default_figsize = [20, 12]
default_dpi = 200
default_fontsize = 20
default_legend_fontsize = 'medium'
if mp.__version__[0]=='2': mp.style.use('classic')
mp.rcParams['figure.figsize'] = default_figsize
mp.rcParams['figure.dpi'] = default_dpi
mp.rcParams['font.size'] = default_fontsize
mp.rcParams['legend.fontsize'] = default_legend_fontsize
def plot(df_mean, df_std, rot=90, patch_fontsize=default_fontsize):
ax = df_mean.plot(yerr=df_std,
kind='bar', ylim=[0, 20], rot=rot, width=0.9, grid=True, legend=True,
figsize=default_figsize, colormap=default_colormap, fontsize=default_fontsize)
ax.set_title('ARM Compute Library vs CLBlast (dv/dt)', fontsize=default_fontsize)
ax.set_ylabel('SGEMM GFLOPS', fontsize=default_fontsize)
ax.legend(loc='upper right')
for patch in ax.patches:
text = '{0:2.1f}'.format(patch.get_height())
ax.annotate(text, (patch.get_x()*1.00, patch.get_height()*1.01), fontsize=patch_fontsize)
Explanation: Plot experimental data
End of explanation
def get_experimental_results(repo_uoa='explore-matrix-size-gemm-libs-dvdt-prof-firefly-rk3399', tags='explore-matrix-size-libs-sgemm, acl-sgemm-opencl-example'):
module_uoa = 'experiment'
r = ck.access({'action':'search', 'repo_uoa':repo_uoa, 'module_uoa':module_uoa, 'tags':tags})
if r['return']>0:
print ("Error: %s" % r['error'])
exit(1)
experiments = r['lst']
dfs = []
for experiment in experiments:
data_uoa = experiment['data_uoa']
r = ck.access({'action':'list_points', 'repo_uoa':repo_uoa, 'module_uoa':module_uoa, 'data_uoa':data_uoa})
if r['return']>0:
print ("Error: %s" % r['error'])
exit(1)
for point in r['points']:
with open(os.path.join(r['path'], 'ckp-%s.0001.json' % point)) as point_file:
point_data_raw = json.load(point_file)
characteristics_list = point_data_raw['characteristics_list']
num_repetitions = len(characteristics_list)
client = data_uoa[len('explore-matrix-size-gemm-libs-'):]
# Obtain column data.
data = [
{
'client': client,
'(m, n, k)': get_mnk(characteristics, client),
'GFLOPS': get_GFLOPS(characteristics, client),
'dvdt_prof_info': characteristics['run'].get('dvdt_prof',[]),
'time (ms)' : get_TimeMS(characteristics,client),
'repetition_id': repetition_id
}
for (characteristics, repetition_id) in zip(characteristics_list, range(num_repetitions))
]
#Construct a DataFrame.
df = pd.DataFrame(data)
# Set columns and index names.
df.columns.name = 'characteristics'
df.index.name = 'index'
df = df.set_index(['client', '(m, n, k)', 'repetition_id','GFLOPS','time (ms)'])
# Append to the list of similarly constructed DataFrames.
dfs.append(df)
# Concatenate all constructed DataFrames (i.e. stack on top of each other).
result = pd.concat(dfs).unstack('client').swaplevel(axis=1)
return result.sort_index(level=result.index.names)
Explanation: Access experimental data
End of explanation
df = get_experimental_results(repo_uoa=repo_uoa)
display_in_full(df)
df_min = df \
.ix[df.groupby(level=df.index.names[:-1])['time (ms)'].idxmin()] \
.reset_index('repetition_id', drop=True)
df_min
batch_size = 1
df_model_lib = df_min[['dvdt_prof_info']] \
.reset_index('platform', drop=True) \
.reorder_levels([ 'batch_size', 'model', 'lib']) \
.loc[batch_size] \
.sortlevel()
df_model_lib
models = df_model_lib.index.levels[0]
libs = df_model_lib.index.levels[1]
def concat(model, lib):
return '%s:%s' % (model, lib)
def analyse_model_lib(df_model_lib, model, lib, min_pc=1.0):
trace = pw.index_calls(df_model_lib.loc[model].loc[lib]['dvdt_prof_info'])
# All kernel enqueues.
df_kernel_enqueues = pw.df_kernel_enqueues(pw.filter_calls(trace, ['clEnqueueNDRangeKernel']), unit='ms')
# Kernel enqueues that take at least 'min_pc' % of the execution time.
df_kernel_enqueues_cum_time_num = pw.df_kernel_enqueues_cumulative_time_num(df_kernel_enqueues, unit)
df_kernel_enqueues_cum_time_num.columns.name = concat(model, lib)
return df_kernel_enqueues_cum_time_num[df_kernel_enqueues_cum_time_num['** Execution time (%) **'] > min_pc]
def analyse_xgemm_kernel(df_model_lib, model, lib, kernel):
# Get trace for lib and model.
trace = pw.index_calls(df_model_lib.loc[model].loc[lib]['dvdt_prof_info'])
# All calls to set kernel args.
set_args = pw.filter_calls(trace, ['clSetKernelArg'])
# All kernel enqueues.
nqs = pw.filter_calls(trace, ['clEnqueueNDRangeKernel'])
# Construct a DataFrame with info about kernel enqueues.
df = pw.df_kernel_enqueues(nqs, unit='ms').swaplevel().ix[kernel]
df = df[['p3 - p2 (ms)', 'gws2']]
# As gws2 is always 1, we can use it to count the number of enqueues.
df.columns = [ '** Execution time (ms) **', '** Number of enqueues **' ]
df.columns.name = kernel
# Augment the DataFrame with columns for the (M, N, K) triples.
df['kSizeM'] = 'M'; df['bSizeM'] = 'MM'
df['kSizeN'] = 'N'; df['bSizeN'] = 'NN'
df['kSizeK'] = 'K'; df['bSizeK'] = 'KK'
# Initialise buckets.
buckets = init_buckets()
# Augment the DataFrame with the actual (M, N, K) triples.
mnk_triples = []; mmnnkk_triples = []
for nq in nqs:
if nq['name'] == kernel:
prof = nq['profiling']
(M, N, K) = ('M', 'N', 'K'); (MM, NN, KK) = ('MM', 'NN', 'KK')
for set_arg in set_args:
if (set_arg['call_index'] > nq['call_index']): break
if (set_arg['kernel'] != nq['kernel']): continue
arg_value = pc.hex_str_as_int(set_arg['arg_value'])
if (set_arg['arg_index'] == 0): M = arg_value; MM = arg_value
if (set_arg['arg_index'] == 1): N = arg_value; NN = arg_value
if (set_arg['arg_index'] == 2): K = arg_value; KK = arg_value
mnk_triples.append((M, N, K))
mmnnkk_triples.append(get_nearest_bucket(buckets, (M, N, K)))
df[['kSizeM', 'kSizeN', 'kSizeK']] = mnk_triples
df[['bSizeM', 'bSizeN', 'bSizeK']] = mmnnkk_triples
# Calculate Gflops and GFLOPS (Gflops/s).
df['** Gflops **'] = 2*df['kSizeM']*df['kSizeN']*df['kSizeK']*1e-9
df['** GFLOPS **'] = df['** Gflops **'] / (df['** Execution time (ms) **']*1e-3)
return df
model_lib_kernel_analysis = {}
for model in models:
for lib in libs:
title = concat(model, lib)
print('== %s ==' % title)
try:
analysis = model_lib_analysis[title]
except:
print(' ... missing ...'); print(''); continue
for kernel in analysis.index:
if kernel.lower().find('xgemm') == -1: continue
analysis_xgemm = analyse_xgemm_kernel(df_model_lib, model, lib, kernel)
pd.options.display.max_columns = analysis_xgemm.columns.size
pd.options.display.max_rows = analysis_xgemm.index.size
display(analysis_xgemm)
analysis_xgemm_stats = analysis_xgemm.describe()
pd.options.display.max_columns = analysis_xgemm_stats.columns.size
pd.options.display.max_rows = analysis_xgemm_stats.index.size
display(analysis_xgemm_stats)
model_lib_kernel_analysis[concat(title, kernel)] = analysis_xgemm
print('')
print('')
Explanation: Print
End of explanation
df = get_experimental_results(repo_uoa=repo_uoa)
display_in_full(df)
Explanation: <a id="table"></a>
Table
End of explanation
df_mean = df.groupby(level=df.index.names[:-1]).mean()
df_std = df.groupby(level=df.index.names[:-1]).std()
plot(df_mean, df_std)
Explanation: <a id="plot"></a>
Plot
End of explanation |
94 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The following code can't run in Azure, because of a 403 error
boston = load_boston()
california = fetch_california_housing()
On the local machine, start from scratch instead: download boston and california, save them as csv, and then upload them to Azure
dataset = pd.DataFrame(boston.data, columns=boston.feature_names)
dataset['target'] = boston.target
dataset.to_csv('boston.csv')
Step1: The following code plots the normal (Gaussian) distribution
Step2: Two ways to compute the mean
SSE ---- the sum of squared errors
In the histogram, the first bar shows that about 350 samples have a squared error between 0 and 100
Step3: Standardization
After standardization the values have mean 0 and variance 1
Step4: This function computes the covariance
Step5: This function computes the correlation; the only difference is that the inputs are standardized first
Step6: Let's graph what happens when we correlate two variables. Using a scatterplot, we can easily visualize the two involved variables. A scatterplot is a graph where the values of two variables are treated as Cartesian coordinates; thus, for every (x, y) value a point is represented in the graph | Python Code:
from azureml import Workspace
ws = Workspace(
workspace_id='3c64d445b4c840dca9683dd47522eba3',
authorization_token='JaC5E2q5FouX14JhvCmcvmzagqV63q0oVIbu2jblLBdQ5e5wf/Y24Ed6uXLvbSUgbiao5iF85C3uufYKQgXoNw==',
endpoint='https://studioapi.azureml.net'
)
ds = ws.datasets['boston.csv']
df = ds.to_dataframe()
df.head()
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
# If you are using IPython, this will make the images available in the Notebook
Explanation: The following code can't run in Azure, because of a 403 error
boston = load_boston()
california = fetch_california_housing()
On the local machine, start from scratch instead: download boston and california, save them as csv, and then upload them to Azure
dataset = pd.DataFrame(boston.data, columns=boston.feature_names)
dataset['target'] = boston.target
dataset.to_csv('boston.csv')
End of explanation
import matplotlib.mlab as mlab
x=np.linspace(-4,4,100)
for mean,variance in [(0,0.7),(1,1.5),(-2,0.5)]:
plt.plot(x,mlab.normpdf(x,mean,variance))
plt.show()
y=mlab.normpdf(x,0,1)
type(y)
Explanation: The following code plots the normal (Gaussian) distribution
End of explanation
print(df['target'].mean())
print(np.mean(df['target']))
mean_expected_value=np.mean(df['target'])
df.ix[:,'target'].mean()
Square_errors=pd.Series(mean_expected_value-df['target'])**2
SSE=np.sum(Square_errors)
print('Sum of Squared Errors (SSE): %f'%SSE)
density_plot = Square_errors.plot(kind='hist')
Explanation: Two ways to compute the mean
SSE ---- the sum of squared errors
In the histogram, the first bar shows that about 350 samples have a squared error between 0 and 100
End of explanation
def standardize(x):
return (x-np.mean(x))/np.std(x)
standardize_target=standardize(df['target'])
standardize_target.std()
standardize_target.mean()
Explanation: Standardization
After standardization the values have mean 0 and variance 1
End of explanation
def covariance(variable_1, variable_2, bias=0):
observations = float(len(variable_1))
return np.sum((variable_1 - np.mean(variable_1)) * (variable_2 - np.mean(variable_2)))/(observations-min(bias,1))
Explanation: This function computes the covariance
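As a quick cross-check (a small optional sketch that reuses the df DataFrame loaded above), the result should match NumPy's population covariance up to floating-point error:
python
print('Ours: %0.5f' % covariance(df['RM'], df['target']))
print('NumPy: %0.5f' % np.cov(df['RM'], df['target'], bias=True)[0, 1])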
End of explanation
def correlation(var1,var2,bias=0):
return covariance(standardize(var1), standardize(var2),bias)
from scipy.stats.stats import pearsonr
print ('Our correlation estimation: %0.5f' % (correlation(df['RM'], df['target'])))
print ('Correlation from Scipy pearsonr estimation: %0.5f' % pearsonr(df['RM'], df['target'])[0])
print(pearsonr(df['RM'],df['target']))
Explanation: This function computes the correlation; the only difference is that the inputs are standardized first
End of explanation
x_range = [df['RM'].min(),df['RM'].max()]
y_range = [df['target'].min(),df['target'].max()]
scatter_plot = df.plot(kind='scatter', x='RM', y='target',xlim=x_range, ylim=y_range)
meanY = scatter_plot.plot(x_range, [df['target'].mean(),df['target'].mean()], '--' , color='red', linewidth=1)
meanX = scatter_plot.plot([df['RM'].mean(),df['RM'].mean()], y_range, '--', color='red', linewidth=1)
Explanation: Let's graph what happens when we correlate two variables. Using a scatterplot, we can easily visualize the two involved variables. A scatterplot is a graph where the values of two variables are treated as Cartesian coordinates; thus, for every (x, y) value a point is represented in the graph
End of explanation |
95 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing the Random Forest Classifier from sci-kit learn
1. Import dataset
This tutorial uses the iris dataset (https
Step1: 2. Prepare training and testing data
Each flower in this dataset contains the following features and labels
* features - measurements of the flower petals and sepals
* labels - the flower species (setosa, versicolor, or virginica) represented as a 0, 1, or 2.
Our train_test_split function will separate the data as follows
* (features_train, labels_train) - 80% of the data prepared for training
* (features_test, labels_test) - 20% of the data prepared for making our predictions and evaluating our model
Step2: 3. Create and fit the Random Forest Classifier
This tutorial uses the RandomForestClassifier model for our predictions, but you can experiment with other classifiers. To do so, import another classifier and replace the relevant code in this section.
Step3: 4. Make Predictions using Random Forest Classifier
Step4: Understanding our predictions
Our predictions will be an array of 0's 1's, and 2's, depending on which flower our algorithm believes each set of measurements to represent.
Step5: To interpret this, consider the first set of measurements in features_test
Step6: Our model believes that these measurements correspond to a setosa iris (label 0).
Step7: In this case, our model is correct, since the true label indicates that this was a setosa iris (label 0).
Step8: 5. Evaluate our model
For this section we will import two metrics from sklearn
Step9: As seen in the confusion matrix below, most predictions are accurate but our model misclassified one specimen of versicolor (our model thought that it was virginica).
Step10: As seen in the classification report below, our model has 97% precision, recall, and accuracy. | Python Code:
#Import dataset
from sklearn.datasets import load_iris
iris = load_iris()
Explanation: Implementing the Random Forest Classifier from sci-kit learn
1. Import dataset
This tutorial uses the iris dataset (https://en.wikipedia.org/wiki/Iris_flower_data_set) which comes preloaded with sklearn.
End of explanation
#Import train_test_split
from sklearn.model_selection import train_test_split
features_train, features_test, labels_train, labels_test = train_test_split(iris.data,iris.target,test_size=0.2,random_state=1)
Explanation: 2. Prepare training and testing data
Each flower in this dataset contains the following features and labels
* features - measurements of the flower petals and sepals
* labels - the flower species (setosa, versicolor, or virginica) represented as a 0, 1, or 2.
Our train_test_split function will separate the data as follows
* (features_train, labels_train) - 80% of the data prepared for training
* (features_test, labels_test) - 20% of the data prepared for making our predictions and evaluating our model
End of explanation
#Import classifier
from sklearn.ensemble import RandomForestClassifier
#Create an instance of the RandomForestClassifier
rfc = RandomForestClassifier()
#Fit our model to the training features and labels
rfc.fit(features_train,labels_train)
Explanation: 3. Create and fit the Random Forest Classifier
This tutorial uses the RandomForestClassifier model for our predictions, but you can experiment with other classifiers. To do so, import another classifier and replace the relevant code in this section.
End of explanation
rfc_predictions = rfc.predict(features_test)
Explanation: 4. Make Predictions using Random Forest Classifier
End of explanation
print(rfc_predictions)
Explanation: Understanding our predictions
Our predictions will be an array of 0's 1's, and 2's, depending on which flower our algorithm believes each set of measurements to represent.
End of explanation
print(features_test[0])
Explanation: To interpret this, consider the first set of measurements in features_test:
End of explanation
print(rfc_predictions[0])
Explanation: Our model believes that these measurements correspond to a setosa iris (label 0).
End of explanation
print(labels_test[0])
Explanation: In this case, our model is correct, since the true label indicates that this was a setosa iris (label 0).
End of explanation
#Import pandas to create the confusion matrix dataframe
import pandas as pd
#Import classification_report and confusion_matrix to evaluate our model
from sklearn.metrics import classification_report, confusion_matrix
Explanation: 5. Evaluate our model
For this section we will import two metrics from sklearn: confusion_matrix and classification_report. They will help us understand how well our model did.
End of explanation
#Create a dataframe with the confusion matrix
confusion_df = pd.DataFrame(confusion_matrix(labels_test, rfc_predictions),
columns=["Predicted " + name for name in iris.target_names],
index = iris.target_names)
confusion_df
Explanation: As seen in the confusion matrix below, most predictions are accurate but our model misclassified one specimen of versicolor (our model thought that it was virginica).
End of explanation
print(classification_report(labels_test,rfc_predictions))
Explanation: As seen in the classification report below, our model has 97% precision, recall, and accuracy.
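If you only need the overall accuracy as a single number, sklearn's accuracy_score returns it directly (an optional extra alongside the report above):
python
from sklearn.metrics import accuracy_score
print("Accuracy: %.2f" % accuracy_score(labels_test, rfc_predictions))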
End of explanation |
96 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Lasagne
There are various libraries building on top of Theano to provide easy building blocks for designing deep neural networks. Some of them are
Step1: Build the MLP
Now we use the provided layers from Lasagne to build our MLP
Step2: Create the Train Function
After loading the data and defining the MLP, we can now create the train function.
Step3: Train the model
We run the training for some epochs and output the accuracy of our network | Python Code:
import gzip
import cPickle
import numpy as np
import theano
import theano.tensor as T
import lasagne
# Load the pickle file for the MNIST dataset.
dataset = 'data/mnist.pkl.gz'
f = gzip.open(dataset, 'rb')
train_set, dev_set, test_set = cPickle.load(f)
f.close()
#train_set contains 2 entries, first the X values, second the Y values
train_x, train_y = train_set
dev_x, dev_y = dev_set
test_x, test_y = test_set
Explanation: Introduction to Lasagne
There are various libraries building on top of Theano to provide easy building blocks for designing deep neural networks. Some of them are:
- Lasagne (https://github.com/Lasagne/Lasagne)
- Blocks (https://github.com/mila-udem/blocks)
- Keras (http://keras.io/)
- OpenDeep (http://www.opendeep.org/)
All libraries are kind of similar but differ in the details, for example in the design philosophy. I chose (after too little research) Lasagne as it will allow you to interact with Theano and the computation graph. Keep an eye on this evolving area.
For a great example of how to use Lasagne for MNIST, see the Lasagne Tutorial: http://lasagne.readthedocs.org/en/latest/user/tutorial.html
Basics
Lasagne provides you with several basic components to build your neural networks. Instead of defining your HiddenLayer and SoftmaxLayer as in the previous example, you can use existing implementations from the library and easily plug them together.
In the following we will reimplement the MLP for the MNIST-dataset using Lasagne. For more information on Lasagne see http://lasagne.readthedocs.org/en/latest/
Load your dataset
As before we load our dataset. See 2_MNIST for more details.
End of explanation
def build_mlp(n_in, n_hidden, n_out, input_var=None):
#Input layer, 1 dimension = number of samples, 2 dimension = input, our 28*28 image
l_in = lasagne.layers.InputLayer(shape=(None, n_in), input_var=input_var)
# Our first hidden layer with n_hidden units
# As nonlinearity we use tanh, you could also try rectify
l_hid1 = lasagne.layers.DenseLayer(incoming=l_in,
num_units=n_hidden, nonlinearity=lasagne.nonlinearities.tanh,
W=lasagne.init.GlorotUniform())
# Our output layer (a softmax layer)
l_out = lasagne.layers.DenseLayer(incoming=l_hid1,
num_units=n_out, nonlinearity=lasagne.nonlinearities.softmax)
return l_out
Explanation: Build the MLP
Now we use the provided layers from Lasagne to build our MLP
End of explanation
# Parameters
n_in = 28*28
n_hidden = 50
n_out = 10
# Create the network
x = T.dmatrix('x') # the data, one image per row
y = T.lvector('y') # the labels are presented as 1D vector of [int] labels
network = build_mlp(n_in, n_hidden, n_out, x)
# Create a loss expression for training, i.e., a scalar objective we want
# to minimize (for our multi-class problem, it is the cross-entropy loss):
prediction = lasagne.layers.get_output(network)
loss = lasagne.objectives.categorical_crossentropy(prediction, y)
loss = loss.mean()
# Create update expressions for training, i.e., how to modify the
# parameters at each training step. Here, we'll use Stochastic Gradient
# Descent (SGD) with Nesterov momentum, but Lasagne offers plenty more.
params = lasagne.layers.get_all_params(network, trainable=True)
updates = lasagne.updates.nesterov_momentum(loss, params, learning_rate=0.01, momentum=0.9)
# Predict the labels
network_predict_label = T.argmax(lasagne.layers.get_output(network, deterministic=True), axis=1)
# Compile a function performing a training step on a mini-batch (by giving
# the updates dictionary) and returning the corresponding training loss:
train_fn = theano.function(inputs=[x, y], outputs=loss, updates=updates)
# Create the predict_labels function
predict_labels = theano.function(inputs=[x], outputs=network_predict_label)
Explanation: Create the Train Function
After loading the data and defining the MLP, we can now create the train function.
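As a quick sanity check of the architecture, you can print the number of trainable parameters with Lasagne's helper (a small optional addition; it assumes the network variable built above):
python
print("Number of trainable parameters: %d" % lasagne.layers.count_params(network, trainable=True))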
End of explanation
#Function that helps to iterate over our data in minibatches
def iterate_minibatches(inputs, targets, batchsize, shuffle=False):
assert len(inputs) == len(targets)
if shuffle:
indices = np.arange(len(inputs))
np.random.shuffle(indices)
for start_idx in range(0, len(inputs) - batchsize + 1, batchsize):
if shuffle:
excerpt = indices[start_idx:start_idx + batchsize]
else:
excerpt = slice(start_idx, start_idx + batchsize)
yield inputs[excerpt], targets[excerpt]
#Method to compute the accuracy. Call predict_labels to get the labels for the dataset
def compute_accurarcy(dataset_x, dataset_y):
predictions = predict_labels(dataset_x)
errors = sum(predictions != dataset_y) #Number of errors
accurarcy = 1 - errors/float(len(dataset_y))
return accurarcy
number_of_epochs = 10
print "%d epochs" % number_of_epochs
for epoch in xrange(number_of_epochs):
for batch in iterate_minibatches(train_x, train_y, 20, shuffle=True):
inputs, targets = batch
train_fn(inputs, targets)
accurarcy_dev = compute_accurarcy(dev_x, dev_y)
accurarcy_test = compute_accurarcy(test_x, test_y)
print "%d epoch: Accurarcy on dev: %f, accurarcy on test: %f" % (epoch, accurarcy_dev, accurarcy_test)
print "DONE"
Explanation: Train the model
We run the training for some epochs and output the accuracy of our network
End of explanation |
97 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook should be used to debug, improve or test the log visualization maker.
Step1: Loading the data
Step2: Preparing a test sample
Let's first use a particular session as a test case. We extract only the data relevant to that case
Step3: Plotting the data
Session with gap counting
Step4: Session with range and extrapolated range
Step5: Testing
Step6: TO DO
Ordered in some kind of general priority (in terms of need of feedback and desired feature)
Fix other Other category
fix visual component of build, delete and submit
clean visual - embellish
run on all 9 student pairs
What can invention be? (Rename as Other) - Two options | Python Code:
%load_ext autoreload
%autoreload 1
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
pd.options.display.max_rows = 1000
pd.options.display.max_columns = 60
#utils.py is where all our custom functions live, so we set an autoreload on it.
%aimport utils
from utils import *
%aimport viz_utils
from viz_utils import *
Explanation: This notebook should be used to debug, improve or test the log visualization maker.
End of explanation
df_all = pd.read_excel('all data v3.xlsx', 'iLab data.txt', index_col=None, na_values=['NA'])
Explanation: Loading the data
End of explanation
df_test = prepare_session(df_all,'L-2567b17a:120eda25685:-8000')
df_gaps = prepare_session(df_all,'L-10f11766:120ecd4f63a:-8000')
Explanation: Preparing a test sample
Let's first use a particular session as a test case. We extract only the data relevant to that case
End of explanation
%aimport viz_utils
plot(df_gaps,to_plot,colors, column_to_use, function_to_use)
Explanation: Plotting the data
Session with gap counting
End of explanation
%aimport viz_utils
plot(df_test,to_plot,colors, column_to_use, function_to_use)
Explanation: Session with range and extrapolated range
End of explanation
df_test2 = pd.read_excel('all_method_tester.xlsx', 'Sheet1', index_col=None, na_values=['NA'])
%autoreload
REGEX_SINGLE_VALUE_FIRST = "st\d \d(?:$|(?:\sst)|(?:\s[\-\+x/]\s[A-Z]))"
REGEX_SINGLE_VALUE_SECOND = "st\d [A-Z][\sa-z]+ [\-\+x/] \d(?:$|(?:\s?st))"
def single_value_usage(df):
usage= []
method1 = action_usage(df,'Cleaned method 1',REGEX_SINGLE_VALUE_FIRST)
usage.extend(action_usage(df,'Cleaned method 2',REGEX_SINGLE_VALUE_FIRST))
usage.extend(action_usage(df,'Cleaned method 1',REGEX_SINGLE_VALUE_SECOND))
usage.extend(action_usage(df,'Cleaned method 2',REGEX_SINGLE_VALUE_SECOND))
return clean_coords(usage)
single_value_usage(df_test2)
%aimport viz_utils
plot(df_test2,to_plot,colors, column_to_use, function_to_use)
Explanation: Testing
End of explanation
# #Using the example used for sketch.
# def export_df(df,name):
# select_df = df[["Session Id","Selection","Feedback Text","Cleaned method 1","Cleaned method 2","cases","Time_seconds","Timeshifted","Duration"]]
# writer = pd.ExcelWriter(name+'.xlsx')
# select_df.to_excel(writer,'Sheet1')
# writer.save()
#
# export_df(df_gaps,'gaps')
# export_df(df_test,'test')
Explanation: TO DO
Ordered in some kind of general priority (in terms of need of feedback and desired feature)
Fix other Other category
fix visual component of build, delete and submit
clean visual - embellish
run on all 9 student pairs
What can invention be? (Rename as Other) - Two options:
Lots of different pre-set possibilities
Anytime they use addition within a step?
Anytime they use multiplication
Use a combination of central tendency
Use a combination of methods (range and mean)
Any method that isn't other methods
after removing chunks that fit our methods, is there anything left?
or
anything that has more than two steps subtracting time coords of methods.
Done
central tendency (average median sum) - add "choose 2 4 5 7" in regex
Show the method that eventually “cracks” the cases (succeeds to solve and move on)
Testing
What works
* Central all works
* Combo of central all works
* Count gaps (even one gap) works. It doesn't light up when counting case numbers
* Combo of count gaps and Average
* Combo of count gaps and range
What doesn't
* Central choose...
* st1 5 st2 9 - 1 lights up range (good) but not single value
* Combo count all + distance doesn't light up distance nor Other
Export dataframe
End of explanation |
98 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Departamento de Física - Faculdade de Ciências e Tecnologia da Universidade de Coimbra
Física Computacional - Ficha 1 - Interpolação
Rafael Isaque Santos - 2012144694 - Licenciatura em Física
Step2: Dados um conjunto de pontos $x$, suas respectivas imagens $y$ e um intervalo de pontos x_new
Step3: Nesta figura tenta-se demonstrar a eficácia da rotina, comparando o polinómio com o presente no pdf
Step4: Como o método funciona, faz-se a interpolação em grau $n \in [ 1, 2, 3, 4 , 5]$ com $n+1$ pontos equidistantes no intervalo $x \in [-5, 5]$
A verde está representada a função original, $f(x) =\frac{1}{1+x^{2}}$, e a azul o polinómio obtido para cada caso.
Os pontos representados são os requeridos
Step5: Para um polinómio de grau 20, observa-se um óptimo ajuste em volta dos pontos centrais,
e uma oscilação imensa nos pontos junto aos extremos iniciais e finais, o que é efectivamente um mau resultado.
Isto permite verificar, como esperado, o fenómeno de Runge.
Nos pontos $x = -4.75$ e $x = 4.8$ encontramos as oscilações que fogem muito ao valor real.
Step6: 2.
O índice de refracção do poliestireno, medido para diferentes
comprimentos de onda $\lambda $ (correspondentes às riscas intensas do
espectro de sódio) é dado pela tabela seguinte
Step7: Implementa-se o método de Polinómio Interpolador pelo método de Lagrange
Step8: Tal como para o caso do Método de Interpolação de Newton, confirma-se que o método de Interpolação de Lagrange (polinómico) implementado se comporta conforme o esperado.
Step9: Utilizando o polinómio interpolador pelo método de Lagrange, determina-se o índice de refracção para $\lambda = 5000 \mathring A$ como | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Departamento de Física - Faculdade de Ciências e Tecnologia da Universidade de Coimbra
Física Computacional - Ficha 1 - Interpolação
Rafael Isaque Santos - 2012144694 - Licenciatura em Física
End of explanation
func_x = lambda x: 1/ (1 + x**2) # Função dada
def xinterval(x_i, x_f, n):
" Gera um array de 'n' pontos equidistantes entre 'x_i' e 'x_f' "
xn = np.linspace(x_i, x_f, num = n)
return xn
def newtoninterp(x, y, x_new):
    """ Given a set of points 'x', their respective images 'y', and
    an interval of points 'x_new' over which the interpolating
    polynomial will be evaluated. """
n = len(x)
def difdiv(xi):
"Cálculo dos coeficientes através do método das diferenças divididas"
d = list(y)
for j in range(1, n):
for i in range (n-1, j-1, -1): # intervalo escolhido de forma a evitar dependências de variáveis no cálculo
d[i] = (d[i]-d[i-1]) / (xi[i] - xi[i-j])
return d
def interpol(coef, x_pts, x_new):
y_new = []
for pt in x_new:
co = coef[len(coef)-1] # último coeficiente
for i in range(n-2, -1, -1): # intervalo para multiplicar do último ponto para o primeiro, sem dependências
co *= pt - x_pts[i]
co += coef[i]
y_new.append(co)
return y_new
coef_l = difdiv(x)
ypol = interpol(coef_l, x, x_new)
return ypol, coef_l
Explanation: Given a set of points $x$, their respective images $y$ and an interval of points x_new:
+ The coefficients are computed with the divided-differences method
+ For each point of x_new, the polynomial is evaluated, giving the values y_new.
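For reference, with the divided-difference coefficients $d_0, d_1, \dots, d_{n-1}$ the interpolating polynomial is evaluated in nested (Horner-like) form,
$$p(x) = d_0 + (x - x_0)\Big(d_1 + (x - x_1)\big(d_2 + \dots + (x - x_{n-2})\,d_{n-1}\big)\Big),$$
which is exactly what the inner loop of interpol computes.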
End of explanation
test_x, test_y = [1, 2, 4, 5, 8], [10, 5, 2, 4, 14]
plot_x_range = np.linspace(1, 8, num = 150)
pol_teste, pol_list = newtoninterp(test_x, test_y, plot_x_range)
pol_dado = lambda x: (1042/63) - (146/21)*x + (7/36)*x**2 + (5/21)*x**3 - (5/252)*x**4
plt.figure(figsize=(12, 5))
plt.scatter(test_x, test_y)
plt.plot(plot_x_range, pol_teste, 'c''-', label = 'interpolado')
plt.plot(plot_x_range, pol_dado(plot_x_range), 'r' '--', label = 'polinómio dado')
plt.title('Teste de implementação do método de interpolação polinomial de Newton')
plt.xlabel('$x_{i}$', size=20)
plt.ylabel('$f(x_{i})$', size = 16)
plt.legend(loc='upper center')
plt.show()
Explanation: This figure tries to demonstrate the effectiveness of the routine by comparing the polynomial with the one given in the pdf:
$p(x) = \frac{1042}{63} - \frac{146}{21} x + \frac{7}{36} x^{2} + \frac{5}{21} x^{3} - \frac{5}{252} x^{4}$
We can see that the method is correctly implemented, since the results are identical.
End of explanation
fig = plt.figure(figsize= (20, 12))
x_span = np.linspace(-5, 5, num = 200)
x_set = [-4.92, -2.67, -1.58, 0.88, 2.22, 3.14, 4.37]
y_set = list(map(func_x, x_set))
for i in range(1, 6):
x_p = xinterval(-5, 5, i+1)
y_p = func_x(x_p)
y_gen, y_poli = newtoninterp(x_p, y_p, x_span)
ip = fig.add_subplot(3, 2, i)
ip.set_title('Interpolação de grau: ' + str(i))
print('Coeficientes da Interpolação de grau ' + str(i) + ':')
s=''
for p in range(0, len(y_poli)): s += 'C' + str(p) + ': ' + str(y_poli[p]) + '; '
print(s)
#plt.scatter(x_p, y_p)
plt.plot(x_span, y_gen, 'c' , label = 'polinómio')
plt.plot(x_span, func_x(x_span), 'r', label = r'$\frac{1}{1+x^{2}}$') # original function
plt.scatter(x_set, y_set)
if i == 4 or i == 5: plt.xlabel('$x_{i}$', size=20)
if i %2 != 0: plt.ylabel('$f(x_{i})$', size = 16)
plt.legend(loc='best')
# plt.plot(x_span, exact_gen)
plt.show()
Explanation: Since the method works, the interpolation is performed for degrees $n \in [ 1, 2, 3, 4 , 5]$ with $n+1$ equally spaced points in the interval $x \in [-5, 5]$
The original function, $f(x) =\frac{1}{1+x^{2}}$, is plotted in red, and the polynomial obtained for each case in cyan.
The points shown are the required ones:
[-4.92, -2.67, -1.58, 0.88, 2.22, 3.14, 4.37]
End of explanation
x_20 = xinterval(-5, 5, 21)
y_20 = list(map(func_x, x_20))
evalx_20 = [-4.75, 4.8]
evaly_20 = list(map(func_x, evalx_20))
pol_20, lpol_20 = newtoninterp(x_20, y_20, x_span)
print('Coeficientes da Interpolação:')
for p in range(0, len(lpol_20)): print('C' + str(p) + ': ' + str(lpol_20[p]))
fig = plt.figure(figsize = (10, 4))
i20 = fig.add_subplot(111)
i20.set_title('Interpolação de grau 20')
plt.plot(x_span, pol_20, 'c', label = 'interpolado')
plt.plot(x_span, func_x(x_span), 'r' '--' , label = r'$\frac{1}{1+x^{2}}$')
plt.scatter(evalx_20, evaly_20)
plt.xlabel('$x_{i}$', size=20)
plt.ylabel('$f(x_{i})$', size = 16)
plt.legend(loc = 'best')
plt.show()
Explanation: For a polynomial of degree 20, we observe a very good fit around the central points,
and huge oscillations at the points near the ends of the interval, which is effectively a bad result.
This confirms, as expected, the Runge phenomenon.
At the points $x = -4.75$ and $x = 4.8$ we find oscillations that deviate strongly from the true value.
End of explanation
wl = [4358, 4861, 5896, 6563, 7679] # wavelength
n_wl = [1.6174, 1.6062, 1.5923, 1.5870, 1.5812] # n for each wavelength
Explanation: 2.
The refractive index of polystyrene, measured for different
wavelengths $\lambda $ (corresponding to the intense lines of the
sodium spectrum), is given by the following table:
| $\lambda (\mathring A)$ | 4358 | 4861 | 5896 | 6563 | 7679 |
|---------|--------|--------|--------|--------|--------|
| n | 1.6174 | 1.6062 | 1.5923 | 1.5870 | 1.5812 |
End of explanation
def Lagrangepolinterp(x, y, x_new):
n = len(x)
y_new = []
for x_n in x_new:
lag_pols = []
for i in range(n):
l = 1
for k in range(n):
if k != i:
l *= (x_n - x[k]) / (x[i] - x[k])
lag_pols.append(l)
point = 0
for i, j in zip(y, lag_pols):
point += i * j
y_new.append(point)
return y_new
Explanation: The Lagrange interpolating polynomial method is implemented:
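For reference, the routine evaluates
$$p(x) = \sum_{i=0}^{n-1} y_i\, L_i(x), \qquad L_i(x) = \prod_{k \neq i} \frac{x - x_k}{x_i - x_k},$$
building each Lagrange basis polynomial $L_i$ directly at every evaluation point.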
End of explanation
lag_pol_test = Lagrangepolinterp(test_x, test_y, plot_x_range)
plt.figure(figsize=(12, 5))
plt.plot(plot_x_range, lag_pol_test, 'r' '--', label = 'interpolação Lagrange')
plt.plot(plot_x_range, pol_dado(plot_x_range), 'c' '-', label = 'polinómio dado')
plt.scatter(test_x, test_y)
plt.legend(loc='upper center')
plt.title('Teste de implementação do método polinomial de Lagrange')
plt.xlabel('$x_{i}$', size=20)
plt.ylabel('$f(x_{i})$', size = 16)
plt.show()
x_in, x_fi, dx = 3500, 8500, 0.01  # interval of points between 3500 and 8500, in steps of 0.01
wl_range = np.arange(x_in, x_fi, dx)
n_refr = Lagrangepolinterp(wl, n_wl, wl_range)
plt.figure(figsize = (12,5))
plt.scatter(wl, n_wl, color = 'red', label = '$\lambda (n)$ da tabela')
plt.plot(wl_range, n_refr, 'c', label = '$\lambda (n)$ interpolado')
plt.title('Interpolação de $\lambda (n)$ utilizando o método polinomial de Lagrange')
plt.xlabel('$\lambda$', size=16)
plt.ylabel('$n$', size=18)
plt.legend(loc='upper right')
plt.show()
Explanation: Just as for the Newton interpolation method, we confirm that the implemented Lagrange (polynomial) interpolation method behaves as expected.
End of explanation
n_5000 = n_refr[int((5000 - x_in) / dx)]
print(n_5000)
Explanation: Using the Lagrange interpolating polynomial, the refractive index for $\lambda = 5000 \mathring A$ is determined as:
End of explanation |
99 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
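Putting that together, the loss cell typically ends up looking something like the following sketch (variable names assume the logits from the cell above and the smooth parameter from the hyperparameters):
python
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
                                            labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
                                            labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
                                            labels=tf.ones_like(d_logits_fake)))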
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
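In code, the filtering and the two optimizers might look like this sketch (the learning_rate value here is only an assumed placeholder):
python
learning_rate = 0.002  # assumed placeholder hyperparameter
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)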
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
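One common way to visualize them afterwards (a sketch; it assumes the training loop collected (d_loss, g_loss) pairs in a list called losses):
python
fig, ax = plt.subplots()
losses_arr = np.array(losses)  # assumed: list of (d_loss, g_loss) pairs from training
ax.plot(losses_arr.T[0], label='Discriminator')
ax.plot(losses_arr.T[1], label='Generator')
ax.set_title('Training Losses')
ax.legend()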
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator; it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
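Just to make the reuse behavior concrete, here's a tiny sketch (the variable name 'w' is only for illustration): asking for the same name inside a scope opened with reuse=True hands back the existing variable instead of creating a new one.
python
with tf.variable_scope('generator'):
    w = tf.get_variable('w', shape=[5, 5])        # created on the first pass
with tf.variable_scope('generator', reuse=True):
    w_again = tf.get_variable('w', shape=[5, 5])  # reused, not recreated
assert w is w_again  # same underlying variable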
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one ourselves. For this you can take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input values x is alpha*x, and the output for positive x is x:
$$
f(x) = \max(\alpha x, x)
$$
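If you'd rather have this as a reusable helper, a minimal version could look like the sketch below (the networks in this notebook just inline the same tf.maximum call instead):
python
def leaky_relu(x, alpha=0.01):
    # positive inputs pass through unchanged; negative inputs are scaled by alpha
    return tf.maximum(alpha * x, x)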
Tanh Output
The generator has been found to perform best with $\tanh$ for its output. This means we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
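A quick sketch of that rescaling (variable names are illustrative; the training loop later does the same thing batch by batch):
python
# MNIST pixels come in as floats in [0, 1]; map them to [-1, 1] to match the tanh range
batch_images = batch_images * 2 - 1
# to view tanh outputs as images again, map [-1, 1] back to [0, 1]
display_images = (gen_samples + 1) / 2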
End of explanation
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
End of explanation
# Size of input image to discriminator
input_size = 784
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
#g_hidden_size = 128
#d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
Explanation: Build network
Now we're building the network from the functions defined above.
First we get our inputs, input_real and input_z, from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
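As an optional sanity check (not part of the original notebook), you can confirm that calling the discriminator twice with reuse=True didn't duplicate any weights:
python
# expect one kernel/bias per discriminator layer, not two copies
d_var_names = [v.name for v in tf.trainable_variables() if v.name.startswith('discriminator')]
print(d_var_names)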
End of explanation
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
                                             labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean over all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses d_logits_fake, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
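For a single logit $x$ and label $z$, the sigmoid cross-entropy that tf.nn.sigmoid_cross_entropy_with_logits computes is
$$
\ell(x, z) = -\,z \log \sigma(x) - (1 - z) \log\big(1 - \sigma(x)\big)
$$
(TensorFlow evaluates this in the numerically stable form $\max(x, 0) - xz + \log(1 + e^{-|x|})$, but it's the same quantity.) With $z = 1 - smooth$ for the real images, $z = 0$ for the fake images, and $z = 1$ for the generator loss, the code above is just this formula averaged over the batch.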
End of explanation
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build separate optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This returns a list of all the trainable variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables whose names start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
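As a quick optional check (a sketch, not part of the original notebook), every trainable variable should land in exactly one of the two lists:
python
# if this fails, some variable was created outside the generator/discriminator scopes
assert len(g_vars) + len(d_vars) == len(tf.trainable_variables())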
End of explanation
!mkdir -p checkpoints
batch_size = 100
epochs = 100
samples = []
losses = []
# Only save generator variables
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
_ = view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures, such as 1s and 9s, appear out of the noise.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |