Datasets:

Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k)
---|---|---|
0 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 1
Step1: If you've set up your environment properly, this cell should run without problems
Step2: Now, run this cell to log into OkPy.
This is the submission system for the class; you will use this
website to confirm that you've submitted your assignment.
Step5: 2. Python
Python is the main programming language we'll use in this course. We assume you have some experience with Python or can learn it yourself, but here is a brief review.
Below are some simple Python code fragments.
You should feel confident explaining what each fragment is doing. If not,
please brush up on your Python. There are a number of tutorials online (search
for "Python tutorial"). https://docs.python.org/3/tutorial/ is a good place to start.
Step6: Question 1
Question 1a
Write a function nums_reversed that takes in an integer n and returns a string
containing the numbers 1 through n including n in reverse order, separated
by spaces. For example
Step7: Question 1b
Write a function string_splosion that takes in a non-empty string like
"Code" and returns a long string containing every prefix of the input.
For example
Step8: Question 1c
Write a function double100 that takes in a list of integers
and returns True only if the list has two 100s next to each other.
>>> double100([100, 2, 3, 100])
False
>>> double100([2, 3, 100, 100, 5])
True
Step9: Question 1d
Write a function median that takes in a list of numbers
and returns the median element of the list. If the list has even
length, it returns the mean of the two elements in the middle.
>>> median([5, 4, 3, 2, 1])
3
>>> median([ 40, 30, 10, 20 ])
25
Step10: 3. NumPy
The NumPy library lets us do fast, simple computing with numbers in Python.
3.1. Arrays
The basic NumPy data type is the array, a homogeneously-typed sequential collection (a list of things that all have the same type). Arrays will most often contain strings, numbers, or other arrays.
Let's create some arrays
Step11: Math operations on arrays happen element-wise. Here's what we mean
Step12: This is not only very convenient (fewer for loops!) but also fast. NumPy is designed to run operations on arrays much faster than equivalent Python code on lists. Data science sometimes involves working with large datasets where speed is important - even the constant factors!
Jupyter pro-tip
Step13: Another Jupyter pro-tip
Step14: Question 2
Using the np.linspace function, create an array called xs that contains
100 evenly spaced points between 0 and 2 * np.pi. Then, create an array called ys that
contains the value of $ \sin{x} $ at each of those 100 points.
Hint
Step15: The plt.plot function from another library called matplotlib lets us make plots. It takes in
an array of x-values and a corresponding array of y-values. It makes a scatter plot of the (x, y) pairs and connects points with line segments. If you give it enough points, it will appear to create a smooth curve.
Let's plot the points you calculated in the previous question
Step16: This is a useful recipe for plotting any function
Step17: Calculating derivatives is an important operation in data science, but it can be difficult. We can have computers do it for us using a simple idea called numerical differentiation.
Consider the ith point (xs[i], ys[i]). The slope of sin at xs[i] is roughly the slope of the line connecting (xs[i], ys[i]) to the nearby point (xs[i+1], ys[i+1]). That slope is
Step18: Question 4
Plot the slopes you computed. Then plot cos on top of your plot, calling plt.plot again in the same cell. Did numerical differentiation work?
Note
Step19: In the plot above, it's probably not clear which curve is which. Examine the cell below to see how to plot your results with a legend.
Step20: 3.2. Multidimensional Arrays
A multidimensional array is a primitive version of a table, containing only one kind of data and having no column labels. A 2-dimensional array is useful for working with matrices of numbers.
Step21: Arrays allow you to assign to multiple places at once. The special character
Step22: In fact, you can use arrays of indices to assign to multiple places. Study the next example and make sure you understand how it works.
Step23: Question 5
Create a 50x50 array called twice_identity that contains all zeros except on the
diagonal, where it contains the value 2.
Start by making a 50x50 array of all zeros, then set the values. Use indexing, not a for loop! (Don't use np.eye either, though you might find that function useful later.)
Step24: 4. A Picture Puzzle
Your boss has given you some strange text files. He says they're images,
some of which depict a summer scene and the rest a winter scene.
He demands that you figure out how to determine whether a given
text file represents a summer scene or a winter scene.
You receive 10 files, 1.txt through 10.txt. Peek at the files in a text
editor of your choice.
Question 6
How do you think the contents of the file are structured? Take your best guess.
Write your answer here, replacing this text.
Question 7
Create a function called read_file_lines that takes in a filename as its argument.
This function should return a Python list containing the lines of the
file as strings. That is, if 1.txt contains
Step25: Each file begins with a line containing two numbers. After checking the length of
a file, you could notice that the product of these two numbers equals the number of
lines in each file (other than the first one).
This suggests the rows represent elements in a 2-dimensional grid. In fact, each
dataset represents an image!
On the first line, the first of the two numbers is
the height of the image (in pixels) and the second is the width (again in pixels).
Each line in the rest of the file contains the pixels of the image.
Each pixel is a triplet of numbers denoting how much red, green, and blue
the pixel contains, respectively.
In image processing, each column in one of these image files is called a channel
(disregarding line 1). So there are 3 channels
Step27: Question 9
Images in numpy are simply arrays, but we can also display them as
actual images in this notebook.
Use the provided show_images function to display image1. You may call it
like show_images(image1). If you later have multiple images to display, you
can call show_images([image1, image2]) to display them all at once.
The resulting image should look almost completely black. Why do you suppose
that is?
Step28: Question 10
If you look at the data, you'll notice all the numbers lie between 0 and 10.
In NumPy, a color intensity is an integer ranging from 0 to 255, where 0 is
no color (black). That's why the image is almost black. To see the image,
we'll need to rescale the numbers in the data to have a larger range.
Define a function expand_image_range that takes in an image. It returns a
new copy of the image with the following transformation
Step29: Question 11
Eureka! You've managed to reveal the image that the text file represents.
Now, define a function called reveal_file that takes in a filename
and returns an expanded image. This should be relatively easy since you've
defined functions for each step in the process.
Then, set expanded_images to a list of all the revealed images. There are
10 images to reveal (including the one you just revealed).
Finally, use show_images to display the expanded_images.
Step30: Notice that 5 of the above images are of summer scenes; the other 5
are of winter.
Think about how you'd distinguish between pictures of summer and winter. What
qualities of the image seem to signal to your brain that the image is one of
summer? Of winter?
One trait that seems specific to summer pictures is that the colors are warmer.
Let's see if the proportion of pixels of each color in the image can let us
distinguish between summer and winter pictures.
Question 12
To simplify things, we can categorize each pixel according to its most intense
(highest-value) channel. (Remember, red, green, and blue are the 3 channels.)
For example, we could just call a [2 4 0] pixel "green." If a pixel has a
tie between several channels, let's count it as none of them.
Write a function proportion_by_channel. It takes in an image. It assigns
each pixel to its greatest-intensity channel
Step31: Let's plot the proportions you computed above on a bar chart
Step32: Question 13
What do you notice about the colors present in the summer images compared to
the winter ones?
Use this info to write a function summer_or_winter. It takes in an image and
returns True if the image is a summer image and False if the image is a
winter image.
Do not hard-code the function to the 10 images you currently have (e.g.,
if image1, return False). We will run your function on other images
that we've reserved for testing.
You must classify all of the 10 provided images correctly to pass the test
for this function.
Step33: Congrats! You've created your very first classifier for this class.
Question 14
How do you think your classification function will perform
in general?
Why do you think it will perform that way?
What do you think would most likely give you false positives?
False negatives?
Write your answer here, replacing this text.
Final note
Step34: 5. Submitting this assignment
First, run this cell to run all the autograder tests at once so you can double-
check your work.
Step35: Now, run this code in your terminal to make a
git commit
that saves a snapshot of your changes in git. The last line of the cell
runs git push, which will send your work to your personal Github repo.
```
Tell git to commit all the changes so far
git add -A
Tell git to make the commit
git commit -m "hw1 finished"
Send your updates to your personal private repo
git push origin master
```
Finally, we'll submit the assignment to OkPy so that the staff will know to
grade it. You can submit as many times as you want and you can choose which
submission you want us to grade by going to https://okpy.org/cal/data100/sp17/. | Python Code:
!pip install -U okpy
Explanation: Homework 1: Setup and (Re-)Introduction to Python
Course Policies
Here are some important course policies. These are also located at
http://www.ds100.org/sp17/.
Tentative Grading
There will be 7 challenging homework assignments. Homeworks must be completed
individually and will mix programming and short answer questions. At the end of
each week of instruction we will have an online multiple choice quiz ("vitamin") that will
help you stay up-to-date with lecture materials. Lab assignments will be
graded for completion and are intended to help with the homework assignments.
40% Homeworks
13% Vitamins
7% Labs
15% Midterm
25% Final
Collaboration Policy
Data science is a collaborative activity. While you may talk with others about
the homework, we ask that you write your solutions individually. If you do
discuss the assignments with others please include their names at the top
of your solution. Keep in mind that content from the homework and vitamins will
likely be covered on both the midterm and final.
This assignment
In this assignment, you'll learn (or review):
How to set up Jupyter on your own computer.
How to check out and submit assignments for this class.
Python basics, like defining functions.
How to use the numpy library to compute with arrays of numbers.
1. Setup
If you haven't already, read through the instructions at
http://www.ds100.org/spring-2017/setup.
The instructions for submission are at the end of this notebook.
First, let's make sure you have the latest version of okpy.
End of explanation
import math
import numpy as np
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
from datascience import *
from client.api.notebook import Notebook
ok = Notebook('hw1.ok')
Explanation: If you've set up your environment properly, this cell should run without problems:
End of explanation
ok.auth(inline=True)
Explanation: Now, run this cell to log into OkPy.
This is the submission system for the class; you will use this
website to confirm that you've submitted your assignment.
End of explanation
2 + 2
# This is a comment.
# In Python, the ** operator performs exponentiation.
math.e**(-2)
print("Hello" + ",", "world!")
"Hello, cell output!"
def add2(x):
    """This docstring explains what this function does: it adds 2 to a number."""
    return x + 2
def makeAdder(amount):
    """Make a function that adds the given amount to a number."""
    def addAmount(x):
        return x + amount
    return addAmount
add3 = makeAdder(3)
add3(4)
# add4 is very similar to add2, but it's been created using a lambda expression.
add4 = lambda x: x + 4
add4(5)
sameAsMakeAdder = lambda amount: lambda x: x + amount
add5 = sameAsMakeAdder(5)
add5(6)
def fib(n):
if n <= 1:
return 1
# Functions can call themselves recursively.
return fib(n-1) + fib(n-2)
fib(4)
# A for loop repeats a block of code once for each
# element in a given collection.
for i in range(5):
if i % 2 == 0:
print(2**i)
else:
print("Odd power of 2")
# A list comprehension is a convenient way to apply a function
# to each element in a given collection.
# The String method join appends together all its arguments
# separated by the given string. So we append each element produced
# by the list comprehension, each separated by a newline ("\n").
print("\n".join([str(2**i) if i % 2 == 0 else "Odd power of 2" for i in range(5)]))
Explanation: 2. Python
Python is the main programming language we'll use in this course. We assume you have some experience with Python or can learn it yourself, but here is a brief review.
Below are some simple Python code fragments.
You should feel confident explaining what each fragment is doing. If not,
please brush up on your Python. There are a number of tutorials online (search
for "Python tutorial"). https://docs.python.org/3/tutorial/ is a good place to
start.
End of explanation
def nums_reversed(n):
...
_ = ok.grade('q01a')
_ = ok.backup()
Explanation: Question 1
Question 1a
Write a function nums_reversed that takes in an integer n and returns a string
containing the numbers 1 through n including n in reverse order, separated
by spaces. For example:
>>> nums_reversed(5)
'5 4 3 2 1'
Note: The ellipsis (...) indicates something you should fill in. It doesn't necessarily imply you should replace it with only one line of code.
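For reference, one possible way to fill in the skeleton above (a sketch, not the official solution; any approach producing the same string should work):
```
def nums_reversed(n):
    # Join the numbers n, n-1, ..., 1 with single spaces.
    return ' '.join(str(i) for i in range(n, 0, -1))
```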
End of explanation
def string_splosion(string):
...
_ = ok.grade('q01b')
_ = ok.backup()
Explanation: Question 1b
Write a function string_splosion that takes in a non-empty string like
"Code" and returns a long string containing every prefix of the input.
For example:
>>> string_splosion('Code')
'CCoCodCode'
>>> string_splosion('data!')
'ddadatdatadata!'
>>> string_splosion('hi')
'hhi'
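One possible implementation sketch (not necessarily the official solution):
```
def string_splosion(string):
    # Concatenate every prefix: 'C' + 'Co' + 'Cod' + 'Code' for 'Code'.
    return ''.join(string[:i] for i in range(1, len(string) + 1))
```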
End of explanation
def double100(nums):
...
_ = ok.grade('q01c')
_ = ok.backup()
Explanation: Question 1c
Write a function double100 that takes in a list of integers
and returns True only if the list has two 100s next to each other.
>>> double100([100, 2, 3, 100])
False
>>> double100([2, 3, 100, 100, 5])
True
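A minimal sketch of one way to do this (other approaches are equally valid):
```
def double100(nums):
    # Check every adjacent pair for (100, 100).
    return any(a == 100 and b == 100 for a, b in zip(nums, nums[1:]))
```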
End of explanation
def median(number_list):
...
_ = ok.grade('q01d')
_ = ok.backup()
Explanation: Question 1d
Write a function median that takes in a list of numbers
and returns the median element of the list. If the list has even
length, it returns the mean of the two elements in the middle.
>>> median([5, 4, 3, 2, 1])
3
>>> median([ 40, 30, 10, 20 ])
25
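One possible implementation sketch (it assumes the list is non-empty):
```
def median(number_list):
    # Sort a copy, then take the middle element (odd length)
    # or the mean of the two middle elements (even length).
    s = sorted(number_list)
    mid = len(s) // 2
    if len(s) % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2
```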
End of explanation
array1 = np.array([2, 3, 4, 5])
array2 = np.arange(4)
array1, array2
Explanation: 3. NumPy
The NumPy library lets us do fast, simple computing with numbers in Python.
3.1. Arrays
The basic NumPy data type is the array, a homogeneously-typed sequential collection (a list of things that all have the same type). Arrays will most often contain strings, numbers, or other arrays.
Let's create some arrays:
End of explanation
array1 * 2
array1 * array2
array1 ** array2
Explanation: Math operations on arrays happen element-wise. Here's what we mean:
End of explanation
np.arange?
Explanation: This is not only very convenient (fewer for loops!) but also fast. NumPy is designed to run operations on arrays much faster than equivalent Python code on lists. Data science sometimes involves working with large datasets where speed is important - even the constant factors!
Jupyter pro-tip: Pull up the docs for any function in Jupyter by running a cell with
the function name and a ? at the end:
End of explanation
np.linspace
Explanation: Another Jupyter pro-tip: Pull up the docs for any function in Jupyter by typing the function
name, then <Shift>-<Tab> on your keyboard. Super convenient when you forget the order
of the arguments to a function. You can press <Tab> multiple times to expand the docs.
Try it on the function below:
End of explanation
xs = ...
ys = ...
_ = ok.grade('q02')
_ = ok.backup()
Explanation: Question 2
Using the np.linspace function, create an array called xs that contains
100 evenly spaced points between 0 and 2 * np.pi. Then, create an array called ys that
contains the value of $ \sin{x} $ at each of those 100 points.
Hint: Use the np.sin function. (You should be able to define each variable with one line of code.)
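For reference, a sketch of one way to fill in the two lines above:
```
xs = np.linspace(0, 2 * np.pi, 100)  # 100 evenly spaced points in [0, 2*pi]
ys = np.sin(xs)                      # sin evaluated at each of those points
```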
End of explanation
plt.plot(xs, ys)
Explanation: The plt.plot function from another library called matplotlib lets us make plots. It takes in
an array of x-values and a corresponding array of y-values. It makes a scatter plot of the (x, y) pairs and connects points with line segments. If you give it enough points, it will appear to create a smooth curve.
Let's plot the points you calculated in the previous question:
End of explanation
# Try plotting cos here.
Explanation: This is a useful recipe for plotting any function:
1. Use linspace or arange to make a range of x-values.
2. Apply the function to each point to produce y-values.
3. Plot the points.
You might remember from calculus that the derivative of the sin function is the cos function. That means that the slope of the curve you plotted above at any point xs[i] is given by cos(xs[i]). You can try verifying this by plotting cos in the next cell.
End of explanation
def derivative(xvals, yvals):
...
slopes = ...
slopes[:5]
_ = ok.grade('q03')
_ = ok.backup()
Explanation: Calculating derivatives is an important operation in data science, but it can be difficult. We can have computers do it for us using a simple idea called numerical differentiation.
Consider the ith point (xs[i], ys[i]). The slope of sin at xs[i] is roughly the slope of the line connecting (xs[i], ys[i]) to the nearby point (xs[i+1], ys[i+1]). That slope is:
(ys[i+1] - ys[i]) / (xs[i+1] - xs[i])
If the difference between xs[i+1] and xs[i] were infinitesimal, we'd have exactly the derivative. In numerical differentiation we take advantage of the fact that it's often good enough to use "really small" differences instead.
Question 3
Define a function called derivative that takes in an array of x-values and their
corresponding y-values and computes the slope of the line connecting each point to the next point.
>>> derivative(np.array([0, 1, 2]), np.array([2, 4, 6]))
np.array([2., 2.])
>>> derivative(np.arange(5), np.arange(5) ** 2)
np.array([0., 2., 4., 6.])
Notice that the output array has one less element than the inputs since we can't
find the slope for the last point.
It's possible to do this in one short line using slicing, but feel free to use whatever method you know.
Then, use your derivative function to compute the slopes for each point in xs, ys.
Store the slopes in an array called slopes.
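A sketch of one way to implement the forward-difference formula stated above (not necessarily the official solution):
```
def derivative(xvals, yvals):
    # Slope between each point and the next; the result has one fewer element.
    return (yvals[1:] - yvals[:-1]) / (xvals[1:] - xvals[:-1])

slopes = derivative(xs, ys)
```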
End of explanation
...
...
Explanation: Question 4
Plot the slopes you computed. Then plot cos on top of your plot, calling plt.plot again in the same cell. Did numerical differentiation work?
Note: Since we have only 99 slopes, you'll need to take off the last x-value before plotting to avoid an error.
End of explanation
plt.plot(xs[:-1], slopes, label="Numerical derivative")
plt.plot(xs[:-1], np.cos(xs[:-1]), label="True derivative")
# You can just call plt.legend(), but the legend will cover up
# some of the graph. Use bbox_to_anchor=(x,y) to set the x-
# and y-coordinates of the center-left point of the legend,
# where, for example, (0, 0) is the bottom-left of the graph
# and (1, .5) is all the way to the right and halfway up.
plt.legend(bbox_to_anchor=(1, .5), loc="center left");
Explanation: In the plot above, it's probably not clear which curve is which. Examine the cell below to see how to plot your results with a legend.
End of explanation
# The zeros function creates an array with the given shape.
# For a 2-dimensional array like this one, the first
# coordinate says how far the array goes *down*, and the
# second says how far it goes *right*.
array3 = np.zeros((4, 5))
array3
# The shape attribute returns the dimensions of the array.
array3.shape
# You can think of array3 as an array containing 4 arrays, each
# containing 5 zeros. Accordingly, we can set or get the third
# element of the second array in array3 using standard Python
# array indexing syntax twice:
array3[1][2] = 7
array3
# This comes up so often that there is special syntax provided
# for it. The comma syntax is equivalent to using multiple
# brackets:
array3[1, 2] = 8
array3
Explanation: 3.2. Multidimensional Arrays
A multidimensional array is a primitive version of a table, containing only one kind of data and having no column labels. A 2-dimensional array is useful for working with matrices of numbers.
End of explanation
array4 = np.zeros((3, 5))
array4[:, 2] = 5
array4
Explanation: Arrays allow you to assign to multiple places at once. The special character : means "everything."
End of explanation
array5 = np.zeros((3, 5))
rows = np.array([1, 0, 2])
cols = np.array([3, 1, 4])
# Indices (1,3), (0,1), and (2,4) will be set.
array5[rows, cols] = 3
array5
Explanation: In fact, you can use arrays of indices to assign to multiple places. Study the next example and make sure you understand how it works.
End of explanation
twice_identity = ...
...
twice_identity
_ = ok.grade('q05')
_ = ok.backup()
Explanation: Question 5
Create a 50x50 array called twice_identity that contains all zeros except on the
diagonal, where it contains the value 2.
Start by making a 50x50 array of all zeros, then set the values. Use indexing, not a for loop! (Don't use np.eye either, though you might find that function useful later.)
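A sketch of one indexing-based approach (the name diag below is just illustrative):
```
twice_identity = np.zeros((50, 50))
diag = np.arange(50)               # row indices 0..49, reused as the column indices
twice_identity[diag, diag] = 2     # set the whole diagonal in one vectorized assignment
```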
End of explanation
def read_file_lines(filename):
...
...
file1 = ...
file1[:5]
_ = ok.grade('q07')
_ = ok.backup()
Explanation: 4. A Picture Puzzle
Your boss has given you some strange text files. He says they're images,
some of which depict a summer scene and the rest a winter scene.
He demands that you figure out how to determine whether a given
text file represents a summer scene or a winter scene.
You receive 10 files, 1.txt through 10.txt. Peek at the files in a text
editor of your choice.
Question 6
How do you think the contents of the file are structured? Take your best guess.
Write your answer here, replacing this text.
Question 7
Create a function called read_file_lines that takes in a filename as its argument.
This function should return a Python list containing the lines of the
file as strings. That is, if 1.txt contains:
1 2 3
3 4 5
7 8 9
the return value should be: ['1 2 3\n', '3 4 5\n', '7 8 9\n'].
Then, use the read_file_lines function on the file 1.txt, reading the contents
into a variable called file1.
Hint: Check out this Stack Overflow page on reading lines of files.
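One possible sketch; readlines() keeps the trailing newline characters, which matches the example above:
```
def read_file_lines(filename):
    with open(filename) as f:
        return f.readlines()

file1 = read_file_lines('1.txt')
```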
End of explanation
def lines_to_image(file_lines):
...
image_array = ...
# Make sure to call astype like this on the 3-dimensional array
# you produce, before returning it.
return image_array.astype(np.uint8)
image1 = ...
image1.shape
_ = ok.grade('q08')
_ = ok.backup()
Explanation: Each file begins with a line containing two numbers. After checking the length of
a file, you could notice that the product of these two numbers equals the number of
lines in each file (other than the first one).
This suggests the rows represent elements in a 2-dimensional grid. In fact, each
dataset represents an image!
On the first line, the first of the two numbers is
the height of the image (in pixels) and the second is the width (again in pixels).
Each line in the rest of the file contains the pixels of the image.
Each pixel is a triplet of numbers denoting how much red, green, and blue
the pixel contains, respectively.
In image processing, each column in one of these image files is called a channel
(disregarding line 1). So there are 3 channels: red, green, and blue.
Question 8
Define a function called lines_to_image that takes in the contents of a
file as a list (such as file1). It should return an array containing integers of
shape (n_rows, n_cols, 3). That is, it contains the pixel triplets organized in the
correct number of rows and columns.
For example, if the file originally contained:
4 2
0 0 0
10 10 10
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
The resulting array should be a 3-dimensional array that looks like this:
array([
[ [0,0,0], [10,10,10] ],
[ [2,2,2], [3,3,3] ],
[ [4,4,4], [5,5,5] ],
[ [6,6,6], [7,7,7] ]
])
The string method split and the function np.reshape might be useful.
Important note: You must call .astype(np.uint8) on the final array before
returning so that numpy will recognize the array represents an image.
Once you've defined the function, set image1 to the result of calling
lines_to_image on file1.
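A sketch of one possible implementation, assuming the file layout described above (blank lines are skipped just in case):
```
def lines_to_image(file_lines):
    # First line holds the height and width; the rest are "r g b" pixel rows.
    n_rows, n_cols = [int(x) for x in file_lines[0].split()]
    pixels = np.array([[int(x) for x in line.split()]
                       for line in file_lines[1:] if line.strip()])
    image_array = pixels.reshape((n_rows, n_cols, 3))
    return image_array.astype(np.uint8)

image1 = lines_to_image(file1)
```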
End of explanation
def show_images(images, ncols=2, figsize=(10, 7), **kwargs):
    """Shows one or more color images.

    images: Image or list of images. Each image is a 3-dimensional
        array, where dimension 1 indexes height and dimension 2
        the width. Dimension 3 indexes the 3 color values red,
        blue, and green (so it always has length 3).
    """
def show_image(image, axis=plt):
plt.imshow(image, **kwargs)
if not (isinstance(images, list) or isinstance(images, tuple)):
images = [images]
images = [image.astype(np.uint8) for image in images]
nrows = math.ceil(len(images) / ncols)
ncols = min(len(images), ncols)
plt.figure(figsize=figsize)
for i, image in enumerate(images):
axis = plt.subplot2grid(
(nrows, ncols),
(i // ncols, i % ncols),
)
axis.tick_params(bottom='off', left='off', top='off', right='off',
labelleft='off', labelbottom='off')
axis.grid(False)
show_image(image, axis)
# Show image1 here:
...
Explanation: Question 9
Images in numpy are simply arrays, but we can also display them as
actual images in this notebook.
Use the provided show_images function to display image1. You may call it
like show_images(image1). If you later have multiple images to display, you
can call show_images([image1, image2]) to display them all at once.
The resulting image should look almost completely black. Why do you suppose
that is?
End of explanation
# This array is provided for your convenience.
transformed = np.array([12, 37, 65, 89, 114, 137, 162, 187, 214, 240, 250])
def expand_image_range(image):
...
expanded1 = ...
show_images(expanded1)
_ = ok.grade('q10')
_ = ok.backup()
Explanation: Question 10
If you look at the data, you'll notice all the numbers lie between 0 and 10.
In NumPy, a color intensity is an integer ranging from 0 to 255, where 0 is
no color (black). That's why the image is almost black. To see the image,
we'll need to rescale the numbers in the data to have a larger range.
Define a function expand_image_range that takes in an image. It returns a
new copy of the image with the following transformation:
old value | new value
========= | =========
0 | 12
1 | 37
2 | 65
3 | 89
4 | 114
5 | 137
6 | 162
7 | 187
8 | 214
9 | 240
10 | 250
This expands the color range of the image. For example, a pixel that previously
had the value [5 5 5] (almost-black) will now have the value [137 137 137]
(gray).
Set expanded1 to the expanded image1, then display it with show_images.
This page
from the numpy docs has some useful information that will allow you
to use indexing instead of for loops.
However, the slickest implementation uses one very short line of code.
Hint: If you index an array with another array or list as in question 5, your
array (or list) of indices can contain repeats, as in array1[[0, 1, 0]].
Investigate what happens in that case.
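Following that hint, one very short sketch that uses the provided transformed array with fancy indexing (each old value 0 to 10 is used as an index into its new value):
```
def expand_image_range(image):
    return transformed[image]

expanded1 = expand_image_range(image1)
```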
End of explanation
def reveal_file(filename):
...
filenames = ['1.txt', '2.txt', '3.txt', '4.txt', '5.txt',
'6.txt', '7.txt', '8.txt', '9.txt', '10.txt']
expanded_images = ...
show_images(expanded_images, ncols=5)
Explanation: Question 11
Eureka! You've managed to reveal the image that the text file represents.
Now, define a function called reveal_file that takes in a filename
and returns an expanded image. This should be relatively easy since you've
defined functions for each step in the process.
Then, set expanded_images to a list of all the revealed images. There are
10 images to reveal (including the one you just revealed).
Finally, use show_images to display the expanded_images.
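For reference, a sketch that simply chains the helpers from the previous questions:
```
def reveal_file(filename):
    return expand_image_range(lines_to_image(read_file_lines(filename)))

expanded_images = [reveal_file(f) for f in filenames]
```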
End of explanation
def proportion_by_channel(image):
...
image_proportions = ...
image_proportions
_ = ok.grade('q12')
_ = ok.backup()
Explanation: Notice that 5 of the above images are of summer scenes; the other 5
are of winter.
Think about how you'd distinguish between pictures of summer and winter. What
qualities of the image seem to signal to your brain that the image is one of
summer? Of winter?
One trait that seems specific to summer pictures is that the colors are warmer.
Let's see if the proportion of pixels of each color in the image can let us
distinguish between summer and winter pictures.
Question 12
To simplify things, we can categorize each pixel according to its most intense
(highest-value) channel. (Remember, red, green, and blue are the 3 channels.)
For example, we could just call a [2 4 0] pixel "green." If a pixel has a
tie between several channels, let's count it as none of them.
Write a function proportion_by_channel. It takes in an image. It assigns
each pixel to its greatest-intensity channel: red, green, or blue. Then
the function returns an array of length three containing the proportion of
pixels categorized as red, the proportion categorized as green, and the
proportion categorized as blue (respectively). (Again, don't count pixels
that are tied between 2 or 3 colors as any category, but do count them
in the denominator when you're computing proportions.)
For example:
```
test_im = np.array([
[ [5, 2, 2], [2, 5, 10] ]
])
proportion_by_channel(test_im)
array([ 0.5, 0, 0.5 ])
If tied, count neither as the highest
test_im = np.array([
[ [5, 2, 5], [2, 50, 50] ]
])
proportion_by_channel(test_im)
array([ 0, 0, 0 ])
```
Then, set image_proportions to the result of proportion_by_channel called
on each image in expanded_images as a 2d array.
Hint: It's fine to use a for loop, but for a difficult challenge, try
avoiding it. (As a side benefit, your code will be much faster.) Our solution
uses the NumPy functions np.reshape, np.sort, np.argmax, and np.bincount.
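A sketch of one vectorized approach along the lines of that hint (a plain for loop over pixels would also be acceptable):
```
def proportion_by_channel(image):
    pixels = image.reshape(-1, 3)                    # one row of [r, g, b] per pixel
    winners = np.argmax(pixels, axis=1)              # index of the largest channel
    sorted_vals = np.sort(pixels, axis=1)
    tied = sorted_vals[:, -1] == sorted_vals[:, -2]  # top two channels equal => tie
    counts = np.bincount(winners[~tied], minlength=3)
    return counts / float(len(pixels))               # ties stay in the denominator

image_proportions = np.array([proportion_by_channel(im) for im in expanded_images])
```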
End of explanation
# You'll learn about Pandas and DataFrames soon.
import pandas as pd
pd.DataFrame({
'red': image_proportions[:, 0],
'green': image_proportions[:, 1],
'blue': image_proportions[:, 2]
}, index=pd.Series(['Image {}'.format(n) for n in range(1, 11)], name='image'))\
.iloc[::-1]\
.plot.barh();
Explanation: Let's plot the proportions you computed above on a bar chart:
End of explanation
def summer_or_winter(image):
...
_ = ok.grade('q13')
_ = ok.backup()
Explanation: Question 13
What do you notice about the colors present in the summer images compared to
the winter ones?
Use this info to write a function summer_or_winter. It takes in an image and
returns True if the image is a summer image and False if the image is a
winter image.
Do not hard-code the function to the 10 images you currently have (e.g.,
if image1, return False). We will run your function on other images
that we've reserved for testing.
You must classify all of the 10 provided images correctly to pass the test
for this function.
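One plausible heuristic sketch; the rule below is an assumption rather than the official answer, and it should be checked (and tuned) against the bar chart of image_proportions:
```
def summer_or_winter(image):
    red, green, blue = proportion_by_channel(image)
    # Guess: warm (summer) images have more red-dominated than blue-dominated pixels.
    return red > blue
```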
End of explanation
import skimage as sk
import skimage.io as skio
def read_image(filename):
'''Reads in an image from a filename'''
return skio.imread(filename)
def compress_image(im):
'''Takes an image as an array and compresses it to look black.'''
res = im / 25
return res.astype(np.uint8)
def to_text_file(im, filename):
'''
Takes in an image array and a filename for the resulting text file.
Creates the encoded text file for later decoding.
'''
h, w, c = im.shape
to_rgb = ' '.join
to_row = '\n'.join
to_lines = '\n'.join
rgb = [[to_rgb(triplet) for triplet in row] for row in im.astype(str)]
lines = to_lines([to_row(row) for row in rgb])
with open(filename, 'w') as f:
f.write('{} {}\n'.format(h, w))
f.write(lines)
f.write('\n')
summers = skio.imread_collection('orig/summer/*.jpg')
winters = skio.imread_collection('orig/winter/*.jpg')
len(summers)
sum_nums = np.array([ 5, 6, 9, 3, 2, 11, 12])
win_nums = np.array([ 10, 7, 8, 1, 4, 13, 14])
for im, n in zip(summers, sum_nums):
to_text_file(compress_image(im), '{}.txt'.format(n))
for im, n in zip(winters, win_nums):
to_text_file(compress_image(im), '{}.txt'.format(n))
Explanation: Congrats! You've created your very first classifier for this class.
Question 14
How do you think your classification function will perform
in general?
Why do you think it will perform that way?
What do you think would most likely give you false positives?
False negatives?
Write your answer here, replacing this text.
Final note: While our approach here is simplistic, skin color segmentation
-- figuring out which parts of the image belong to a human body -- is a
key step in many algorithms such as face detection.
Optional: Our code to encode images
Here are the functions we used to generate the text files for this assignment.
Feel free to send not-so-secret messages to your friends if you'd like.
End of explanation
_ = ok.grade_all()
Explanation: 5. Submitting this assignment
First, run this cell to run all the autograder tests at once so you can double-
check your work.
End of explanation
# Now, we'll submit to okpy
_ = ok.submit()
Explanation: Now, run this code in your terminal to make a
git commit
that saves a snapshot of your changes in git. The last line of the cell
runs git push, which will send your work to your personal Github repo.
```
Tell git to commit all the changes so far
git add -A
Tell git to make the commit
git commit -m "hw1 finished"
Send your updates to your personal private repo
git push origin master
```
Finally, we'll submit the assignment to OkPy so that the staff will know to
grade it. You can submit as many times as you want and you can choose which
submission you want us to grade by going to https://okpy.org/cal/data100/sp17/.
End of explanation |
1 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Accessing C Struct Data
This notebook illustrates the use of @cfunc to connect to data defined in C.
Via CFFI
Numba can map simple C structure types (i.e. with scalar members only) into NumPy structured dtypes.
Let's start with the following C declarations
Step2: We can create my_struct data by doing
Step3: Using numba.cffi_support.map_type we can convert the cffi type into a Numba Record type.
Step4: The function type can be mapped in a signature
Step5: and @cfunc can take that signature directly
Step6: Testing the cfunc via the .ctypes callable
Step7: Manually creating a Numba Record type
Sometimes it is useful to create a numba.types.Record type directly. The easiest way is to use the Record.make_c_struct() method. Using this method, the field offsets are calculated from the natural size and alignment of prior fields.
In the example below, we will manually create the my_struct structure from above.
Step8: Here's another example to demonstrate the offset calculation
Step9: Notice how the byte at pad0 and pad1 moves the offset of f2 and d3.
A function signature can also be created manually | Python Code:
from cffi import FFI
src = """
/* Define the C struct */
typedef struct my_struct {
    int    i1;
    float  f2;
    double d3;
    float  af4[7];
} my_struct;

/* Define a callback function */
typedef double (*my_func)(my_struct*, size_t);
"""
ffi = FFI()
ffi.cdef(src)
Explanation: Accessing C Struct Data
This notebook illustrates the use of @cfunc to connect to data defined in C.
Via CFFI
Numba can map simple C structure types (i.e. with scalar members only) into NumPy structured dtypes.
Let's start with the following C declarations:
End of explanation
# Make an array of 3 my_struct
mydata = ffi.new('my_struct[3]')
ptr = ffi.cast('my_struct*', mydata)
for i in range(3):
ptr[i].i1 = 123 + i
ptr[i].f2 = 231 + i
ptr[i].d3 = 321 + i
for j in range(7):
ptr[i].af4[j] = i * 10 + j
Explanation: We can create my_struct data by doing:
End of explanation
from numba import cffi_support
cffi_support.map_type(ffi.typeof('my_struct'), use_record_dtype=True)
Explanation: Using numba.cffi_support.map_type we can convert the cffi type into a Numba Record type.
End of explanation
sig = cffi_support.map_type(ffi.typeof('my_func'), use_record_dtype=True)
sig
Explanation: The function type can be mapped in a signature:
End of explanation
from numba import cfunc, carray
@cfunc(sig)
def foo(ptr, n):
base = carray(ptr, n) # view pointer as an array of my_struct
tmp = 0
for i in range(n):
tmp += base[i].i1 * base[i].f2 / base[i].d3 + base[i].af4.sum()
return tmp
Explanation: and @cfunc can take that signature directly:
End of explanation
addr = int(ffi.cast('size_t', ptr))
print("address of data:", hex(addr))
result = foo.ctypes(addr, 3)
result
Explanation: Testing the cfunc via the .ctypes callable:
End of explanation
from numba import types
my_struct = types.Record.make_c_struct([
# Provides a sequence of 2-tuples i.e. (name:str, type:Type)
('i1', types.int32),
('f2', types.float32),
('d3', types.float64),
('af4', types.NestedArray(dtype=types.float32, shape=(7,)))
])
my_struct
Explanation: Manually creating a Numba Record type
Sometimes it is useful to create a numba.types.Record type directly. The easiest way is to use the Record.make_c_struct() method. Using this method, the field offsets are calculated from the natural size and alignment of prior fields.
In the example below, we will manually create the my_struct structure from above.
End of explanation
padded = types.Record.make_c_struct([
('i1', types.int32),
('pad0', types.int8), # padding bytes to move the offsets
('f2', types.float32),
('pad1', types.int8), # padding bytes to move the offsets
('d3', types.float64),
])
padded
Explanation: Here's another example to demonstrate the offset calculation:
End of explanation
new_sig = types.float64(types.CPointer(my_struct), types.uintp)
print('signature:', new_sig)
# Our new signature matches the previous auto-generated one.
print('signature matches:', new_sig == sig)
Explanation: Notice how the bytes at pad0 and pad1 move the offsets of f2 and d3.
A function signature can also be created manually:
End of explanation |
2 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas and Friends
Austin Godber
Mail
Step1: Background - NumPy - Arrays
Step2: Background - NumPy - Arrays
Arrays have NumPy specific types, dtypes, and can be operated on.
Step3: Now, on to Pandas
Pandas
Tabular, Timeseries, Matrix Data - labeled or not
Sensible handling of missing data and data alignment
Data selection, slicing and reshaping features
Robust data import utilities.
Advanced time series capabilities
Data Structures
Series - 1D labeled array
DataFrame - 2D labeled array
Panel - 3D labeled array (More D)
Assumed Imports
In my code samples, assume I import the following
Step4: Series
one-dimensional labeled array
holds any data type
axis labels known as index
implicit integer indexes
dict-like
Create a Simple Series
Step5: Series Operations
Step6: Series Operations - Cont.
Step7: Series Index
Step8: Date Convenience Functions
A quick aside ...
Step9: Datestamps as Index
Step10: Selecting By Index
Note that the integer index is retained along with the new date index.
Step11: Selecting by value
Step12: Selecting by Label (Date)
Step13: Series Wrapup
Things not covered but you should look into
Step14: DataFrame - Index/Column Names
Step15: DataFrame - Operations
Step16: See? You never need Excel again!
DataFrame - Column Access
Deleting a column.
Step17: DataFrame
Remember this, data2, for the next examples.
Step18: DataFrame - Column Access
As a dict
Step19: DataFrame - Column Access
As an attribute
Step20: DataFrame - Row Access
By row label
Step21: DataFrame - Row Access
By integer location
Step22: DataFrame - Cell Access
Access column, then row or use iloc and row/column indexes.
Step23: DataFrame - Taking a Peek
Look at the beginning of the DataFrame
Step24: DataFrame - Taking a Peek
Look at the end of the DataFrame.
Step25: DataFrame Wrap Up
Just remember,
A DataFrame is just a bunch of Series grouped together.
Any one dimensional slice returns a Series
Any two dimensional slice returns another DataFrame.
Elements are typically NumPy types or Objects.
Panel
Like DataFrame but 3 or more dimensions.
IO Tools
Robust IO tools to read in data from a variety of sources
CSV - pd.read_csv()
Clipboard - pd.read_clipboard()
SQL - pd.read_sql_table()
Excel - pd.read_excel()
Plotting
Matplotlib - s.plot() - Standard Python Plotting Library
Trellis - rplot() - An 'R' inspired Matplotlib based plotting tool
Bringing it Together - Data
The csv file (phx-temps.csv) contains Phoenix weather data from
GSOD
Step26: Bringing it Together - Code
Advanced read_csv(), parsing the dates and using them as the index, and naming the columns.
Step27: Bringing it Together - Plot
Step28: Boo, Pandas and Friends would cry if they saw such a plot.
Bringing it Together - Plot
Lets see a smaller slice of time
Step29: Bringing it Together - Plot
Lets operate on the DataFrame ... lets take the differnce between the highs and lows. | Python Code:
import numpy as np
# np.zeros, np.ones
data0 = np.zeros((2, 4))
data0
# Make an array with 20 entries 0..19
data1 = np.arange(20)
# print the first 8
data1[0:8]
Explanation: Pandas and Friends
Austin Godber
Mail: [email protected]
Twitter: @godber
Presented at DesertPy, Jan 2015.
What does it do?
Pandas is a Python data analysis tool built on top of NumPy that provides a
suite of data structures and data manipulation functions to work on those data
structures. It is particularly well suited for working with time series data.
Getting Started - Installation
Installing with pip or apt-get::
```
pip install pandas
or
sudo apt-get install python-pandas
```
Mac - Homebrew or MacPorts to get the dependencies, then pip
Windows - Python(x,y)?
Commercial Pythons: Anaconda, Canopy
Getting Started - Dependencies
Dependencies, required, recommended and optional
```
Required
numpy, python-dateutil, pytx
Recommended
numexpr, bottleneck
Optional
cython, scipy, pytables, matplotlib, statsmodels, openpyxl
```
Pandas' Friends!
Pandas works along side and is built on top of several other Python projects.
IPython
Numpy
Matplotlib
Pandas gets along with EVERYONE!
<img src='panda-on-a-unicorn.jpg'>
Background - IPython
IPython is a fancy python console. Try running ipython or ipython --pylab on your command line. Some IPython tips
```python
Special commands, 'magic functions', begin with %
%quickref, %who, %run, %reset
Shell Commands
ls, cd, pwd, mkdir
Need Help?
help(), help(obj), obj?, function?
Tab completion of variables, attributes and methods
```
Background - IPython Notebook
There is a web interface to IPython, known as the IPython notebook, start it
like this
```
ipython notebook
or to get all of the pylab components
ipython notebook --pylab
```
IPython - Follow Along
Follow along by connecting to TMPNB.ORG!
http://tmpnb.org
Background - NumPy
NumPy is the foundation for Pandas
Numerical data structures (mostly Arrays)
Operations on those.
Less structure than Pandas provides.
Background - NumPy - Arrays
End of explanation
# make it a 4,5 array
data = np.arange(20).reshape(4, 5)
data
Explanation: Background - NumPy - Arrays
End of explanation
print("dtype: ", data.dtype)
result = data * 20.5
print(result)
Explanation: Background - NumPy - Arrays
Arrays have NumPy specific types, dtypes, and can be operated on.
End of explanation
import pandas as pd
import numpy as np
Explanation: Now, on to Pandas
Pandas
Tabular, Timeseries, Matrix Data - labeled or not
Sensible handling of missing data and data alignment
Data selection, slicing and reshaping features
Robust data import utilities.
Advanced time series capabilities
Data Structures
Series - 1D labeled array
DataFrame - 2D labeled array
Panel - 3D labeled array (More D)
Assumed Imports
In my code samples, assume I import the following
End of explanation
s1 = pd.Series([1, 2, 3, 4, 5])
s1
Explanation: Series
one-dimensional labeled array
holds any data type
axis labels known as index
implicit integer indexes
dict-like
Create a Simple Series
End of explanation
# integer multiplication
print(s1 * 5)
Explanation: Series Operations
End of explanation
# float multiplication
print(s1 * 5.0)
Explanation: Series Operations - Cont.
End of explanation
s2 = pd.Series([1, 2, 3, 4, 5],
index=['a', 'b', 'c', 'd', 'e'])
s2
Explanation: Series Index
End of explanation
dates = pd.date_range('20130626', periods=5)
print(dates)
print()
print(dates[0])
Explanation: Date Convenience Functions
A quick aside ...
End of explanation
s3 = pd.Series([1, 2, 3, 4, 5], index=dates)
print(s3)
Explanation: Datestamps as Index
End of explanation
print(s3[0])
print(type(s3[0]))
print()
print(s3[1:3])
print(type(s3[1:3]))
Explanation: Selecting By Index
Note that the integer index is retained along with the new date index.
End of explanation
s3[s3 < 3]
Explanation: Selecting by value
End of explanation
s3['20130626':'20130628']
Explanation: Selecting by Label (Date)
End of explanation
data1 = pd.DataFrame(np.random.rand(4, 4))
data1
Explanation: Series Wrapup
Things not covered but you should look into:
Other instantiation options: dict
Operator Handling of missing data NaN
Reforming Data and Indexes
Boolean Indexing
Other Series Attributes:
index - index.name
name - Series name
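For instance, a quick unofficial illustration of the first two items above (dict instantiation and NaN from data alignment):
```
s4 = pd.Series({'a': 1.0, 'b': 2.0, 'c': 3.0})
s5 = pd.Series({'b': 10.0, 'c': 20.0, 'd': 30.0})
s4 + s5   # indexes are aligned; 'a' and 'd' have no match, so they become NaN
```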
DataFrame
2-dimensional labeled data structure
Like a SQL Table, Spreadsheet or dict of Series objects.
Columns of potentially different types
Operations, slicing and other behavior just like Series
DataFrame - Simple
End of explanation
dates = pd.date_range('20130626', periods=4)
data2 = pd.DataFrame(
np.random.rand(4, 4),
index=dates, columns=list('ABCD'))
data2
Explanation: DataFrame - Index/Column Names
End of explanation
data2['E'] = data2['B'] + 5 * data2['C']
data2
Explanation: DataFrame - Operations
End of explanation
# Deleting a Column
del data2['E']
data2
Explanation: See? You never need Excel again!
DataFrame - Column Access
Deleting a column.
End of explanation
data2
Explanation: DataFrame
Remember this, data2, for the next examples.
End of explanation
data2['B']
Explanation: DataFrame - Column Access
As a dict
End of explanation
data2.B
Explanation: DataFrame - Column Access
As an attribute
End of explanation
data2.loc['20130627']
Explanation: DataFrame - Row Access
By row label
End of explanation
data2.iloc[1]
Explanation: DataFrame - Row Access
By integer location
End of explanation
print(data2.B[0])
print(data2['B'][0])
print(data2.iloc[0,1]) # [row,column]
Explanation: DataFrame - Cell Access
Access column, then row or use iloc and row/column indexes.
End of explanation
data3 = pd.DataFrame(np.random.rand(100, 4))
data3.head()
Explanation: DataFrame - Taking a Peek
Look at the beginning of the DataFrame
End of explanation
data3.tail()
Explanation: DataFrame - Taking a Peek
Look at the end of the DataFrame.
End of explanation
# simple readcsv
phxtemps1 = pd.read_csv('phx-temps.csv')
phxtemps1.head()
Explanation: DataFrame Wrap Up
Just remember,
A DataFrame is just a bunch of Series grouped together.
Any one dimensional slice returns a Series
Any two dimensional slice returns another DataFrame.
Elements are typically NumPy types or Objects.
Panel
Like DataFrame but 3 or more dimensions.
IO Tools
Robust IO tools to read in data from a variety of sources
CSV - pd.read_csv()
Clipboard - pd.read_clipboard()
SQL - pd.read_sql_table()
Excel - pd.read_excel()
Plotting
Matplotlib - s.plot() - Standard Python Plotting Library
Trellis - rplot() - An 'R' inspired Matplotlib based plotting tool
Bringing it Together - Data
The csv file (phx-temps.csv) contains Phoenix weather data from
GSOD::
1973-01-01 00:00:00,53.1,37.9
1973-01-02 00:00:00,57.9,37.0
...
2012-12-30 00:00:00,64.9,39.0
2012-12-31 00:00:00,55.9,41.0
Bringing it Together - Code
Simple read_csv()
End of explanation
# define index, parse dates, name columns
phxtemps2 = pd.read_csv(
'phx-temps.csv', index_col=0,
names=['highs', 'lows'], parse_dates=True)
phxtemps2.head()
Explanation: Bringing it Together - Code
Advanced read_csv(), parsing the dates and using them as the index, and naming the columns.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
phxtemps2.plot() # pandas convenience method
Explanation: Bringing it Together - Plot
End of explanation
phxtemps2['20120101':'20121231'].plot()
Explanation: Boo, Pandas and Friends would cry if they saw such a plot.
Bringing it Together - Plot
Let's see a smaller slice of time:
End of explanation
phxtemps2['diff'] = phxtemps2.highs - phxtemps2.lows
phxtemps2['20120101':'20121231'].plot()
Explanation: Bringing it Together - Plot
Let's operate on the DataFrame ... let's take the difference between the highs and lows.
End of explanation |
3 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation. | Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
X_train.shape
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
Explanation: k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages:
During training, the classifier takes the training data and simply remembers it
During testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples
The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
End of explanation
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
Explanation: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
First we must compute the distances between all test examples and all train examples.
Given these distances, for each test example we find the k nearest examples and have them vote for the label
Lets begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
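As an unofficial illustration of the idea (the real implementation belongs in the KNearestNeighbor class inside k_nearest_neighbor.py), a standalone two-loop sketch assuming Euclidean (L2) distance might look like this; the helper name here is hypothetical and not part of the assignment API:
```
def l2_distances_two_loops(test, train):
    # dists[i, j] = L2 distance between the i-th test point and j-th training point.
    dists = np.zeros((test.shape[0], train.shape[0]))
    for i in range(test.shape[0]):
        for j in range(train.shape[0]):
            dists[i, j] = np.sqrt(np.sum((test[i] - train[j]) ** 2))
    return dists
```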
End of explanation
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
What in the data is the cause behind the distinctly bright rows?
What causes the columns?
Your Answer: fill this in.
End of explanation
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5:
End of explanation
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Let's compare how fast the implementations are
def time_function(f, *args):
    """Call a function f with args and return the time (in seconds) that it took to execute."""
    import time
    tic = time.time()
    f(*args)
    toc = time.time()
    return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
Explanation: You should expect to see a slightly better performance than with k = 1.
End of explanation
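A sketch of the fully vectorized distance computation: expanding (x - y)^2 = x^2 - 2xy + y^2 lets the whole Nte x Ntr matrix come from a single matrix product plus broadcasting:
def compute_distances_no_loops(self, X):
    test_sq = np.sum(X ** 2, axis=1).reshape(-1, 1)   # (Nte, 1)
    train_sq = np.sum(self.X_train ** 2, axis=1)      # (Ntr,)
    cross = X.dot(self.X_train.T)                     # (Nte, Ntr)
    return np.sqrt(np.maximum(test_sq - 2 * cross + train_sq, 0))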
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for kc in k_choices:
k_to_accuracies[kc] = []
for val_idx in range(num_folds):
XX = np.concatenate(X_train_folds[0:val_idx] + X_train_folds[val_idx+1: num_folds])
yy = np.concatenate(y_train_folds[0:val_idx] + y_train_folds[val_idx+1: num_folds])
classifier = KNearestNeighbor()
classifier.train(XX, yy)
        # predict labels on the held-out fold using the current value of k
y_test_pred = classifier.predict(X_train_folds[val_idx], k=kc)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_train_folds[val_idx])
accuracy = float(num_correct) / y_train_folds[val_idx].shape[0]
k_to_accuracies[kc].append(accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
End of explanation |
4 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing forward propagation and backpropagation
The backpropagation algorithm
Previously, when computing the predictions of a neural network, we used forward propagation: starting at the first layer we compute the activations layer by layer until we reach $h_{\theta}\left(x\right)$ at the output layer.
Now, to compute the partial derivatives of the cost function, $\frac{\partial}{\partial\Theta^{(l)}_{ij}}J\left(\Theta\right)$, we need the backpropagation algorithm: we first compute the error of the output layer and then propagate it backwards, layer by layer, down to the second layer.
Visualising the data
Using last week's data, we first run forward propagation to compute the network's outputs, which provide the predictions needed for backpropagation.
Step1: Model representation
By default we design a network with one input layer, one hidden layer and one output layer.
Forward propagation and the cost function
In logistic regression we have a single output variable (a scalar) and a single dependent variable $y$, but a neural network can have many output variables: $h_\theta(x)$ is a vector of dimension $K$, and the dependent variable in the training set is a vector of the same dimension. The cost function is therefore somewhat more complex than for logistic regression: $\newcommand{\subk}[1]{ #1_k }$ $$h_\theta\left(x\right)\in \mathbb{R}^{K}$$ $${\left({h_\theta}\left(x\right)\right)}_{i}={i}^{th} \text{output}$$
$J(\Theta) = -\frac{1}{m} \left[ \sum\limits_{i=1}^{m} \sum\limits_{k=1}^{k} {y_k}^{(i)} \log \subk{(h_\Theta(x^{(i)}))} + \left( 1 - y_k^{(i)} \right) \log \left( 1- \subk{\left( h_\Theta \left( x^{(i)} \right) \right)} \right) \right] + \frac{\lambda}{2m} \sum\limits_{l=1}^{L-1} \sum\limits_{i=1}^{s_l} \sum\limits_{j=1}^{s_{l+1}} \left( \Theta_{ji}^{(l)} \right)^2$
Step2: Backpropagation
In this part you implement the backpropagation algorithm to compute the gradient of the neural network's cost function. Once we have the gradient we can use an optimisation library to minimise the cost function.
Step3: Initialising the parameters
So far we have always initialised the parameters to 0. That works for logistic regression, but not for a neural network: if every initial parameter is 0, every activation unit in the second layer takes the same value, and the same problem occurs if we initialise all parameters to any single non-zero number.
We therefore initialise each parameter to a random value between -ε and +ε. For example, to randomly initialise a 10×11 parameter matrix:
Theta1 = rand(10, 11) * (2 * eps) - eps
Step4: Backpropagation
The backpropagation procedure is: given the training set, first run forward propagation; then, for every node in each layer, compute an error term that measures how much that node "contributed" to the error of the final output. For each output node we can compute the difference between the output value and the target value directly, and define it as δ. For each hidden node, the error is computed from the existing weights and the errors of layer (l+1).
Steps:
Randomly initialise the weights theta
Implement forward propagation so that h(xi) can be computed for any xi
Implement the cost function Jθ
Step5: Gradient checking
To check the gradient we pick two points on the cost function that lie very close to a particular $\theta$ and use the slope of the line between them as an estimate of the gradient. That is, for a given $\theta$ we evaluate the cost at $\theta-\varepsilon$ and at $\theta+\varepsilon$ ($\varepsilon$ is a very small value, typically 0.001) and use the two cost values to estimate the gradient at $\theta$. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
from scipy.io import loadmat
from sklearn.preprocessing import OneHotEncoder
data = loadmat('../data/andrew_ml_ex33507/ex3data1.mat')
data
X = data['X']
y = data['y']
X.shape, y.shape  # check the dimensions
# The inputs are the pixel values of 20x20 images, so there are 400 input units (not counting the extra bias term). The material provides pre-trained network parameters: 25 hidden units and 10 output units (one per class).
weight = loadmat("../data/andrew_ml_ex33507/ex3weights.mat")
theta1, theta2 = weight['Theta1'], weight['Theta2']
theta1.shape, theta2.shape
sample_idx = np.random.choice(np.arange(data['X'].shape[0]), 100)
sample_images = data['X'][sample_idx, :]
# display the sampled digits as binary images
fig, ax_array = plt.subplots(nrows=5, ncols=5, sharey=True, sharex=True, figsize=(8, 8))
for r in range(5):
for c in range(5):
ax_array[r, c].matshow(np.array(sample_images[5 * r + c].reshape((20, 20))).T,cmap=matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
Explanation: Implementing forward propagation and backpropagation
The backpropagation algorithm
Previously, when computing the predictions of a neural network, we used forward propagation: starting at the first layer we compute the activations layer by layer until we reach $h_{\theta}\left(x\right)$ at the output layer.
Now, to compute the partial derivatives of the cost function, $\frac{\partial}{\partial\Theta^{(l)}_{ij}}J\left(\Theta\right)$, we need the backpropagation algorithm: we first compute the error of the output layer and then propagate it backwards, layer by layer, down to the second layer.
Visualising the data
Using last week's data, we first run forward propagation to compute the network's outputs, which provide the predictions needed for backpropagation.
End of explanation
def sigmoid(z):
return 1 / (1 + np.exp(-z))
# Following the propagation rule above: define the first layer, compute the values of the second (hidden) layer, and add the bias unit at each step.
def forward_propagate(X,theta1,theta2):
m= X.shape[0]
a1 = np.insert(X,0, values=np.ones(m), axis=1)
Z2 = a1*theta1.T
a2= np.insert(sigmoid(Z2),0, values=np.ones(m), axis=1)
Z3= a2*theta2.T
h= sigmoid(Z3)
return a1,Z2,a2,Z3,h
# Cost function without the regularisation (weight-decay) term; Y is a 5000x10 matrix, so we use matrix operations instead of accumulating with loops.
def cost(X,Y,theta1,theta2):
m = X.shape[0]
X = np.matrix(X)
Y = np.matrix(Y)
    _, _, _, _, h = forward_propagate(X, theta1, theta2)  # keep only the final hypothesis h
    # np.multiply: element-wise product of matrices of the same size
    first = np.multiply(Y, np.log(h))
second = np.multiply((1-Y),np.log((1-h)))
J= np.sum(first+second)
J = (-1/m)*J
return J
# One-hot encode the y labels. y starts as a 5000x1 vector of class labels; after encoding, e.g. an original y=2 becomes the row [0,1,0,...,0] and y=10 becomes [0,0,...,0,1].
# scikit-learn has a built-in encoder we can use for this.
encoder = OneHotEncoder(sparse=False)
y_onehot = encoder.fit_transform(y)
y_onehot.shape
y[0], y_onehot[0,:]  # y[0] is the digit 0
# initial settings
input_size = 400
num_labels = 10
cost(X, y_onehot,theta1, theta2)
# now add the regularisation term
def cost_reg(X,Y,theta1,theta2,learning_rate):
m = X.shape[0]
X = np.matrix(X)
Y = np.matrix(Y)
_,_,_,_,h=forward_propagate(X,theta1,theta2)
first = np.multiply(Y,np.log(h))
second = np.multiply((1-Y),np.log((1-h)))
J= np.sum(first+second)
    # the bias (first) column is excluded when computing the regularisation term
J = (-1/m)*J + (float(learning_rate) / (2 * m))*(np.sum(np.power(theta1[:,1:],2))+np.sum(np.power(theta2[:,1:],2)))
return J
# theta1.shape,theta2.shape
cost_reg(X, y_onehot,theta1, theta2,1)
Explanation: Model representation
By default we design a network with one input layer, one hidden layer and one output layer.
Forward propagation and the cost function
In logistic regression we have a single output variable (a scalar) and a single dependent variable $y$, but a neural network can have many output variables: $h_\theta(x)$ is a vector of dimension $K$, and the dependent variable in the training set is a vector of the same dimension. The cost function is therefore somewhat more complex than for logistic regression: $\newcommand{\subk}[1]{ #1_k }$ $$h_\theta\left(x\right)\in \mathbb{R}^{K}$$ $${\left({h_\theta}\left(x\right)\right)}_{i}={i}^{th} \text{output}$$
$J(\Theta) = -\frac{1}{m} \left[ \sum\limits_{i=1}^{m} \sum\limits_{k=1}^{k} {y_k}^{(i)} \log \subk{(h_\Theta(x^{(i)}))} + \left( 1 - y_k^{(i)} \right) \log \left( 1- \subk{\left( h_\Theta \left( x^{(i)} \right) \right)} \right) \right] + \frac{\lambda}{2m} \sum\limits_{l=1}^{L-1} \sum\limits_{i=1}^{s_l} \sum\limits_{j=1}^{s_{l+1}} \left( \Theta_{ji}^{(l)} \right)^2$
End of explanation
# compute the derivative of the sigmoid function
def sigmoid_gradient(z):
return np.multiply(sigmoid(z) ,(1-sigmoid(z)))
# sanity check: sigmoid_gradient(0) should equal 0.25
sigmoid_gradient(0)
Explanation: Backpropagation
In this part you implement the backpropagation algorithm to compute the gradient of the neural network's cost function. Once we have the gradient we can use an optimisation library to minimise the cost function.
End of explanation
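As a quick sanity check (not part of the original exercise), the analytic derivative can be compared against a finite-difference estimate:
z_check = np.linspace(-5, 5, 11)
eps_check = 1e-4
numeric = (sigmoid(z_check + eps_check) - sigmoid(z_check - eps_check)) / (2 * eps_check)
print(np.max(np.abs(numeric - sigmoid_gradient(z_check))))  # should be close to 0 (~1e-8)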
# initial settings
input_size = 400  # number of input units
hidden_size = 25  # number of hidden units
num_labels = 10  # number of output units
epsilon = 0.001
theta01 = np.random.rand(hidden_size, input_size+1) * 2*epsilon - epsilon  # the +1 adds the bias unit
theta02 =np.random.rand(num_labels,hidden_size+1)* 2*epsilon - epsilon
theta01.shape,theta02.shape
Explanation: Initialising the parameters
So far we have always initialised the parameters to 0. That works for logistic regression, but not for a neural network: if every initial parameter is 0, every activation unit in the second layer takes the same value, and the same problem occurs if we initialise all parameters to any single non-zero number.
We therefore initialise each parameter to a random value between -ε and +ε. For example, to randomly initialise a 10×11 parameter matrix:
Theta1 = rand(10, 11) * (2 * eps) - eps
End of explanation
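The same initialisation can be wrapped in a small helper (a sketch; the cell below does the equivalent inline):
def rand_init(l_out, l_in, eps=0.001):
    # weights for a layer with l_in inputs (+1 bias column) and l_out outputs, drawn from [-eps, eps]
    return np.random.rand(l_out, l_in + 1) * 2 * eps - eps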
# compute z and a for every layer
def forward_propagateNEW(X,thetalist):
m= X.shape[0]
a = np.insert(X,0, values=np.ones(m), axis=1)
alist=[a]
zlist=[]
for i in range(len(thetalist)):
theta= thetalist[i]
z = a * theta
# a= np.insert(sigmoid(z),0, values=np.ones(m), axis=1)
a=sigmoid(z)
if(i<len(thetalist)-1):
a= np.insert(a,0, values=np.ones(m), axis=1)
zlist.append(z)
alist.append(a)
return zlist,alist
# the Δ accumulators are stored in the list Delta (one entry per theta)
def backpropRegSelf(input_size, hidden_size, num_labels, X, y, learning_rate, L=3):  # L = 3 layers in this network
m = X.shape[0]
X = np.matrix(X)
y = np.matrix(y)
    # randomly initialise the parameters
theta1 = (np.random.random((input_size+1,hidden_size))- 0.5)* 0.24
theta2 = (np.random.random((hidden_size+1,num_labels))- 0.5)* 0.24
encoder = OneHotEncoder(sparse=False)
    y_onehot = encoder.fit_transform(y)  # one-hot encode y
    # forward pass: compute the values of every layer
theta = [theta1, theta2]
    zlist, alist = forward_propagateNEW(X, theta)  # returns z2, z3, ... and a1, a2, ...
    # initialise the Delta accumulators
Delta=[]
for th in theta:
Delta.append(np.zeros(th.shape))
for i in range(m):
        # a and z for every layer were computed above
        for l in range(L, 1, -1):  # l = 3, 2 (layer numbers); the output layer delta is handled separately
            # output layer
            if l == L:
                delta = alist[-1][i, :] - y_onehot[i, :]  # δ of the output layer
Delta[l-2] = Delta[l-2] + alist[l-2][i,:].T * delta
else:
zl = zlist[l-2][i,:]
                zl = np.insert(zl, 0, values=np.ones(1))  # (1, 26) add the bias term
# d2t = np.multiply((theta2.T * d3t.T).T, sigmoid_gradient(z2t)) # (1, 26)
# delta1 = delta1 + (d2t[:,1:]).T * a1t
delta = np.multiply(delta*theta[l-1].T, sigmoid_gradient(zl)) #
                # arrays are zero-indexed; Delta covers layers 1..L-1 while delta starts at layer 2  # (25, 401), (10, 26)
Delta[l-2] = Delta[l-2] + alist[l-2][i,:].T * delta[:,1:]
# add the gradient regularization term
gradAll = None
for j in range(len(Delta)):
Delta[j][:,1:] = Delta[j][:,1:]/m + (theta[j][:,1:] * learning_rate) / m
if gradAll is None:
gradAll = np.ravel(Delta[j])
else:
tmp=np.ravel(Delta[j])
gradAll = np.concatenate([gradAll,tmp])
# Delta[:,:,1:] = Delta[:,:,1:] + (theta[:,:,1:] * learning_rate) / m
return gradAll
grad2= backpropRegSelf(input_size, hidden_size, num_labels, X, y, 1)
print(grad2.shape)
def backpropReg(params, input_size, hidden_size, num_labels, X, y, learning_rate):
m = X.shape[0]
X = np.matrix(X)
y = np.matrix(y)
# reshape the parameter array into parameter matrices for each layer
theta1 = np.matrix(np.reshape(params[:hidden_size * (input_size + 1)], (hidden_size, (input_size + 1))))
theta2 = np.matrix(np.reshape(params[hidden_size * (input_size + 1):], (num_labels, (hidden_size + 1))))
# run the feed-forward pass
a1, z2, a2, z3, h = forward_propagate(X, theta1, theta2)
# initializations
J = 0
delta1 = np.zeros(theta1.shape) # (25, 401)
delta2 = np.zeros(theta2.shape) # (10, 26)
# compute the cost
for i in range(m):
first_term = np.multiply(-y[i,:], np.log(h[i,:]))
second_term = np.multiply((1 - y[i,:]), np.log(1 - h[i,:]))
J += np.sum(first_term - second_term)
J = J / m
# add the cost regularization term
J += (float(learning_rate) / (2 * m)) * (np.sum(np.power(theta1[:,1:], 2)) + np.sum(np.power(theta2[:,1:], 2)))
# perform backpropagation
for t in range(m):
a1t = a1[t,:] # (1, 401)
z2t = z2[t,:] # (1, 25)
a2t = a2[t,:] # (1, 26)
ht = h[t,:] # (1, 10)
yt = y[t,:] # (1, 10)
d3t = ht - yt # (1, 10)
z2t = np.insert(z2t, 0, values=np.ones(1)) # (1, 26)
d2t = np.multiply((theta2.T * d3t.T).T, sigmoid_gradient(z2t)) # (1, 26)
delta1 = delta1 + (d2t[:,1:]).T * a1t
delta2 = delta2 + d3t.T * a2t
delta1 = delta1 / m
delta2 = delta2 / m
# add the gradient regularization term
delta1[:,1:] = delta1[:,1:] + (theta1[:,1:] * learning_rate) / m
delta2[:,1:] = delta2[:,1:] + (theta2[:,1:] * learning_rate) / m
# unravel the gradient matrices into a single array
grad = np.concatenate((np.ravel(delta1), np.ravel(delta2)))
return J, grad
# np.random.random(size) 返回size大小的0-1随机浮点数
params = (np.random.random(size=hidden_size * (input_size + 1) + num_labels * (hidden_size + 1)) - 0.5) * 0.24
j,grad = backpropReg(params, input_size, hidden_size, num_labels, X, y, 1)
print(j,grad.shape)
# j2,grad2= backpropRegSelf(input_size, hidden_size, num_labels, X, y, 1)
# print(j2,grad2[0:10])
Explanation: Backpropagation
The backpropagation procedure is: given the training set, first run forward propagation; then, for every node in each layer, compute an error term that measures how much that node "contributed" to the error of the final output. For each output node we can compute the difference between the output value and the target value directly, and define it as δ. For each hidden node, the error is computed from the existing weights and the errors of layer (l+1).
Steps:
Randomly initialise the weights theta
Implement forward propagation so that h(xi) can be computed for any xi
Implement the cost function Jθ
End of explanation
# #J θ
# input_size = 400  # number of input units
# hidden_size = 25  # number of hidden units
# num_labels = 10  # number of output units
def jcost(X, y,input_size, hidden_size, output_size,theta):
m = X.shape[0]
X = np.matrix(X)
y = np.matrix(y)
theta1 = np.reshape(theta[0:hidden_size*(input_size+1)],(hidden_size,input_size+1))#(25,401)
theta2 = np.reshape(theta[hidden_size*(input_size+1):],(output_size,hidden_size+1))#(10.26)
_,_,_,_,h=forward_propagate(X,theta1,theta2)
    # np.multiply: element-wise product of matrices of the same size
first = np.multiply(y,np.log(h))
second = np.multiply((1-y),np.log((1-h)))
J= np.sum(first+second)
J = (-1/m)*J
return J
def check(X,y,theta1,theta2,eps):
theta = np.concatenate((np.ravel(theta1), np.ravel(theta2)))
gradapprox=np.zeros(len(theta))
for i in range(len(theta)):
        thetaplus = theta.copy()   # copy so theta itself is not modified in place
        thetaplus[i] = thetaplus[i] + eps
        thetaminus = theta.copy()
        thetaminus[i] = thetaminus[i] - eps
        gradapprox[i] = (jcost(X,y,input_size,hidden_size,num_labels,thetaplus) - jcost(X,y,input_size,hidden_size,num_labels,thetaminus)) / (2 * eps)
return gradapprox
# theta01.shape , theta02.shape
# this computation is very slow (one pair of cost evaluations per parameter)
gradapprox = check(X,y_onehot,theta1,theta2,0.001)
numerator = np.linalg.norm(grad2-gradapprox, ord=2) # Step 1'
denominator = np.linalg.norm(grad2, ord=2) + np.linalg.norm(gradapprox, ord=2) # Step 2'
difference = numerator / denominator
print(difference)
# use scipy's optimiser to find the parameters that minimise the cost
from scipy.optimize import minimize
# opt.fmin_tnc(func=cost, x0=theta, fprime=gradient, args=(X, y))
learning_rate = 1  # regularisation strength (the same value used for cost_reg above)
fmin = minimize(fun=backpropReg, x0=(params), args=(input_size, hidden_size, num_labels, X, y_onehot, learning_rate),
method='TNC', jac=True, options={'maxiter': 250})
fmin
X = np.matrix(X)
thetafinal1 = np.matrix(np.reshape(fmin.x[:hidden_size * (input_size + 1)], (hidden_size, (input_size + 1))))
thetafinal2 = np.matrix(np.reshape(fmin.x[hidden_size * (input_size + 1):], (num_labels, (hidden_size + 1))))
print(thetafinal1[0,1],grad2[1])
# compute predictions using the optimised θ
a1, z2, a2, z3, h = forward_propagate(X, thetafinal1, thetafinal2 )
y_pred = np.array(np.argmax(h, axis=1) + 1)
y_pred
# Finally, we can compute the accuracy to see how well the trained network performs.
# Compare the predicted values with the actual labels
from sklearn.metrics import classification_report  # produces an evaluation report
print(classification_report(y, y_pred))
hidden_layer = thetafinal1[:, 1:]
hidden_layer.shape
fig, ax_array = plt.subplots(nrows=5, ncols=5, sharey=True, sharex=True, figsize=(12, 12))
for r in range(5):
for c in range(5):
ax_array[r, c].matshow(np.array(hidden_layer[5 * r + c].reshape((20, 20))),cmap=matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
Explanation: Gradient checking
To check the gradient we pick two points on the cost function that lie very close to a particular $\theta$ and use the slope of the line between them as an estimate of the gradient. That is, for a given $\theta$ we evaluate the cost at $\theta-\varepsilon$ and at $\theta+\varepsilon$ ($\varepsilon$ is a very small value, typically 0.001) and use the two cost values to estimate the gradient at $\theta$.
End of explanation |
5 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like translations.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with language and words, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. This weight matrix is usually called the embedding matrix or embedding look-up table. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
Step8: Embedding
The embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the one-hot vector with the embedding matrix, you end up selecting only one row out of the entire matrix
Step9: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. | Python Code:
import time
import random  # needed later when sampling the validation words

import numpy as np
import tensorflow as tf
import utils
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like translations.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with language and words, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
Explanation: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
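The utils module itself is not reproduced in this notebook; a minimal create_lookup_tables consistent with the description above (most frequent word mapped to 0) might look like this:
from collections import Counter

def create_lookup_tables(words):
    word_counts = Counter(words)
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
    return vocab_to_int, int_to_vocab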
## One possible implementation of the subsampling equation above (threshold t = 1e-5)
from collections import Counter
import random

threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count / total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold / freqs[word]) for word in word_counts}
train_words = [word for word in int_words if random.random() > p_drop[word]]  # the final subsampled word list
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
    # One possible implementation: pick a random window size R in [1, window_size], as in Mikolov et al.
    R = np.random.randint(1, window_size + 1)
    start = idx - R if (idx - R) > 0 else 0
    stop = idx + R
    target_words = set(words[start:idx] + words[idx + 1:stop + 1])
    return list(target_words)
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
End of explanation
train_graph = tf.Graph()
with train_graph.as_default():
    inputs = tf.placeholder(tf.int32, [None])
labels = tf.placeholder(tf.int32, [None, 1])
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. This weight matrix is usually called the embedding matrix or embedding look-up table. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
    embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))  # embedding weight matrix
    embed = tf.nn.embedding_lookup(embedding, inputs)  # use tf.nn.embedding_lookup to get the hidden layer output
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the one-hot vector with the embedding matrix, you end up selecting only one row out of the entire matrix:
You don't actually need to do the matrix multiplication, you just need to select the row in the embedding matrix that corresponds to the input word. Then, the embedding matrix becomes a lookup table, you're looking up a vector the size of the hidden layer that represents the input word.
<img src="assets/word2vec_weight_matrix_lookup_table.png" width=500>
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform. This TensorFlow tutorial will help if you get stuck.
End of explanation
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
    softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))  # softmax weight matrix
    softmax_b = tf.Variable(tf.zeros(n_vocab))  # softmax biases
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab, name='sampled_softmax_loss')
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation |
6 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The goal of this post is to investigate if it is possible to query the NGDC CSW Catalog to extract records matching an IOOS RA acronym, like SECOORA for example.
In the cell above we do the usual
Step1: We need a list of all the Regional Associations we know.
Step2: To streamline the query we can create a function that instantiate the fes filter and returns the records.
Step3: I would not trust those number completely.
Surely some of the RA listed above have more than 0/1 record.
Note that we have more information in the csw.records.
Let's inspect one of SECOORA's stations for example.
Step4: We can verify the station type, title, and last date of modification.
Step5: The subjects field contains the variables and some useful keywords.
Step6: And we can access the full XML description for the station.
Step7: This query is very simple, but also very powerful.
We can quickly assess the data available for a certain Regional Association with just a few lines of code.
You can see the original notebook here. | Python Code:
from owslib.csw import CatalogueServiceWeb
endpoint = 'http://www.ngdc.noaa.gov/geoportal/csw'
csw = CatalogueServiceWeb(endpoint, timeout=30)
Explanation: The goal of this post is to investigate if it is possible to query the NGDC CSW Catalog to extract records matching an IOOS RA acronym, like SECOORA for example.
In the cell above we do the usual: instantiate a Catalogue Service Web (csw) using the NGDC catalog endpoint.
End of explanation
ioos_ras = ['AOOS', # Alaska
'CaRA', # Caribbean
'CeNCOOS', # Central and Northern California
'GCOOS', # Gulf of Mexico
'GLOS', # Great Lakes
'MARACOOS', # Mid-Atlantic
'NANOOS', # Pacific Northwest
'NERACOOS', # Northeast Atlantic
'PacIOOS', # Pacific Islands
'SCCOOS', # Southern California
'SECOORA'] # Southeast Atlantic
Explanation: We need a list of all the Regional Associations we know.
End of explanation
from owslib.fes import PropertyIsEqualTo
def query_ra(csw, ra='SECOORA'):
q = PropertyIsEqualTo(propertyname='apiso:Keywords', literal=ra)
csw.getrecords2(constraints=[q], maxrecords=100, esn='full')
return csw
for ra in ioos_ras:
csw = query_ra(csw, ra)
ret = csw.results['returned']
word = 'records' if ret > 1 else 'record'
print("{0:>8} has {1:>3} {2}".format(ra, ret, word))
csw.records.clear()
Explanation: To streamline the query we can create a function that instantiate the fes filter and returns the records.
End of explanation
csw = query_ra(csw, 'SECOORA')
key = list(csw.records.keys())[0]
print(key)
Explanation: I would not trust those number completely.
Surely some of the RA listed above have more than 0/1 record.
Note that we have more information in the csw.records.
Let's inspect one of SECOORA's stations for example.
End of explanation
station = csw.records[key]
station.type, station.title, station.modified
Explanation: We can verify the station type, title, and last date of modification.
End of explanation
station.subjects
Explanation: The subjects field contains the variables and some useful keywords.
End of explanation
print(station.xml)
Explanation: And we can access the full XML description for the station.
End of explanation
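Because every record exposes the same attributes used above, the matched records can also be summarised in a small table (a sketch using only attributes already shown):
import pandas as pd

summary = pd.DataFrame(
    [(rec.title, rec.type, rec.modified) for rec in csw.records.values()],
    columns=['title', 'type', 'modified'])
summary.head()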
HTML(html)
Explanation: This query is very simple, but also very powerful.
We can quickly assess the data available for a certain Regional Association with just a few lines of code.
You can see the original notebook here.
End of explanation |
7 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
One Dimensional Visualisation
Data from https://openflights.org/data.html
Step1: Assign column headers to the dataframe
Step2: Refine the Data
Step3: Clean Rows & Columns
Let's start by dropping redundant columns - in the airports data frame, we don't need type, source
Step4: Let's start by dropping redundant rows - in the airlines data frame, we don't need id = -1
Step5: Check for Consistency
All routes have an airline_id which is in the airline dataset
All routes have a source_id and dest_id which are in the airport dataset
Step6: Remove missing values
Lets remove routes where there is no airline_id provided to us | Python Code:
import pandas as pd
# Read in the airports data.
airports = pd.read_csv("../data/airports.dat.txt", header=None, na_values=['\\N'], dtype=str)
# Read in the airlines data.
airlines = pd.read_csv("../data/airlines.dat.txt", header=None, na_values=['\\N'], dtype=str)
# Read in the routes data.
routes = pd.read_csv("../data/routes.dat.txt", header=None, na_values=['\\N'], dtype=str)
Explanation: One Dimensional Visualisation
Data from https://openflights.org/data.html
Airports
Airlines
Routes
Airports
Airport ID: Unique OpenFlights identifier for this airport.
Name: Name of airport. May or may not contain the City name.
City: Main city served by airport. May be spelled differently from Name.
Country: Country or territory where airport is located. See countries.dat to cross-reference to ISO 3166-1 codes.
IATA: 3-letter IATA code. Null if not assigned/unknown.
ICAO: 4-letter ICAO code. Null if not assigned.
Latitude: Decimal degrees, usually to six significant digits. Negative is South, positive is North.
Longitude: Decimal degrees, usually to six significant digits. Negative is West, positive is East.
Altitude: In feet.
Timezone: Hours offset from UTC. Fractional hours are expressed as decimals, eg. India is 5.5.
DST: Daylight savings time. One of E (Europe), A (US/Canada), S (South America), O (Australia), Z (New Zealand), N (None) or U (Unknown). See also: Help: Time
Tz: database time zone Timezone in "tz" (Olson) format, eg. "America/Los_Angeles".
Type: Type of the airport. Value "airport" for air terminals, "station" for train stations, "port" for ferry terminals and "unknown" if not known. In airports.csv, only type=airport is included.
Source: "OurAirports" for data sourced from OurAirports, "Legacy" for old data not matched to OurAirports (mostly DAFIF), "User" for unverified user contributions. In airports.csv, only source=OurAirports is included.
Airlines
Airline ID: Unique OpenFlights identifier for this airline.
Name: Name of the airline.
Alias: Alias of the airline. For example, All Nippon Airways is commonly known as "ANA".
IATA: 2-letter IATA code, if available.
ICAO: 3-letter ICAO code, if available.
Callsign: Airline callsign.
Country: Country or territory where airline is incorporated.
Active: "Y" if the airline is or has until recently been operational, "N" if it is defunct. This field is not reliable: in particular, major airlines that stopped flying long ago, but have not had their IATA code reassigned (eg. Ansett/AN), will incorrectly show as "Y"
Routes
Airline: 2-letter (IATA) or 3-letter (ICAO) code of the airline.
Airline ID: Unique OpenFlights identifier for airline (see Airline).
Source airport: 3-letter (IATA) or 4-letter (ICAO) code of the source airport.
Source airport ID: Unique OpenFlights identifier for source airport (see Airport)
Destination airport: 3-letter (IATA) or 4-letter (ICAO) code of the destination airport.
Destination airport ID: Unique OpenFlights identifier for destination airport (see Airport)
Codeshare "Y" if this flight is a codeshare (that is, not operated by Airline, but another carrier), empty otherwise.
Stops: Number of stops on this flight ("0" for direct)
Equipment: 3-letter codes for plane type(s) generally used on this flight, separated by spaces
Acquire the Data
End of explanation
airports.columns = ["id", "name", "city", "country", "code", "icao", "latitude",
"longitude", "altitude", "offset", "dst", "timezone", "type", "source"]
airlines.columns = ["id", "name", "alias", "iata", "icao", "callsign", "country", "active"]
routes.columns = ["airline", "airline_id", "source", "source_id", "dest",
"dest_id", "codeshare", "stops", "equipment"]
airports.head()
airlines.head()
routes.head()
Explanation: Assign column headers to the dataframe
End of explanation
airports.head()
Explanation: Refine the Data
End of explanation
airports.drop(['type', 'source'], axis=1, inplace=True)
airports.head()
airports.shape
Explanation: Clean Rows & Columns
Let's start by dropping redundant columns - in the airports data frame, we don't need type, source
End of explanation
airlines.drop(0, axis=0, inplace=True)
airlines.shape
airlines.head()
Explanation: Let's start by dropping redundant rows - in the airlines data frame, we don't need id = -1
End of explanation
def checkConsistency (s1, s2):
true_count = s1.isin(s2).sum()
total_count = s1.count()
consistency = true_count / total_count
return consistency
checkConsistency(routes.airline_id, airlines.id)
checkConsistency(routes.source_id, airports.id)
checkConsistency(routes.dest_id, airports.id)
Explanation: Check for Consistency
All routes have an airline_id which is in the airline dataset
All routes have a source_id and dest_id which are in the airport dataset
End of explanation
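Beyond the overall consistency ratio, it can be useful to inspect the offending rows themselves, for example:
# routes whose airline_id does not appear in the airlines table
bad_airline_routes = routes[~routes.airline_id.isin(airlines.id)]
bad_airline_routes.head()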
import missingno as msno
%matplotlib inline
msno.matrix(airlines)
msno.matrix(airports)
routes[routes["airline_id"] == "\\N"].count()
routes = routes[routes["airline_id"] != "\\N"]
routes.shape
Explanation: Remove missing values
Let's remove routes where there is no airline_id provided to us
End of explanation |
8 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="images/JHI_STRAP_Web.png" style="width: 150px; float: right;">
Step1: Microarray data <a id="microarray_data"></a>
<div class="alert alert-warning">
Raw array data was previously converted to plain text comma-separated variable format from two `Excel` files
Step2: Import array data <a id="import_data"></a>
Step3: <div class="alert alert-warning">
We reduce the full dataset to only the raw intensity values. We also rename the columns in each of the `control` and `treatment` dataframes.
</div>
In both control and treatment datasets, the mapping of experimental samples (input and output) across the three replicates is: Raw → input.1, Raw.1 → output.1, Raw.2 → input.2, Raw.3 → output.2, Raw.4 → input.3, Raw.5 → output.3.
Step4: Data QA <a id="data_qa"></a>
We expect that there is good agreement between input and output raw intensities for each replicate control or treatment experiment. We also expect that there should be good agreement across replicates within the controls, and within the treatment. We inspect this agreement visually with a matrix of scatterplots, below.
The plot_correlation() function can be found in the accompanying tools.py module.
Step5: There is good visual correlation between the intensities for the control arrays, and the Spearman's R values also indicate good correlation.
Step6: There is - mostly - good visual correlation between the intensities for the control arrays, and the Spearman's R values also indicate good correlation. There appear to be three problematic probes in replicate 3 that we may need to deal with in the data cleanup.
<div class="alert alert-success">
<b>Taken together, these plots indicate
Step7: Interpolating values for problem probes <a id="interpolation"></a>
We replace the three clear outlying values for the three problematic probes in input.3 of the treatment array with interpolated values. We assume that input.1 and input.2 are typical of the input intensities for these three probes, and take the average of their values to substitute for input.3 for each.
Step8: We can visualise the change in correlation for the treatment dataframe that results
Step9: Normalisation <a id="normalisation"></a>
We expect the array intensity distribution to vary according to whether the sample was from the input (strong) or output (weak) set, and whether the sample came from the control or treatment pools. We therefore divide the dataset into four independently-normalised components
Step10: We visualise the resulting distributions, in violin plots
Step11: <div class="alert-success">
These plots illustrate that there is relative reduction in measured array intensity between control and treatment arrays for both the input and output arrays.
</div>
Wide to long form <a id="wide_to_long"></a>
We have four dataframes containing normalised data
Step12: Long form data has some advantages for melting into new arrangments for visualisation, analysis, and incorporation of new data. For instance, we can visualise the distributions of input and output log intensities against each other, as below
Step13: <div class="alert-success">
This visualisation again shows that treatment intensities are generally lower than control intensities, but also suggests that the bulk of output intensities are lower than input intensities.
<br /><br />
There is a population of low-intensity values for each set of arrays, however. These appear to have a slight increase in intensity in the output, compared to input arrays.
</div>
Probe matches to Sakai and DH10B <a id="probe_matches"></a>
<div class="alert-warning">
Evidence for potential hybridisation of probes to DH10B or Sakai isolates was determined by default `BLASTN` query of each probe sequence against chromosome and plasmid feature nucleotide sequences from the NCBI records
Step14: We then add parent gene annotations to the unique probes
Step15: <div class="alert-danger">
We will certainly be interested in probes that hybridise unambiguously to Sakai or to DH10B. The [array was however designed to report on several *E. coli* isolates](http
Step16: <div class="alert-success">
This leaves us with a dataset comprising
Step17: Write data <a id="write"></a>
<div class="alert-warning">
<b>We write the censored, normalised, long-format data to the `datasets/` subdirectory.</b>
</div>
Step18: For modelling with Stan, we assign indexes for common probe ID, locus tag, and array (combination of replicate and treatment) to each probe, before writing out the complete dataset.
Step19: For testing, we want to create two data subsets, one containing a reduced number of probes, and one with a reduced number of genes/locus tags. | Python Code:
%pylab inline
import os
import random
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import scipy
import seaborn as sns
from Bio import SeqIO
import tools
Explanation: <img src="images/JHI_STRAP_Web.png" style="width: 150px; float: right;">
Supplementary Information: Holmes et al. 2020
1. Data cleaning, normalisation and quality assurance
This notebook describes raw data import, cleaning, and QA, then writing out of processed data to the data/ subdirectory, for use in model fitting.
Table of Contents
Microarray data
Import array data
Data QA
Problematic probes
Interpolation for problematic probes
Normalisation
Wide to long form
Probe matches to Sakai and DH10B
Write data
Python imports
End of explanation
# Input array data filepaths
controlarrayfile = os.path.join('..', 'data', 'control_unix_endings_flags.csv') # control experiment array data (preprocessed)
treatmentarrayfile = os.path.join('..', 'data', 'treatment_unix_endings.csv') # treatment experiment array data (preprocessed)
Explanation: Microarray data <a id="microarray_data"></a>
<div class="alert alert-warning">
Raw array data was previously converted to plain text comma-separated variable format from two `Excel` files:
<ul>
<li> The file `AH alldata 12082013.xlsx` was converted to `data/treatment_unix_endings.csv`
<li> The file `AH alldata expt1 flagged 05092013.xlsx` was converted to `data/control_unix_endings_flags.csv`
</ul>
</div>
These describe microarray results for samples that underwent two treatments:
in vitro growth only - i.e. control: data/control_unix_endings_flags.csv
in vitro growth and plant passage - i.e. treatment: data/treatment_unix_endings.csv
End of explanation
control = pd.read_csv(controlarrayfile, sep=',', skiprows=4, index_col=0)
treatment = pd.read_csv(treatmentarrayfile, sep=',', skiprows=4, index_col=0)
# Uncomment the lines below to inspect the first few rows of each dataframe
#control.head()
#treatment.head()
len(control)
Explanation: Import array data <a id="import_data"></a>
End of explanation
colnames_in = ['Raw', 'Raw.1', 'Raw.2', 'Raw.3', 'Raw.4', 'Raw.5'] # raw data columns
colnames_out = ['input.1', 'output.1', 'input.2', 'output.2', 'input.3', 'output.3'] # renamed raw data columns
# Reduce control and treatment arrays to raw data columns only
control = control[colnames_in]
control.columns = colnames_out
treatment = treatment[colnames_in]
treatment.columns = colnames_out
Explanation: <div class="alert alert-warning">
We reduce the full dataset to only the raw intensity values. We also rename the columns in each of the `control` and `treatment` dataframes.
</div>
In both control and treatment datasets, the mapping of experimental samples (input and output) across the three replicates is:
replicate 1 input: Raw $\rightarrow$ input.1
replicate 1 output: Raw.1 $\rightarrow$ output.1
replicate 2 input: Raw.2 $\rightarrow$ input.2
replicate 2 output: Raw.3 $\rightarrow$ output.2
replicate 3 input: Raw.4 $\rightarrow$ input.3
replicate 3 output: Raw.5 $\rightarrow$ output.3
End of explanation
# Plot correlations for control data
tools.plot_correlation(control);
Explanation: Data QA <a id="data_qa"></a>
We expect that there is good agreement between input and output raw intensities for each replicate control or treatment experiment. We also expect that there should be good agreement across replicates within the controls, and within the treatment. We inspect this agreement visually with a matrix of scatterplots, below.
The plot_correlation() function can be found in the accompanying tools.py module.
End of explanation
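For orientation, a minimal sketch of this kind of scatter-matrix QA plot using only pandas and seaborn (an assumed approach for illustration; the actual plot_correlation() in tools.py may differ in detail):
# Illustrative sketch of a correlation QA plot (assumed approach; see tools.py for the real implementation)
import seaborn as sns

def sketch_plot_correlation(df):
    # Pairwise scatterplots of the replicate columns, with density estimates on the diagonal
    grid = sns.pairplot(df, diag_kind="kde", plot_kws={"s": 5, "alpha": 0.3})
    # Spearman's rank correlation between all pairs of columns
    print(df.corr(method="spearman"))
    return grid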
# Plot correlations for treatment data
tools.plot_correlation(treatment);
Explanation: There is good visual correlation between the intensities for the control arrays, and the Spearman's R values also indicate good correlation.
End of explanation
# Select outlying treatment input.3 values
treatment.loc[treatment['input.3'] > 4e4]
# Define problem probes:
problem_probes = list(treatment.loc[treatment['input.3'] > 4e4].index)
Explanation: There is mostly good visual correlation between the intensities for the treatment arrays, and the Spearman's R values also indicate good correlation. There appear to be three problematic probes in replicate 3 that we may need to deal with in the data cleanup.
<div class="alert alert-success">
<b>Taken together, these plots indicate:</b>
<ul>
<li> the intensities of the control arrays are systematically larger than intensities for the treatment arrays, suggesting that the effects of noise may be proportionally greater for the treatment arrays. This might be a concern for reliably inferring enrichment or depletion in the treatment.
<li> the control arrays are good candidates for quantile normalisation (QN; $r > 0.95$, with similar density distributions)
<li> the treatment array `input.3` dataset is potentially problematic, due to three treatment probe datapoints with intensities greater than 40,000 units having large leverage.
</ul>
</div>
Problematic probes <a id="problem_probes"></a>
<div class="alert-warning">
We can readily identify problematic probes in treatment replicate 3, as they are the only probes with intensity greater than 40,000.
The problematic probes are:
<ul>
<li> <code>A_07_P000070</code>
<li> <code>A_07_P061472</code>
<li> <code>A_07_P052489</code>
</ul>
</div>
End of explanation
# Interpolate values
treatment.loc[problem_probes, 'input.3'] = treatment.loc[problem_probes,
                                                          ['input.1', 'input.2']].mean(axis=1)
treatment.loc[problem_probes]
Explanation: Interpolating values for problem probes <a id="interpolation"></a>
We replace the three clear outlying values for the three problematic probes in input.3 of the treatment array with interpolated values. We assume that input.1 and input.2 are typical of the input intensities for these three probes, and take the average of their values to substitute for input.3 for each.
End of explanation
# Plot correlations for treatment data
tools.plot_correlation(treatment);
Explanation: We can visualise the change in correlation for the treatment dataframe that results:
End of explanation
input_cols = ['input.1', 'input.2', 'input.3'] # input columns
output_cols = ['output.1', 'output.2', 'output.3'] # output columns
# Normalise inputs and outputs for control and treatment separately
control_input = tools.quantile_norm(control, columns=input_cols)
control_output = tools.quantile_norm(control, columns=output_cols)
treatment_input = tools.quantile_norm(treatment, columns=input_cols)
treatment_output = tools.quantile_norm(treatment, columns=output_cols)
Explanation: Normalisation <a id="normalisation"></a>
We expect the array intensity distribution to vary according to whether the sample was from the input (strong) or output (weak) set, and whether the sample came from the control or treatment pools. We therefore divide the dataset into four independently-normalised components:
control_input
control_output
treatment_input
treatment_output
<br /><div class="alert-success">
We have established that because the input and output arrays in both control and treatment conditions have strong correlation across all intensities, and have similar intensity distributions, we are justified in using quantile (mean) normalisation.
</div>
End of explanation
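As an aside, a minimal sketch of quantile (mean) normalisation over a set of columns (an assumed implementation for illustration; the actual tools.quantile_norm may differ in detail):
# Sketch of quantile (mean) normalisation across the selected columns (assumed implementation)
def sketch_quantile_norm(df, columns):
    sub = df[columns]
    # Mean intensity at each rank position, taken across the selected arrays
    rank_means = sub.stack().groupby(sub.rank(method="first").stack().astype(int)).mean()
    # Replace each value by the mean intensity of its rank
    return sub.rank(method="min").stack().astype(int).map(rank_means).unstack()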
# Make violinplots of normalised data
tools.plot_normalised(control_input, control_output,
treatment_input, treatment_output)
Explanation: We visualise the resulting distributions, in violin plots:
End of explanation
# Convert data from wide to long form
data = tools.wide_to_long(control_input, control_output,
treatment_input, treatment_output)
data.head()
Explanation: <div class="alert-success">
These plots illustrate that there is relative reduction in measured array intensity between control and treatment arrays for both the input and output arrays.
</div>
Wide to long form <a id="wide_to_long"></a>
We have four dataframes containing normalised data:
control_input
control_output
treatment_input
treatment_output
Each dataframe is indexed by the array probe systematic name, with three columns that correspond to replicates 1, 2, and 3 for either a control or a treatment run. For downstream analysis we want to organise this data as the following columns:
index: unique ID
probe: probe name (these apply across treatment/control and input/output)
input: normalised input intensity value (for a particular probe and replicate)
output: normalised input intensity value (for a particular probe and replicate)
treatment: 0/1 indicating whether the measurement was made for the control or treatment sample
replicate: 1, 2, 3 indicating which replicate the measurement was made from
<br /><div class="alert-warning">
We will add other columns with relevant data later, and to enable this, we convert the control and treatment data frames from wide (e.g. input.1, input.2, input.3 columns) to long (e.g. probe, input, output, replicate) form - once for the control data, and once for the treatment data. We match on a multi-index of probe and replicate.
</div>
End of explanation
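A minimal sketch of this wide-to-long conversion using pandas melt (assumed logic for illustration; the actual tools.wide_to_long also combines the control and treatment conditions and handles the multi-index matching):
# Sketch: melt paired wide input/output dataframes into long form (assumed logic)
def sketch_wide_to_long(df_in, df_out, treatment_flag):
    long_in = df_in.reset_index().melt(id_vars="index", var_name="replicate", value_name="input")
    long_out = df_out.reset_index().melt(id_vars="index", var_name="replicate", value_name="output")
    # Keep only the replicate number, e.g. "input.2" -> 2
    long_in["replicate"] = long_in["replicate"].str.split(".").str[-1].astype(int)
    long_out["replicate"] = long_out["replicate"].str.split(".").str[-1].astype(int)
    merged = long_in.merge(long_out, on=["index", "replicate"]).rename(columns={"index": "probe"})
    merged["treatment"] = treatment_flag  # 0 for control, 1 for treatment
    return merged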
# Visualise input v output distributions
tools.plot_input_output_violin(data)
Explanation: Long form data has some advantages for melting into new arrangements for visualisation, analysis, and incorporation of new data. For instance, we can visualise the distributions of input and output log intensities against each other, as below:
End of explanation
# BLASTN results files
sakai_blastfile = os.path.join('..', 'data', 'probes_blastn_sakai.tab')
dh10b_blastfile = os.path.join('..', 'data', 'probes_blastn_dh10b.tab')
# Obtain a dataframe of unique probes and their BLASTN matches
unique_probe_hits = tools.unique_probe_matches((sakai_blastfile, dh10b_blastfile))
Explanation: <div class="alert-success">
This visualisation again shows that treatment intensities are generally lower than control intensities, but also suggests that the bulk of output intensities are lower than input intensities.
<br /><br />
There is a population of low-intensity values for each set of arrays, however. These appear to have a slight increase in intensity in the output, compared to input arrays.
</div>
Probe matches to Sakai and DH10B <a id="probe_matches"></a>
<div class="alert-warning">
Evidence for potential hybridisation of probes to DH10B or Sakai isolates was determined by default `BLASTN` query of each probe sequence against chromosome and plasmid feature nucleotide sequences from the NCBI records:
<ul>
<li> `GCF_000019425.1_ASM1942v1_cds_from_genomic.fna`
<li> `GCF_000008865.1_ASM886v1_cds_from_genomic.fna`
</ul>
</div>
$ blastn -query Array/probe_seqlist.fas -subject Sakai/GCF_000008865.1_ASM886v1_cds_from_genomic.fna -outfmt 6 -out probes_blastn_sakai.tab -perc_identity 100
$ blastn -query Array/probe_seqlist.fas -subject DH10B/GCF_000019425.1_ASM1942v1_cds_from_genomic.fna -outfmt 6 -out probes_blastn_dh10b.tab -perc_identity 100
We first identify the probes that match uniquely at 100% identity to a single E. coli gene product from either Sakai or DH10B
End of explanation
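A minimal sketch of how unique probe hits could be derived from the tabular (-outfmt 6) BLASTN output (the column handling and uniqueness rule below are assumptions; the actual tools.unique_probe_matches may differ):
# Sketch: read -outfmt 6 BLASTN tables and keep probes hitting exactly one subject sequence (assumed logic)
def sketch_unique_probe_matches(blastfiles):
    blast_cols = ["probe", "match", "pid", "length", "mismatch", "gapopen",
                  "qstart", "qend", "sstart", "send", "evalue", "bitscore"]
    hits = pd.concat([pd.read_csv(fname, sep="\t", names=blast_cols) for fname in blastfiles])
    # Keep probes with a single subject match across both genomes
    counts = hits.groupby("probe")["match"].nunique()
    unique_probes = counts[counts == 1].index
    return hits[hits["probe"].isin(unique_probes)].drop_duplicates(subset=["probe", "match"])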
# Sequence data files
sakai_seqfile = os.path.join('..', 'data', 'Sakai', 'GCF_000008865.1_ASM886v1_cds_from_genomic.fna')
dh10b_seqfile = os.path.join('..', 'data', 'DH10B', 'GCF_000019425.1_ASM1942v1_cds_from_genomic.fna')
# Add locus tag information to each unique probe
unique_probe_hits = tools.annotate_seqdata(unique_probe_hits, (sakai_seqfile, dh10b_seqfile))
Explanation: We then add parent gene annotations to the unique probes:
End of explanation
censored_data = pd.merge(data, unique_probe_hits[['probe', 'match', 'locus_tag']],
how='inner', on='probe')
censored_data.head()
Explanation: <div class="alert-danger">
We will certainly be interested in probes that hybridise unambiguously to Sakai or to DH10B. The [array was however designed to report on several *E. coli* isolates](http://www.ebi.ac.uk/arrayexpress/arrays/A-GEOD-13359/?ref=E-GEOD-46455), and not all probes should be expected to hybridise, so we could consider the non-uniquely matching probes not to be of interest, and censor them.
<br /><br />
A strong reason to censor probes is that we will be estimating locus tag/gene-level treatment effects, on the basis of probe-level intensity measurements. Probes that may be reporting on multiple genes may mislead our model fit, and so are better excluded.
</div>
We exclude non-unique matching probes by performing an inner join between the data and unique_probe_hits dataframes.
End of explanation
# Visually inspect the effect of censoring on distribution
tools.plot_input_output_violin(censored_data)
Explanation: <div class="alert-success">
This leaves us with a dataset comprising:
<ul>
<li> 49872 datapoints (rows)
<li> 8312 unique probes
<li> 6084 unique locus tags
</ul>
</div>
As can be seen in the violin plot below, censoring the data in this way removes a large number of low-intensity probes from all datasets.
End of explanation
# Create output directory
outdir = 'datasets'
os.makedirs(outdir, exist_ok=True)
# Output files
full_dataset = os.path.join(outdir, "normalised_array_data.tab") # all censored data
reduced_probe_dataset = os.path.join(outdir, "reduced_probe_data.tab") # subset of data grouped by probe
reduced_locus_dataset = os.path.join(outdir, "reduced_locus_data.tab") # subset of data grouped by locus tag
Explanation: Write data <a id="write"></a>
<div class="alert-warning">
<b>We write the censored, normalised, long-format data to the `datasets/` subdirectory.</b>
</div>
End of explanation
# Index on probes
indexed_data = tools.index_column(censored_data, 'probe')
# Index on locus tags
indexed_data = tools.index_column(indexed_data, 'locus_tag')
# Index on array (replicate X treatment)
indexed_data = tools.index_column(indexed_data, 'repXtrt')
# Uncomment the line below to inspect the data
#indexed_data.head(20)
# Write the full dataset to file
indexed_data.to_csv(full_dataset, sep="\t", index=False)
Explanation: For modelling with Stan, we assign indexes for common probe ID, locus tag, and array (combination of replicate and treatment) to each probe, before writing out the complete dataset.
End of explanation
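A sketch of the kind of index assignment used here, based on pandas factorize (an assumed implementation; the actual tools.index_column may differ, for example in how the new column is named):
# Sketch: add a 1-based integer index column for a categorical column, for use in Stan (assumed implementation)
def sketch_index_column(df, colname):
    codes, _ = pd.factorize(df[colname])
    df[colname + "_index"] = codes + 1  # 1-based, as Stan indexes from 1
    return df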
# Reduced probe set
reduced_probes = tools.reduce_dataset(indexed_data, 'probe')
reduced_probes.to_csv(reduced_probe_dataset, sep="\t", index=False)
# Reduced locus tag set
reduced_lts = tools.reduce_dataset(indexed_data, 'locus_tag')
reduced_lts.to_csv(reduced_locus_dataset, sep="\t", index=False)
Explanation: For testing, we want to create two data subsets, one containing a reduced number of probes, and one with a reduced number of genes/locus tags.
End of explanation |
9 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id="navigation"></a>
Hi-C data analysis
Welcome to the Jupyter notebook dedicated to Hi-C data analysis. Here we will be working in interactive Python environment with some mixture of bash command line tools.
Here is the outline of what we are going to do
Step1: There are also other types of cells, for example, "Markdown". Double click this cell to view raw Markdown markup content.
You can define functions, classes, run pipelines and visualisations, run thousands of code lines inside a Jupyter cell.
But usually, it is convenient to write simple and clean blocks of code.
Note that behind this interactive notebook you have regular Python session running. Thus Python variables are accessible only throughout your history of actions in the notebook. To create a variable, you have to execute the corresponding block of code. All your variables will be lost when you restart the kernel of the notebook.
You can pause or stop the kernel, save notebook (.ipynb) file, copy and insert cells via buttons in the toolbar. Please, take a look at these useful buttons.
Also, try pressing 'Esc' and then 'h'. You will see shortcuts help.
Jupyter notebook allows you to create "magical" cells. We will use %%bash, %%capture, %matplotlib. For example, %%bash magic makes it easier to access bash commands
Step2: If you are not sure about the function, class or variable then use its name with '?' at the end to get available documentation. Here is an example for common module numpy
Step3: OK, it seems that now we are ready to start our Hi-C data analysis! I've placed Go top shortcut for you in each section so that you can navigate quickly throughout the notebook.
<a id="mapping"></a>
1. Reads mapping
Go top
1.1 Input raw data
Hi-C results in paired-end sequencing, where each pair represents one possible contact. The analysis starts with raw sequencing data (.fastq files).
I've downloaded raw files from Flyamer et al. 2017 (GEO ID GSE80006) and placed them in the DATA/FASTQ/ directory.
We can view these files easily with bash help. Forward and reverse reads, correspondingly
Step4: 1.2 Genome
Now we have to map these reads to the genome of interest (Homo sapiens hg19 downloaded from UCSC in this case).
We are going to use only chromosome 1 to minimise computational time.
The genome is also pre-downloaded
Step5: For Hi-C data mapping we will use hiclib. It utilizes bowtie 2 read mapping software. Bowtie 2 indexes the genome prior to reads mapping in order to reduce memory usage. Usually, you have to run genome indexing, but I've already done this time-consuming step. That's why code for this step is included but commented.
Step6: 1.3 Iterative mapping
First of all, we need to import useful Python packages
Step7: Then we need to set some parameters and prepare our environment
Step8: Let's take a look at .sam files that were created during iterative mapping
Step9: 1.4 Making sense of mapping output
For each read length and orientation, we have a file. Now we need to merge them into the single dataset (.hdf5 file)
Step10: Let's take a look at the created file
Step11: <a id="filtering"></a>
2. Data filtering
Go top
The raw Hi-C data is mapped and interpreted, the next step is to filter out possible methodological artefacts
Step12: Nice visualisation of the data
Step13: <a id="binning"></a>
3. Data binning
Go top
The previous analysis involved interactions of restriction fragments, now we would like to work with interactions of genomic bins.
Step14: <a id="visualisation"></a>
4. Hi-C data visualisation
Go top
Let's take a look at the resulting heat maps.
Step15: <a id="correction"></a>
5. Iterative correction
Go top
The next typical step is data correction for unequal amplification and accessibility of genomic regions.
We will use iterative correction.
Step16: <a id="meta"></a>
7. Compartments and TADs
Go top
7.1 Comparison with compartments
Compartments can usually be found in whole-genome datasets, but we have only chromosome 1. Still, we can try to find some visual signs of compartments.
Step17: Seems to be nothing special with compartments. What if we had much better coverage by reads? Let's take a look at the dataset from Rao et al. 2014, GEO GSE63525, HIC069
Step18: 7.2 Topologically associating domains (TADs)
For TAD calling we will use the lavaburst package. The code below is based on this example. | Python Code:
# This is regular Python comment inside Jupyter "Code" cell.
# You can easily run "Hello world" in the "Code" cell (focus on the cell and press Shift+Enter):
print("Hello world!")
Explanation: <a id="navigation"></a>
Hi-C data analysis
Welcome to the Jupyter notebook dedicated to Hi-C data analysis. Here we will be working in interactive Python environment with some mixture of bash command line tools.
Here is the outline of what we are going to do:
Notebook basics
Reads mapping
Data filtering
Binning
Hi-C data visualisation
Iterative correction
Compartments and TADs
If you have any questions, please, contact Aleksandra Galitsyna ([email protected])
<a id="basics"></a>
0. Notebook basics
If you are new to Python and Jupyter notebook, please, take a quick look through this small list of tips.
First of all, Jupyter notebook is organised in cells, which may contain text, comments and code blocks of any size.
End of explanation
%%bash
echo "Current directory is: "; pwd
echo "List of files in the current directory is: "; ls
Explanation: There are also other types of cells, for example, "Markdown". Double click this cell to view raw Markdown markup content.
You can define functions, classes, run pipelines and visualisations, run thousands of code lines inside a Jupyter cell.
But usually, it is convenient to write simple and clean blocks of code.
Note that behind this interactive notebook you have regular Python session running. Thus Python variables are accessible only throughout your history of actions in the notebook. To create a variable, you have to execute the corresponding block of code. All your variables will be lost when you restart the kernel of the notebook.
You can pause or stop the kernel, save notebook (.ipynb) file, copy and insert cells via buttons in the toolbar. Please, take a look at these useful buttons.
Also, try pressing 'Esc' and then 'h'. You will see shortcuts help.
Jupyter notebook allows you to create "magical" cells. We will use %%bash, %%capture, %matplotlib. For example, %%bash magic makes it easier to access bash commands:
End of explanation
# Module import under custom name
import numpy as np
# You've started asking questions about it
np?
Explanation: If you are not sure about the function, class or variable then use its name with '?' at the end to get available documentation. Here is an example for common module numpy:
End of explanation
%%bash
head -n 8 '../DATA/FASTQ/K562_B-bulk_R1.fastq'
%%bash
head -n 8 '../DATA/FASTQ/K562_B-bulk_R2.fastq'
Explanation: OK, it seems that now we are ready to start our Hi-C data analysis! I've placed Go top shortcut for you in each section so that you can navigate quickly throughout the notebook.
<a id="mapping"></a>
1. Reads mapping
Go top
1.1 Input raw data
Hi-C results in paired-end sequencing, where each pair represents one possible contact. The analysis starts with raw sequencing data (.fastq files).
I've downloaded raw files from Flyamer et al. 2017 (GEO ID GSE80006) and placed them in the DATA/FASTQ/ directory.
We can view these files easily with bash help. Forward and reverse reads, correspondingly:
End of explanation
%%bash
ls ../GENOMES/HG19_FASTA
Explanation: 1.2 Genome
Now we have to map these reads to the genome of interest (Homo sapiens hg19 downloaded from UCSC in this case).
We are going to use only chromosome 1 to minimise computational time.
The genome is also pre-downloaded:
End of explanation
#%%bash
#bowtie2-build /home/jovyan/GENOMES/HG19_FASTA/chr1.fa /home/jovyan/GENOMES/HG19_IND/hg19_chr1
#Time consuming step
%%bash
ls ../GENOMES/HG19_IND
Explanation: For Hi-C data mapping we will use hiclib. It utilizes bowtie 2 read mapping software. Bowtie 2 indexes the genome prior to reads mapping in order to reduce memory usage. Usually, you have to run genome indexing, but I've already done this time-consuming step. That's why code for this step is included but commented.
End of explanation
import os
from hiclib import mapping
from mirnylib import h5dict, genome
Explanation: 1.3 Iterative mapping
First of all, we need to import useful Python packages:
End of explanation
%%bash
which bowtie2
# Bowtie 2 path
%%bash
pwd
# Current working directory path
# Setting parameters and environmental variables
bowtie_path = '/opt/conda/bin/bowtie2'
enzyme = 'DpnII'
bowtie_index_path = '/home/jovyan/GENOMES/HG19_IND/hg19_chr1'
fasta_path = '/home/jovyan/GENOMES/HG19_FASTA/'
chrms = ['1']
# Reading the genome
genome_db = genome.Genome(fasta_path, readChrms=chrms)
# Creating directories for further data processing
if not os.path.exists('tmp/'):
    os.mkdir('tmp/')
if not os.path.exists('../DATA/SAM/'):
os.mkdir('../DATA/SAM/')
# Set parameters for iterative mapping
min_seq_len = 25
len_step = 5
nthreads = 2
temp_dir = 'tmp'
bowtie_flags = '--very-sensitive'
infile1 = '/home/jovyan/DATA/FASTQ1/K562_B-bulk_R1.fastq'
infile2 = '/home/jovyan/DATA/FASTQ1/K562_B-bulk_R2.fastq'
out1 = '/home/jovyan/DATA/SAM/K562_B-bulk_R1.chr1.sam'
out2 = '/home/jovyan/DATA/SAM/K562_B-bulk_R2.chr1.sam'
# Iterative mapping itself. Time consuming step!
mapping.iterative_mapping(
bowtie_path = bowtie_path,
bowtie_index_path = bowtie_index_path,
fastq_path = infile1,
out_sam_path = out1,
min_seq_len = min_seq_len,
len_step = len_step,
nthreads = nthreads,
temp_dir = temp_dir,
bowtie_flags = bowtie_flags)
mapping.iterative_mapping(
bowtie_path = bowtie_path,
bowtie_index_path = bowtie_index_path,
fastq_path = infile2,
out_sam_path = out2,
min_seq_len = min_seq_len,
len_step = len_step,
nthreads = nthreads,
temp_dir = temp_dir,
bowtie_flags = bowtie_flags)
Explanation: Then we need to set some parameters and prepare our environment:
End of explanation
%%bash
ls /home/jovyan/DATA/SAM/
%%bash
head -n 10 /home/jovyan/DATA/SAM/K562_B-bulk_R1.chr1.sam.25
Explanation: Let's take a look at .sam files that were created during iterative mapping:
End of explanation
# Create the directory for output
if not os.path.exists('../DATA/HDF5/'):
os.mkdir('../DATA/HDF5/')
# Define file name for output
out = '/home/jovyan/DATA/HDF5/K562_B-bulk.fragments.hdf5'
# Open output file
mapped_reads = h5dict.h5dict(out)
# Parse mapping data and write to output file
mapping.parse_sam(
sam_basename1 = out1,
sam_basename2 = out2,
out_dict = mapped_reads,
genome_db = genome_db,
enzyme_name = enzyme,
save_seqs = False,
keep_ids = False)
Explanation: 1.4 Making sense of mapping output
For each read length and orientation, we have a file. Now we need to merge them into the single dataset (.hdf5 file):
End of explanation
%%bash
ls /home/jovyan/DATA/HDF5/
import h5py
# Reading the file
a = h5py.File('/home/jovyan/DATA/HDF5/K562_B-bulk.fragments.hdf5')
# "a" variable has dictionary-like structure, we can view its keys, for example:
list( a.keys() )
# Mapping positions for forward reads are stored under 'cuts1' key:
a['cuts1'].value
Explanation: Let's take a look at the created file:
End of explanation
from hiclib import fragmentHiC
inp = '/home/jovyan/DATA/HDF5/K562_B-bulk.fragments.hdf5'
out = '/home/jovyan/DATA/HDF5/K562_B-bulk.fragments_filtered.hdf5'
# Create output file
fragments = fragmentHiC.HiCdataset(
filename = out,
genome = genome_db,
maximumMoleculeLength= 500,
mode = 'w')
# Parse input data
fragments.parseInputData(
dictLike=inp)
# Filtering
fragments.filterRsiteStart(offset=5) # reads map too close to restriction site
fragments.filterDuplicates() # remove PCR duplicates
fragments.filterLarge() # remove too large restriction fragments
fragments.filterExtreme(cutH=0.005, cutL=0) # remove fragments with too high and low counts
# Some hidden filteres were also applied, we can check them all:
fragments.printMetadata()
Explanation: <a id="filtering"></a>
2. Data filtering
Go top
The raw Hi-C data is mapped and interpreted, the next step is to filter out possible methodological artefacts:
End of explanation
import pandas as pd
df_stat = pd.DataFrame(list(fragments.metadata.items()), columns=['Feature', 'Count'])
df_stat
df_stat['Ratio of total'] = 100*df_stat['Count']/df_stat.loc[2,'Count']
df_stat
Explanation: Nice visualisation of the data:
End of explanation
# Define file name for binned data. Note "{}" prepared for string formatting
out_bin = '/home/jovyan/DATA/HDF5/K562_B-bulk.binned_{}.hdf5'
res_kb = [100, 20] # Several resolutions in Kb
for res in res_kb:
print(res)
outmap = out_bin.format(str(res)+'kb') # String formatting
fragments.saveHeatmap(outmap, res*1000) # Save heatmap
del fragments # delete unwanted object
Explanation: <a id="binning"></a>
3. Data binning
Go top
The previous analysis involved interactions of restriction fragments, now we would like to work with interactions of genomic bins.
End of explanation
# Importing visualisation modules
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('ticks')
%matplotlib inline
from hiclib.binnedData import binnedDataAnalysis
res = 100 # Resolution in Kb
# prepare to read the data
data_hic = binnedDataAnalysis(resolution=res*1000, genome=genome_db)
# read the data
data_hic.simpleLoad('/home/jovyan/DATA/HDF5/K562_B-bulk.binned_{}.hdf5'.format(str(res)+'kb'),'hic')
mtx = data_hic.dataDict['hic']
# show heatmap
plt.figure(figsize=[15,15])
plt.imshow(mtx[0:200, 0:200], cmap='jet', interpolation='None')
Explanation: <a id="visualisation"></a>
4. Hi-C data visualisation
Go top
Let's take a look at the resulting heat maps.
End of explanation
# Additional data filtering
data_hic.removeDiagonal()
data_hic.removePoorRegions()
data_hic.removeZeros()
data_hic.iterativeCorrectWithoutSS(force=True)
data_hic.restoreZeros()
mtx = data_hic.dataDict['hic']
plt.figure(figsize=[15,15])
plt.imshow(mtx[200:500, 200:500], cmap='jet', interpolation='None')
Explanation: <a id="correction"></a>
5. Iterative correction
Go top
The next typical step is data correction for unequal amplification and accessibility of genomic regions.
We will use iterative correction.
End of explanation
# Load compartments computed previously based on K562 dataset from Rao et al. 2014
eig = np.loadtxt('/home/jovyan/DATA/ANNOT/comp_K562_100Kb_chr1.tsv')
eig
from matplotlib import gridspec
bgn = 0
end = 500
fig = plt.figure(figsize=(10,10))
gs = gridspec.GridSpec(2, 1, height_ratios=[20,2])
gs.update(wspace=0.0, hspace=0.0)
ax = plt.subplot(gs[0,0])
ax.matshow(mtx[bgn:end, bgn:end], cmap='jet', origin='lower', aspect='auto')
ax.set_xticks([])
ax.set_yticks([])
axl = plt.subplot(gs[1,0])
plt.plot(range(end-bgn), eig[bgn:end] )
plt.xlim(0, end-bgn)
plt.xlabel('Eigenvector values')
ticks = range(bgn, end+1, 100)
ticklabels = ['{} Kb'.format(x) for x in ticks]
plt.xticks(ticks, ticklabels)
print('')
Explanation: <a id="meta"></a>
7. Compartments and TADs
Go top
7.1 Comparison with compartments
Compartments can usually be found in whole-genome datasets, but we have only chromosome 1. Still, we can try to find some visual signs of compartments.
End of explanation
mtx_Rao = np.genfromtxt('../DATA/ANNOT/Rao_K562_chr1.csv', delimiter=',')
bgn = 0
end = 500
fig = plt.figure(figsize=(10,10))
gs = gridspec.GridSpec(2, 1, height_ratios=[20,2])
gs.update(wspace=0.0, hspace=0.0)
ax = plt.subplot(gs[0,0])
ax.matshow(mtx_Rao[bgn:end, bgn:end], cmap='jet', origin='lower', aspect='auto', vmax=1000)
ax.set_xticks([])
ax.set_yticks([])
axl = plt.subplot(gs[1,0])
plt.plot(range(end-bgn), eig[bgn:end] )
plt.xlim(0, end-bgn)
plt.xlabel('Eigenvector values')
ticks = range(bgn, end+1, 100)
ticklabels = ['{} Kb'.format(x) for x in ticks]
plt.xticks(ticks, ticklabels)
print('')
Explanation: Seems to be nothing special with compartments. What if we had much better coverage by reads? Let's take a look at the dataset from Rao et al. 2014, GEO GSE63525, HIC069:
End of explanation
# Import Python package
import lavaburst
good_bins = mtx.astype(bool).sum(axis=0) > 1 # We have to mask rows/cols if data is missing
gam=[0.15, 0.25, 0.5, 0.75, 1.0] # set of parameters gamma for TADs calling
segments_dict = {}
for gam_current in gam:
print(gam_current)
S = lavaburst.scoring.armatus_score(mtx, gamma=gam_current, binmask=good_bins)
model = lavaburst.model.SegModel(S)
segments = model.optimal_segmentation() # Positions of TADs for input matrix
segments_dict[gam_current] = segments.copy()
A = mtx.copy()
good_bins = A.astype(bool).sum(axis=0) > 0
At = lavaburst.utils.tilt_heatmap(mtx, n_diags=100)
start_tmp = 0
end_tmp = 500
f = plt.figure(figsize=(20, 6))
ax = f.add_subplot(111)
blues = sns.cubehelix_palette(0.4, gamma=0.5, rot=-0.3, dark=0.1, light=0.9, as_cmap=True)
ax.matshow(np.log(At[start_tmp: end_tmp]), cmap=blues)
cmap = mpl.cm.get_cmap('brg')
gammas = segments_dict.keys()
for n, gamma in enumerate(gammas):
segments = segments_dict[gamma]
for a in segments[:-1]:
if a[1]<start_tmp or a[0]>end_tmp:
continue
ax.plot([a[0]-start_tmp, a[0]+(a[1]-a[0])/2-start_tmp], [0, -(a[1]-a[0])], c=cmap(n/len(gammas)), alpha=0.5)
ax.plot([a[0]+(a[1]-a[0])/2-start_tmp, a[1]-start_tmp], [-(a[1]-a[0]), 0], c=cmap(n/len(gammas)), alpha=0.5)
a = segments[-1]
ax.plot([a[0]-start_tmp, a[0]+(a[1]-a[0])/2-start_tmp], [0, -(a[1]-a[0])], c=cmap(n/len(gammas)), alpha=0.5, label=gamma)
ax.plot([a[0]+(a[1]-a[0])/2-start_tmp, a[1]-start_tmp], [-(a[1]-a[0]), 0], c=cmap(n/len(gammas)), alpha=0.5)
ax.set_xlim([0,end_tmp-start_tmp])
ax.set_ylim([100,-100])
ax.legend(bbox_to_anchor=(1.1, 1.05))
ax.set_aspect(0.5)
#Let's check what are median TAD sized with different parameters:
for gam_current in gam:
segments = segments_dict[gam_current]
tad_lens = segments[:,1]-segments[:,0]
good_lens = (tad_lens>=200/res)&(tad_lens<100)
print(res*1000*np.mean(tad_lens[good_lens]))
Explanation: 7.2 Topologically associating domains (TADs)
For TAD calling we will use the lavaburst package. The code below is based on this example.
End of explanation |
10 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Download and Explore the Data
Step2: <h6> Plot the Data Points </h6>
Step3: Looking at the scatter plot we can analyse that there is a linear relationship between the data points that connect chirps to the temperature and optimal way to infer this knowledge is by fitting a line that best describes the data. Which follows the linear equation
Step4: <div align="right">
<a href="#createvar" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="createvar" class="collapse">
```
X = tf.placeholder(tf.float32, shape=(x_data.size))
Y = tf.placeholder(tf.float32,shape=(y_data.size))
# tf.Variable call creates a single updatable copy in the memory and efficiently updates
# the copy to reflect any changes in the variable values throughout the scope of the tensorflow session
m = tf.Variable(3.0)
c = tf.Variable(2.0)
# Construct a Model
Ypred = tf.add(tf.multiply(X, m), c)
```
</div>
Create and Run a Session to Visualize the Predicted Line from above Graph
<h6> Feel free to change the values of "m" and "c" in future to check how the initial position of line changes </h6>
Step5: <div align="right">
<a href="#matmul1" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul1" class="collapse">
```
pred = session.run(Ypred, feed_dict={X
Step6: <div align="right">
<a href="#matmul12" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul12" class="collapse">
```
# normalization factor
nf = 1e-1
# setting up the loss function
loss = tf.reduce_mean(tf.squared_difference(Ypred*nf,Y*nf))
```
</div>
Define an Optimization Graph to Minimize the Loss and Training the Model
Step7: <div align="right">
<a href="#matmul13" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul13" class="collapse">
```
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
#optimizer = tf.train.AdagradOptimizer(0.01 )
# pass the loss function that optimizer should optimize on.
train = optimizer.minimize(loss)
```
</div>
Initialize all the variables again
Step8: Run session to train and predict the values of 'm' and 'c' for different training steps along with storing the losses in each step
Get the predicted m and c values by running a session that trains the linear model. Also collect the loss at the different steps to print and plot.
Step9: <div align="right">
<a href="#matmul18" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul18" class="collapse">
```
# run a session to train , get m and c values with loss function
_, _m , _c,_l = session.run([train, m, c,loss],feed_dict={X
Step10: <div align="right">
<a href="#matmul199" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul199" class="collapse">
```
plt.plot(losses[ | Python Code:
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
pd.__version__
Explanation: <a href="https://www.bigdatauniversity.com"><img src = "https://ibm.box.com/shared/static/jvcqp2iy2jlx2b32rmzdt0tx8lvxgzkp.png" width = 300, align = "center"></a>
<h1 align=center> <font size = 5> Exercise-Linear Regression with TensorFlow </font></h1>
This exercise is about modelling a linear relationship between "chirps of a cricket" and ground temperature.
In 1948, G. W. Pierce mentioned in his book "Songs of Insects" that we can predict temperature by listening to the frequency of the songs (chirps) made by striped crickets. He recorded the change in the behaviour of crickets by counting the number of chirps they made at several different temperatures, and found that there is a pattern in the way crickets respond to the rate of change in ground temperature between 60 and 100 degrees Fahrenheit. He also found that crickets did not sing
above or below this temperature range.
This data is derived from the above-mentioned book, and the aim is to fit a linear model and predict the "best fit line" for the given "Chirps (per 15 seconds)" in Column 'A' and the corresponding "Temperatures (Fahrenheit)" in Column 'B' using TensorFlow, so that one could easily tell what the temperature is just by listening to the songs of a cricket.
Let's import TensorFlow and the Python dependencies
End of explanation
#downloading dataset
!wget -nv -O ../data/PierceCricketData.csv https://ibm.box.com/shared/static/fjbsu8qbwm1n5zsw90q6xzfo4ptlsw96.csv
df = pd.read_csv("../data/PierceCricketData.csv")
df.head()
Explanation: Download and Explore the Data
End of explanation
%matplotlib inline
x_data, y_data = (df["Chirps"].values,df["Temp"].values)
# plots the data points
plt.plot(x_data, y_data, 'ro')
# label the axis
plt.xlabel("# Chirps per 15 sec")
plt.ylabel("Temp in Farenhiet")
Explanation: <h6> Plot the Data Points </h6>
End of explanation
# Create place holders and Variables along with the Linear model.
m = tf.Variable(3, dtype=tf.float32)
c = tf.Variable(2, dtype=tf.float32)
x = tf.placeholder(dtype=tf.float32, shape=x_data.size)
y = tf.placeholder(dtype=tf.float32, shape=y_data.size)
# Linear model
y_pred = m * x + c
Explanation: Looking at the scatter plot, we can see that there is a linear relationship between the data points connecting chirps to temperature, and the optimal way to capture this relationship is to fit a line that best describes the data, which follows the linear equation:
#### Ypred = m X + c
We have to estimate the values of the slope 'm' and the intercept 'c' to fit a line, where X is the "Chirps" and Ypred is the "Predicted Temperature" in this case.
Create a Data Flow Graph using TensorFlow
Model the above equation by assigning arbitrary values of your choice for slope "m" and intercept "c" which can predict the temp "Ypred" given Chirps "X" as input.
example m=3 and c=2
Also, create a placeholder for the actual temperature "Y", which we will need for optimization to estimate the actual values of the slope and intercept.
End of explanation
#create session and initialize variables
session = tf.Session()
session.run(tf.global_variables_initializer())
#get prediction with initial parameter values
y_vals = session.run(y_pred, feed_dict={x: x_data})
#Your code goes here
plt.plot(x_data, y_vals, label='Predicted')
plt.scatter(x_data, y_data, color='red', label='GT')
Explanation: <div align="right">
<a href="#createvar" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="createvar" class="collapse">
```
X = tf.placeholder(tf.float32, shape=(x_data.size))
Y = tf.placeholder(tf.float32,shape=(y_data.size))
# tf.Variable call creates a single updatable copy in the memory and efficiently updates
# the copy to reflect any changes in the variable values throughout the scope of the tensorflow session
m = tf.Variable(3.0)
c = tf.Variable(2.0)
# Construct a Model
Ypred = tf.add(tf.multiply(X, m), c)
```
</div>
Create and Run a Session to Visualize the Predicted Line from above Graph
<h6> Feel free to change the values of "m" and "c" in future to check how the initial position of line changes </h6>
End of explanation
loss = tf.reduce_mean(tf.squared_difference(y_pred*0.1, y*0.1))
Explanation: <div align="right">
<a href="#matmul1" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul1" class="collapse">
```
pred = session.run(Ypred, feed_dict={X:x_data})
#plot initial prediction against datapoints
plt.plot(x_data, pred)
plt.plot(x_data, y_data, 'ro')
# label the axis
plt.xlabel("# Chirps per 15 sec")
plt.ylabel("Temp in Farenhiet")
```
</div>
Define a Graph for Loss Function
The essence of estimating the values for "m" and "c" lies in minimizing the difference between the predicted "Ypred" and actual "Y" temperature values, which is defined in the form of the mean squared error loss function.
$$ loss = \frac{1}{n}\sum_{i=1}^n{[Ypred_i - {Y}_i]^2} $$
Note: There are also other ways to model the loss function, based on the distance metric between the predicted and actual temperature values. For this exercise the mean squared error criterion is used.
End of explanation
# Your code goes here
optimizer = tf.train.GradientDescentOptimizer(0.01)
train_op = optimizer.minimize(loss)
Explanation: <div align="right">
<a href="#matmul12" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul12" class="collapse">
```
# normalization factor
nf = 1e-1
# setting up the loss function
loss = tf.reduce_mean(tf.squared_difference(Ypred*nf,Y*nf))
```
</div>
Define an Optimization Graph to Minimize the Loss and Training the Model
End of explanation
session.run(tf.global_variables_initializer())
Explanation: <div align="right">
<a href="#matmul13" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul13" class="collapse">
```
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
#optimizer = tf.train.AdagradOptimizer(0.01 )
# pass the loss function that optimizer should optimize on.
train = optimizer.minimize(loss)
```
</div>
Initialize all the variables again
End of explanation
convergenceTolerance = 0.0001
previous_m = np.inf
previous_c = np.inf
steps = {}
steps['m'] = []
steps['c'] = []
losses=[]
for k in range(10000):
########## Your Code goes Here ###########
_, _l, _m, _c = session.run([train_op, loss, m, c], feed_dict={x: x_data, y: y_data})
steps['m'].append(_m)
steps['c'].append(_c)
losses.append(_l)
    if np.abs(previous_m - _m) <= convergenceTolerance and np.abs(previous_c - _c) <= convergenceTolerance:
print("Finished by Convergence Criterion")
print(k)
print(_l)
break
previous_m = _m
previous_c = _c
Explanation: Run session to train and predict the values of 'm' and 'c' for different training steps along with storing the losses in each step
Get the predicted m and c values by running a session that trains the linear model. Also collect the loss at the different steps to print and plot.
End of explanation
# Your Code Goes Here
plt.plot(losses)
Explanation: <div align="right">
<a href="#matmul18" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul18" class="collapse">
```
# run a session to train , get m and c values with loss function
_, _m , _c,_l = session.run([train, m, c,loss],feed_dict={X:x_data,Y:y_data})
```
</div>
Print the loss function
End of explanation
y_vals_pred = y_pred.eval(session=session, feed_dict={x: x_data})
plt.scatter(x_data, y_vals_pred, marker='x', color='blue', label='Predicted')
plt.scatter(x_data, y_data, label='GT', color='red')
plt.legend()
plt.ylabel('Temperature (Fahrenheit)')
plt.xlabel('# Chirps per 15 s')
session.close()
Explanation: <div align="right">
<a href="#matmul199" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul199" class="collapse">
```
plt.plot(losses[:])
```
</div>
End of explanation |
11 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
xpath always returns a list of results, but there's only one, so we'll use that
Step2: Why is there only one element? I don't know and don't have the time to care, so I'll write
a helper function that always gives me an outline of the HTML (sub)tree that I'm
currently processing.
Step3: It looks like everything we need is in the <tbody>, so we'll grab that.
Step4: There are only <tr> (rows) in here, it's probably the right place.
The first one is the header, the rest should be the countries
Step5: The 3rd column contains the country's name, but also some other crap
Step6: We need to dig deeper, so let's look at the complete HTML of that column | Python Code:
xpath_result = tree.xpath('/html/body/div[3]/div[3]/div[4]/div/table[2]')
table = xpath_result[0]
for elem in table:
print(elem)
Explanation: xpath always returns a list of results, but there's only one, so we'll use that:
End of explanation
def print_outline(tree, indent=0):
    """Print the outline of the given lxml.html tree."""
indent_prefix = indent * ' '
print(indent_prefix + '<' + tree.tag + '>')
for elem in tree.iterchildren():
print_outline(elem, indent=indent+1)
print_outline(table)
Explanation: Why is there only one element? I don't know and don't have the time to care, so I'll write
a helper function that always gives me an outline of the HTML (sub)tree that I'm
currently processing.
End of explanation
table.getchildren()
tbody = table.getchildren()[0]
tbody
for elem in tbody.getchildren():
print(elem.tag, end=' ')
Explanation: It looks like everything we need is in the <tbody>, so we'll grab that.
End of explanation
rows = tbody.getchildren()
header = rows[0]
countries = rows[1:]
print(header.text_content())
countries[0].text_content()
Explanation: There are only <tr> (rows) in here, it's probably the right place.
The first one is the header, the rest should be the countries:
End of explanation
countries[0][2].text_content()
print_outline(countries[0][2])
Explanation: The 3rd column contains the country's name, but also some other crap:
End of explanation
from lxml import etree
etree.tostring(countries[0][2])
for country in countries:
name_column = country[2]
country_link = name_column.find('a') # get the first '<a>' subtree
country_name = country_link.get('title') # get the 'title' attribute of the link
print(country_name)
Explanation: We need to dig deeper, so let's look at the complete HTML of that column:
End of explanation |
Dataset description:
The Python Code Chatbot dataset is a collection of Python code snippets extracted from various publicly available datasets and platforms. It is designed to facilitate training conversational AI models that can understand and generate Python code. The dataset consists of a total of 137,183 prompts, each representing a dialogue between a human and an AI Scientist.
Prompt Card:
Each prompt in the dataset follows a specific format known as the "Prompt Card." The format is as follows:
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Problem described in human language
Python Code:
Human written Code
Prompt lengths range from 201 to 2,590 tokens. The dataset offers a diverse set of conversational scenarios related to Python programming, covering topics such as code execution, debugging, best practices, code optimization, and more.
How to use:
To access the Python Code Chatbot dataset, you can use the Hugging Face "datasets" library. The following code snippet illustrates how to load the dataset:
from datasets import load_dataset
dataset = load_dataset("anujsahani01/TextCodeDepot")
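Once loaded, the available splits and an example row can be inspected directly. The split name below ("train") is an assumption; check what the printed dataset object actually reports:
# Inspect the dataset structure and a single example row
print(dataset)                 # shows the available splits and column names
example = dataset["train"][0]  # "train" split assumed
print(example)                 # a dict mapping column names to values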
Potential Use Cases:
This dataset can be utilized for various natural language processing tasks such as question-answering, conversational AI, and text generation. Some potential use cases for this dataset include:
- Training chatbots or virtual assistants that can understand and respond to Python-related queries from users.
- Developing AI models capable of generating Python code snippets based on user input or conversational context (see the prompt-formatting sketch after this list).
- Improving code completion systems by incorporating conversational context and user intent.
- Assisting programmers in debugging their Python code by providing relevant suggestions or explanations.
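As a concrete starting point for the fine-tuning use cases above, each row can be flattened into a single training string. The column names used here are assumptions; substitute whatever dataset["train"].column_names reports:
# Sketch: turn each row into one prompt/completion string for fine-tuning (column names are assumptions)
def format_example(row):
    return row["text_prompt"] + "\n" + row["code_prompt"]

formatted = dataset["train"].map(lambda row: {"text": format_example(row)})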
Feedback:
If you have any feedback, please reach out to me at: LinkedIn | GitHub
Your feedback is valuable in improving the quality and usefulness of this dataset.
Author: @anujsahani01
Happy Fine-tuning 🤗