text_prompt (stringlengths 168 – 30.3k) | code_prompt (stringlengths 67 – 124k)
---|---
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lomb-Scargle Periodogram via scipy
Step2: Lomb-Scargle Periodogram via astroML
Step3: Try with more data!
Step4: Try with noisy data
Step5: Automatically detect significant periods
| <ASSISTANT_TASK:>
Python Code:
short_freq = (hdulist[0].header['STEPSIZE'] / (60. * 60. * 24.))
med_freq = (10. / (60. * 24.))
long_freq = 1. / 24.
max_freq = 10. / 24.
import numpy as np
import scipy.signal
import matplotlib.pyplot as plt
from scipy.signal import lombscargle
from astropy.io import fits  # provides fits.open used below (assumes astropy is available)
short_periods = np.linspace(short_freq, med_freq, len(times))
med_periods = np.linspace(med_freq, long_freq, len(times))
long_periods = np.linspace(long_freq, max_freq, len(times))
short_ang_freqs = 2 * np.pi / short_periods
med_ang_freqs = 2 * np.pi / med_periods
long_ang_freqs = 2 * np.pi / long_periods
short_power = lombscargle(np.asarray(times), np.asarray(counts) - np.asarray(counts).mean(), short_ang_freqs)
med_power = lombscargle(np.asarray(times), np.asarray(counts) - np.asarray(counts).mean(), med_ang_freqs)
long_power = lombscargle(np.asarray(times), np.asarray(counts) - np.asarray(counts).mean(), long_ang_freqs)
short_power *= 2 / (len(times) * np.asarray(counts).std() ** 2)
med_power *= 2 / (len(times) * np.asarray(counts).std() ** 2)
long_power *= 2 / (len(times) * np.asarray(counts).std() ** 2)
fig = plt.figure()
ax1 = fig.add_subplot(411)
ax2 = fig.add_subplot(412)
ax3 = fig.add_subplot(413)
ax4 = fig.add_subplot(414)
ax1.minorticks_on()
ax1.plot(times, counts, 'b+')
ax2.plot(short_periods, short_power)
ax3.plot(med_periods, med_power)
ax4.plot(long_periods, long_power)
ax2.set(xlim=(short_freq, med_freq))
ax3.set(xlim=(med_freq, long_freq))
ax4.set(xlim=(long_freq, max_freq))
fig.tight_layout()
from astroML.time_series import lomb_scargle
errors = [0.0001 for item in counts]
short_power = lomb_scargle(times, counts, errors, short_ang_freqs)
med_power = lomb_scargle(times, counts, errors, med_ang_freqs)
long_power = lomb_scargle(times, counts, errors, long_ang_freqs)
fig = plt.figure()
ax1 = fig.add_subplot(411)
ax2 = fig.add_subplot(412)
ax3 = fig.add_subplot(413)
ax4 = fig.add_subplot(414)
ax1.minorticks_on()
ax1.plot(times, counts, 'b+')
ax2.plot(short_periods, short_power)
ax3.plot(med_periods, med_power)
ax4.plot(long_periods, long_power)
ax2.set(xlim=(short_freq, med_freq))
ax3.set(xlim=(med_freq, long_freq))
ax4.set(xlim=(long_freq, max_freq))
fig.tight_layout()
# Open the data and get a subset
counts, times = [], []
for count, time in zip(all_counts, all_times):
if time > 56500:
counts.append(count)
times.append(time)
fig, ax = plt.subplots()
ax.plot(times, counts, 'b+')
short_periods = np.linspace(short_freq, med_freq, len(times))
med_periods = np.linspace(med_freq, long_freq, len(times))
long_periods = np.linspace(long_freq, max_freq, len(times))
short_ang_freqs = 2 * np.pi / short_periods
med_ang_freqs = 2 * np.pi / med_periods
long_ang_freqs = 2 * np.pi / long_periods
short_power = lombscargle(np.asarray(times), np.asarray(counts) - np.asarray(counts).mean(), short_ang_freqs)
med_power = lombscargle(np.asarray(times), np.asarray(counts) - np.asarray(counts).mean(), med_ang_freqs)
long_power = lombscargle(np.asarray(times), np.asarray(counts) - np.asarray(counts).mean(), long_ang_freqs)
short_power *= 2 / (len(times) * np.asarray(counts).std() ** 2)
med_power *= 2 / (len(times) * np.asarray(counts).std() ** 2)
long_power *= 2 / (len(times) * np.asarray(counts).std() ** 2)
fig = plt.figure()
ax1 = fig.add_subplot(411)
ax2 = fig.add_subplot(412)
ax3 = fig.add_subplot(413)
ax4 = fig.add_subplot(414)
ax1.minorticks_on()
ax1.plot(times, counts, 'b+')
ax2.plot(short_periods, short_power)
ax3.plot(med_periods, med_power)
ax4.plot(long_periods, long_power)
ax2.set(xlim=(short_freq, med_freq))
ax3.set(xlim=(med_freq, long_freq))
ax4.set(xlim=(long_freq, max_freq))
fig.tight_layout()
# Open the data and get a subset
hdulist = fits.open('/grp/hst/hstlc/hst13902/outputs/composite/SDSSJ155304.92+354828.6_FUV_G130M_1309_curve.fits', mode='readonly')
subset = np.where(56853 < hdulist[1].data['mjd'])
counts = hdulist[1].data['net'] # use flux for longer timescales, otherwise use net
times = hdulist[1].data['mjd']
counts = counts.byteswap().newbyteorder()
times = times.byteswap().newbyteorder()
# Plot the sample
fig, ax = plt.subplots()
ax.plot(times, counts, 'b+')
short_periods = np.linspace(short_freq, med_freq, len(times))
med_periods = np.linspace(med_freq, long_freq, len(times))
long_periods = np.linspace(long_freq, max_freq, len(times))
short_ang_freqs = 2 * np.pi / short_periods
med_ang_freqs = 2 * np.pi / med_periods
long_ang_freqs = 2 * np.pi / long_periods
short_power = lombscargle(np.asarray(times), np.asarray(counts) - np.asarray(counts).mean(), short_ang_freqs)
med_power = lombscargle(np.asarray(times), np.asarray(counts) - np.asarray(counts).mean(), med_ang_freqs)
long_power = lombscargle(np.asarray(times), np.asarray(counts) - np.asarray(counts).mean(), long_ang_freqs)
short_power *= 2 / (len(times) * np.asarray(counts).std() ** 2)
med_power *= 2 / (len(times) * np.asarray(counts).std() ** 2)
long_power *= 2 / (len(times) * np.asarray(counts).std() ** 2)
fig = plt.figure()
ax1 = fig.add_subplot(411)
ax2 = fig.add_subplot(412)
ax3 = fig.add_subplot(413)
ax4 = fig.add_subplot(414)
ax1.minorticks_on()
ax1.plot(times, counts, 'b+')
ax2.plot(short_periods, short_power)
ax3.plot(med_periods, med_power)
ax4.plot(long_periods, long_power)
ax2.set(xlim=(short_freq, med_freq))
ax3.set(xlim=(med_freq, long_freq))
ax4.set(xlim=(long_freq, max_freq))
fig.tight_layout()
# Open the data and get a subset
hdulist = fits.open('/grp/hst/hstlc/hst13902/outputs/composite/V-KL-UMA_FUV_G160M_1600_curve.fits', mode='readonly')
subset = np.where(56853 < hdulist[1].data['mjd'])
all_counts = hdulist[1].data['net'] # use flux for longer timescales, otherwise use net
all_times = hdulist[1].data['mjd']
counts, times = [], []
for count, time in zip(all_counts, all_times):
if 56853.930 < time < 56853.955:
counts.append(count)
times.append(time)
short_periods = np.linspace(short_freq, med_freq, len(times))
med_periods = np.linspace(med_freq, long_freq, len(times))
long_periods = np.linspace(long_freq, max_freq, len(times))
short_ang_freqs = 2 * np.pi / short_periods
med_ang_freqs = 2 * np.pi / med_periods
long_ang_freqs = 2 * np.pi / long_periods
short_power = lombscargle(np.asarray(times), np.asarray(counts) - np.asarray(counts).mean(), short_ang_freqs)
med_power = lombscargle(np.asarray(times), np.asarray(counts) - np.asarray(counts).mean(), med_ang_freqs)
long_power = lombscargle(np.asarray(times), np.asarray(counts) - np.asarray(counts).mean(), long_ang_freqs)
short_power *= 2 / (len(times) * np.asarray(counts).std() ** 2)
med_power *= 2 / (len(times) * np.asarray(counts).std() ** 2)
long_power *= 2 / (len(times) * np.asarray(counts).std() ** 2)
short_mean = np.mean(short_power)
med_mean = np.mean(med_power)
long_mean = np.mean(long_power)
short_std = np.std(short_power)
med_std = np.std(med_power)
long_std = np.std(long_power)
short_three_sigma = 3 * short_std
med_three_sigma = 3 * med_std
long_three_sigma = 3 * long_std
short_power_three_sigma = np.where(short_power > short_three_sigma)
med_power_three_sigma = np.where(med_power > med_three_sigma)
long_power_three_sigma = np.where(long_power > long_three_sigma)
short_period_three_sigma = np.where(short_power > short_three_sigma)
med_period_three_sigma = np.where(med_power > med_three_sigma)
long_period_three_sigma = np.where(long_power > long_three_sigma)
short_starting_index = short_power_three_sigma[0][0]
med_starting_index = med_power_three_sigma[0][0]
long_starting_index = long_power_three_sigma[0][0]
short_significant_periods = scipy.signal.argrelextrema(short_power[short_power_three_sigma], np.greater)
med_significant_periods = scipy.signal.argrelextrema(med_power[med_power_three_sigma], np.greater)
long_significant_periods = scipy.signal.argrelextrema(long_power[long_power_three_sigma], np.greater)
short_significant_periods_three_sigma = short_periods[[period + short_starting_index for period in short_significant_periods]]
med_significant_periods_three_sigma = med_periods[[period + med_starting_index for period in med_significant_periods]]
long_significant_periods_three_sigma = long_periods[[period + long_starting_index for period in long_significant_periods]]
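# Report the detected 3-sigma significant periods (in days). This print-out is an
# illustrative addition, not part of the original notebook; it only uses names defined above.
print('short-range significant periods [days]:', short_significant_periods_three_sigma)
print('medium-range significant periods [days]:', med_significant_periods_three_sigma)
print('long-range significant periods [days]:', long_significant_periods_three_sigma)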
fig = plt.figure()
ax1 = fig.add_subplot(411)
ax2 = fig.add_subplot(412)
ax3 = fig.add_subplot(413)
ax4 = fig.add_subplot(414)
ax1.minorticks_on()
ax1.plot(times, counts, 'b+')
ax2.plot(short_periods, short_power)
ax2.axhline(short_mean, color='r', linestyle='-')
ax2.axhline(short_three_sigma, color='g', linestyle='-')
for period in short_significant_periods_three_sigma:
ax2.axvline(period, color='k', linestyle='--')
ax2.set(xlim=(short_freq, med_freq))
ax3.plot(med_periods, med_power)
ax3.axhline(med_mean, color='r', linestyle='-')
ax3.axhline(med_three_sigma, color='g', linestyle='-')
for period in med_significant_periods_three_sigma:
    ax3.axvline(period, color='k', linestyle='--')
ax3.set(xlim=(med_freq, long_freq))
ax4.plot(long_periods, long_power)
ax4.axhline(long_mean, color='r', linestyle='-')
ax4.axhline(long_three_sigma, color='g', linestyle='-')
for period in long_significant_periods_three_sigma:
    ax4.axvline(period, color='k', linestyle='--')
ax4.set(xlim=(long_freq, max_freq))
fig.tight_layout()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Checkerboard
Step3: Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.
Step4: Use vizarray to visualize a checkerboard of size=27 with a block size of 5px.
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
def checkerboard(size):
    """Return a 2d checkerboard of 0.0 and 1.0 as a NumPy array."""
check = np.zeros((size,size),float)
check.fill(0.0)
n = 0
while n<(size):
if n % 2 == 0: #For even number rows, start filling 1's at position 0
p = 0
else: #For odd number rows, start filling 1's at position 1
p = 1
while p<(size):
check[n,p] = (1.0) #Fill 1's at position n,p
p = p + 2 #Skip one position in row before filling in a row (Key to the checkerboard pattern)
n = n + 1 #Move to next row
return check
#print (checkerboard(7)) #Was used to test output
#raise NotImplementedError()
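# For comparison only (an illustrative sketch, not part of the original solution):
# the same checkerboard can be built without explicit loops by using index parity
# and NumPy broadcasting.
def checkerboard_vectorized(size):
    idx = np.arange(size)
    return ((idx[:, None] + idx[None, :]) % 2 == 0).astype(float)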
a = checkerboard(4)
assert a[0,0]==1.0
assert a.sum()==8.0
assert a.dtype==np.dtype(float)
assert np.all(a[0,0:5:2]==1.0)
assert np.all(a[1,0:5:2]==0.0)
b = checkerboard(5)
assert b[0,0]==1.0
assert b.sum()==13.0
assert np.all(b.ravel()[0:26:2]==1.0)
assert np.all(b.ravel()[1:25:2]==0.0)
va.set_block_size(10)
va.vizarray(checkerboard(20))
#raise NotImplementedError()
assert True
va.set_block_size(5)
va.vizarray(checkerboard(27))
#raise NotImplementedError()
assert True
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a mock light curve
Step2: Initialize the log posterior function
Step3: Optimize
Step4: Estimate the posterior
Step5: Analysis
Step6: Plot the apparent and true radius ratio posteriors
Step7: Make a corner plot to have a good overview to the posterior space
| <ASSISTANT_TASK:>
Python Code:
%pylab inline
import sys
from corner import corner
sys.path.append('.')
from src.mocklc import MockLC, SimulationSetup
from src.blendlpf import MockLPF
import src.plotting as pl
lc = MockLC(SimulationSetup('M', 0.1, 0.0, 0.0, 'short_transit', cteff=5500, know_orbit=True))
lc.create(wnsigma=[0.001, 0.001, 0.001, 0.001], rnsigma=0.00001, rntscale=0.5, nights=1);
lc.plot();
lpf = MockLPF('Example_1', lc)
lpf.print_parameters(columns=2)
lpf.optimize_global(1000)
lpf.plot_light_curves()
lpf.sample_mcmc(5000, reset=True, repeats=2)
df = lpf.posterior_samples()
pl.joint_radius_ratio_plot(df, fw=13, clim=(0.099, 0.12), htelim=(3570, 3630), ctelim=(2400,3800), blim=(0, 0.5), rlim=(3.8, 5.2));
pl.joint_contamination_plot(df, fw=13, clim=(0, 0.4), htelim=(3570, 3630), ctelim=(2400,3800), blim=(0, 0.5), rlim=(3.8, 5.2));
pl.marginal_radius_ratio_plot(df, bins=60, klim=(0.097, 0.12), figsize=(7,5));
corner(df.iloc[:,2:-3]);
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build a DataFrame where each column holds the host_ids from one scrape and is named with the date of that scrape
Step2: Build a dataframe indexed by the unique IDs from all the dataframes and add the values of the other lists at the corresponding positions, leaving blanks where a host_id was not found
Step3: The last valid entry of each row tells us the last time that host was seen in a scrape
| <ASSISTANT_TASK:>
Python Code:
df1 = pd.read_csv('listings/30042015/30042015.csv', sep = ";")
df2 = pd.read_csv('listings/17072015/17072015.csv', sep = ";")
df3 = pd.read_csv('listings/02102015/02102015.csv', sep = ";")
df4 = pd.read_csv('listings/03012016/03012016.csv', sep = ";")
df5 = pd.read_csv('listings/08122016/08122016.csv', sep = ";")
df6 = pd.read_csv('listings/08042017/08042017.csv', sep = ";")
dfs_l = (df1, df2, df3, df4, df5, df6)
# convert to datetime for each df
for df in dfs_l:
df.host_since = pd.to_datetime(df.host_since, format="%Y-%m-%d")
df.last_scraped = pd.to_datetime(df.last_scraped, format="%Y-%m-%d")
l_hosts = [df['host_id'].values for df in dfs_l]
df_hosts = pd.DataFrame(l_hosts)
df_hosts = df_hosts.T
df_hosts.columns = ['2015-04-30','2015-07-17','2015-10-02','2016-01-03','2016-12-08','2017-04-08']
df_hosts = df_hosts.apply(lambda x: x.sort_values().values)
print ([len(x) for x in l_hosts])
df_hosts.head()
uniq_id=np.sort(np.unique(np.hstack(l_hosts)))
id_df = pd.DataFrame(uniq_id)
id_df.set_index(0, inplace=True)
# could be improved a lot
## Ignasi, don't look :/
for date in tqdm_notebook(df_hosts.columns):
id_df[date]=''
for i in tqdm_notebook(id_df.index):
if np.any(df_hosts[date].isin([i])):
id_df[date].loc[i] = i
else:
id_df[date].loc[i] = np.nan
id_df.head()
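# A vectorized sketch of the same table-building step (illustrative alternative, not part
# of the original notebook); it relies only on the df_hosts and id_df objects created above:
# for date in df_hosts.columns:
#     id_df[date] = np.where(id_df.index.isin(df_hosts[date]), id_df.index, np.nan)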
last_seen = id_df.apply(lambda x: x.last_valid_index(), axis=1) #magic function last_valid_index!
last_seen = pd.DataFrame(last_seen, columns=['host_until'])
last_seen.host_until = pd.to_datetime(last_seen.host_until, format="%Y-%m-%d")
last_seen_dict = last_seen['host_until'].to_dict()
# map the value of the last valid entry onto host_id to obtain "host_until"
listing_tot = pd.concat(dfs_l)
listing_tot['host_until'] = listing_tot.host_id.map(last_seen_dict)
listing_tot.head()
listing_tot.to_csv('listings_host_until.csv',sep=';', index=False)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Add to the function to allow amplitude to be varied and add in an additional slider to vary both f and a
Step2: Climate data
Step3: Plotting some live (ish) earthquake data...
Step4: This is great but one cool enhancement would be to make the size of the point represent the magnitude of the earthquake.
| <ASSISTANT_TASK:>
Python Code:
!conda install -y netcdf4
from netCDF4 import Dataset, num2date, date2num
from numpy import *
import matplotlib.pyplot as plt
%matplotlib inline
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
x = linspace(0, 1, 100) # generates a hundred values between 0 and 1
f = 2
a = 3
plt.plot(x, sin(2*pi*x*f))
def pltsin(f):
plt.plot(x, sin(2*pi*x*f))
pltsin(0.5)
interact(pltsin, f=(1, 10, 0.2))
def pltsina(f, a):
plt.plot(x, a*sin(2*pi*x*f))
plt.ylim(-10.5, 10.5)
interact(pltsina, f=(1, 10, 0.2), a = (1, 10, 0.2))
f=Dataset ('ncep-data/air.sig995.2013.nc') # get individual data set out of the right folder
air = f.variables['air'] # get variable
plt.imshow(air[0,:,:]) # display first timestep
# Create function to browse through the days
def sh(time):
plt.imshow(air[time,:,:])
# Now make it interactive
interact(sh, time=(0, 355, 1))
# Browse variable
def sh(time =0, var='air', year = '2013'):
f=Dataset('ncep-data/'+var+'.sig995.'+year+'.nc')
vv=f.variables[var]
plt.imshow(vv[time,:,:])
#Give a list of variables
variabs =['air', 'uwnd', 'vwnd', 'rhum']
year = ['2013', '2014', '2015']
# Now interact with it
interact(sh, time=(0, 355, 1), year = year, var=variabs)
help(sh)
from mpl_toolkits.basemap import Basemap
# create north polar sterographic projection
m=Basemap(projection='npstere', boundinglat=60, lon_0=0, resolution ='l')
m.fillcontinents(color='gray', lake_color='gray')
m.drawparallels(arange(-80.,81.,20.))
m.drawmeridians(arange(-180.,181.,20.))
m.drawmapboundary(fill_color='white')
# Set up some variables
lon = f.variables['lon'][:]
lat = f.variables['lat'][:]
lon, lat = meshgrid(lon, lat)
x, y = m(lon, lat)
def sh(time =0, var='air', year = '2013'):
f=Dataset('ncep-data/'+var+'.sig995.'+year+'.nc')
vv=f.variables[var]
tt=f.variables['time']
dd=num2date(tt[time], tt.units)
m.fillcontinents(color='gray', lake_color='gray')
m.drawparallels(arange(-80.,81.,20.))
m.drawmeridians(arange(-180.,181.,20.))
m.drawmapboundary(fill_color='white')
cs = m.contourf(x, y, vv[time,:,:]-273.15)
interact(sh, year=year, time=(0,355,1), var=variabs)
my_map = Basemap (projection='merc', lat_0=0, lon_0=30,
resolution='h', area_thresh=1000.0,
llcrnrlon=29, llcrnrlat=-1,
urcrnrlon=31, urcrnrlat=1)
# area threshold states how rivers etc look - scale, resolution sets resolution, llcrnlon etc sets box,
# lat and lon decide where you look
my_map.drawcoastlines()
my_map.drawcountries()
my_map.fillcontinents(color='coral')
my_map.drawmapboundary()
my_map.drawmeridians(arange(0,360,30))
my_map.drawparallels(arange(-90, 90, 30))
lon=30
lat=0
x,y=my_map(lon, lat)
my_map.plot(x, y, 'bo', markersize=7.2)
plt.show() # here the function that decides actually plots
# This just lets the output of the following code samples
# display inline on this page, at an appropirate size
from pylab import rcParams
# Create a simple basemap
my_map = Basemap (projection='ortho', lat_0=50, lon_0=0,
resolution='l', area_thresh=1000.0)
my_map.drawcoastlines()
my_map.drawcountries()
my_map.fillcontinents(color='red', lake_color='gray')
plt.show()
#Check the first few lats and longs
import csv
# Open the earthquake data file.
filename = '1.0_week.csv'
# Create empty lists for the latitudes and longitudes.
lats, lons, mags = [], [], []
# Read through the entire file, skip the first line,
# and pull out just the lats and lons.
with open(filename) as f:
# Create a csv reader object.
reader = csv.reader(f)
# Ignore the header row.
next(reader)
# Store the latitudes and longitudes in the appropriate lists.
for row in reader:
lats.append(float(row[1]))
lons.append(float(row[2]))
mags.append(float(row[4]))
# Display the first 5 lats and lons.
print('lats', lats[0:5])
print('lons', lons[0:5])
print('mags', mags[0:5])
### And now create a plot of these on a map projection
import csv
# Open the earthquake data file.
filename = '1.0_week.csv'
# Create empty lists for the latitudes and longitudes.
lats, lons, mags = [], [], []
# Read through the entire file, skip the first line,
# and pull out just the lats and lons.
with open(filename) as f:
# Create a csv reader object.
reader = csv.reader(f)
# Ignore the header row.
next(reader)
# Store the latitudes and longitudes in the appropriate lists.
for row in reader:
lats.append(float(row[1]))
lons.append(float(row[2]))
mags.append(float(row[4]))
# --- Build Map ---
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
eq_map = Basemap(projection='robin', resolution = 'l', area_thresh = 1000.0,
lat_0=52, lon_0=0)
eq_map.drawcoastlines()
eq_map.drawcountries()
eq_map.fillcontinents(color = 'coral')
eq_map.drawmapboundary()
eq_map.drawmeridians(np.arange(0, 360, 30))
eq_map.drawparallels(np.arange(-90, 90, 30))
min_marker_size = 1
for lon, lat, mags in zip(lons, lats, mags):
x,y = eq_map(lon, lat)
msize = mags * min_marker_size
    if mags >= 5.0:
        eqcolor = 'r'
    elif mags >= 1.0:
        eqcolor = 'g'
    else:
        eqcolor = 'y'
    eq_map.plot(x, y, 'o', color=eqcolor, markersize=msize)
plt.show()
x,y
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The NumPy way
Step2: The Numexpr way
Step3: We were using 10 cores. Did our speedup come from multi-threading or loop-blocking/vectorization?
Step4: Wait, what happened? Why is single-threaded NumExpr slower than NumPy?
Step5: NumExpr changed the computation to double precision while we weren't looking! Floats like 1. are always interpreted as doubles (in Python and NumExpr), and NumExpr uses the highest precision of any operand (in symmetric binary operators, but not ** for some reason). This is in contrast to NumPy, which respects the dtype of the array operands.
Step6: So even in the single-threaded case, we got a 10% speedup. NumExpr achieves this by doing the division and subtraction on small blocks of the array rather than doing the division on the whole array then a second pass to do the subtraction.
Step7: NumExpr is actually slower than NumPy in this simple case; it's possible that the NumExpr overhead isn't worth it for this simple operation. But if we were to switch to multi-threaded we would see a speed-up
Step8: Example
Step9: Numba
Step10: Numba has one main function
Step11: The first time we run the timing test, we get a message that one of the Numba timing tests took way longer than the others. That's because the first time we call the jitted function, it has to be compiled for the dtype of the arguments. The compiled function is cached, though, making subsequent calls faster.
Step12: Numba does let you call ctypes and CFFI functions in nopython mode, as we'll see later.
Step13: So we got an excellent speedup, but we also had to specify the input dtypes. There are dynamic ufuncs that give more flexibility, but sometimes they don't behave as expected with the parallel target.
Step14: Wait, why didn't we get any parallel speedup? In Numba, parallelization only happens when broadcasting, so let's write it in a way that broadcasts over the first dimension
Step15: I've hacked a fairly general solution that achieves parallelism by broadcasting over an array of indices (I call it parallel_bcast). I'm expecting it will be superceded by Numba functionality within a few years, though.
Step16: See below for a non-trivial example where prange fails to live up to its promise, however.
Step17: So we cut the runtime of the Numpy implementation by 1/3, and we didn't even consider the memory usage. The Numpy implementation doubles the memory usage of the input array, since it has to construct the radii. This can be a big problem when dealing with $2048^3$ FFT meshes, for example.
Step18: As we expected, the race condition caused the parallel version to give the wrong answer. In fact, it doesn't even give a consistent answer from run to run
Step19: We can write a parallel-safe version by giving each k its own temporary histogram space to write into
Step20: Finally, how do the timings stack up?
Step21: That didn't go as expected. If each thread is only operating on one element at a time, that could explain the inefficiency. Hopefully it's implemented internally to operate on nz/nthreads elements at a time, though.
Step22: That was about as fast as we expected, but it still gives the wrong answer. Maybe when prange supports nested loops it will be possible to write a correct version of this.
Step25: CFFI
Step26: We can see that we just compiled _gslffilib.so. Here's how we use it
Step27: And here's how we use it with Numba
Step28: There are other ways to use CFFI (ABI vs API, in-line vs out-of-line), but the above is the preferred approach for most applications.
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import numexpr as ne
ne.set_num_threads(10);
rho = np.empty((512,512,512), dtype=np.float32)
rho[:] = np.random.random(rho.shape)
rho_mean = rho.mean(dtype=np.float64).astype(np.float32) # Use double precision for intermediate accumulations
%%timeit
delta = np.exp((rho/rho_mean - 1.)**2.)
%%timeit
delta = ne.evaluate('exp((rho/rho_mean - 1.)**2.)')
ne.set_num_threads(10)
%timeit ne.evaluate('exp((rho/rho_mean - 1.)**2.)')
ne.set_num_threads(1)
%timeit ne.evaluate('exp((rho/rho_mean - 1.)**2.)')
np_delta = np.exp((rho/rho_mean - 1.)**2.)
ne_delta = ne.evaluate('exp((rho/rho_mean - 1.)**2.)')
print np_delta.dtype
print ne_delta.dtype
ne_delta = ne.evaluate('exp((rho/rho_mean - 1)**2.)')
print ne_delta.dtype
ne.set_num_threads(10)
%timeit ne.evaluate('exp((rho/rho_mean - 1)**2.)')
ne.set_num_threads(1)
%timeit ne.evaluate('exp((rho/rho_mean - 1)**2.)')
rho_double = np.random.random((768,768,768))
rho = rho_double.copy()
print 'NumPy:\n\t',
%timeit np_rho2 = rho**2
rho = rho_double.copy()
print 'NumPy inplace:\n\t',
%timeit global rho; rho **= 2
rho = rho_double.copy()
print 'NumExpr:\n\t',
ne.set_num_threads(1)
%timeit ne_rho2 = ne.evaluate('rho**2')
rho = rho_double.copy()
print 'NumExpr inplace:\n\t',
ne_inplace_rho2 = rho
%timeit ne.evaluate('rho**2', out=ne_inplace_rho2)
ne.set_num_threads(10)
rho = rho_double.copy()
print 'NumExpr inplace, multi-threaded:\n\t',
ne_inplace_rho2 = rho
%timeit ne.evaluate('rho**2', out=ne_inplace_rho2)
rho = np.empty((512,512,512), dtype=np.float32)
rho[:] = np.random.random(rho.shape)
%timeit (np.sin(rho**2) + np.cos(rho**3))**.5
ne.set_num_threads(1)
%timeit ne.evaluate('(sin(rho**2) + cos(rho**3))**.5')
ne.set_num_threads(10)
%timeit ne.evaluate('(sin(rho**2) + cos(rho**3))**.5')
import os
os.environ['NUMBA_NUM_THREADS'] = '10'
import numba as nb
import numpy as np
arr = np.empty((4096,4096), dtype=np.float64)
arr[:] = np.random.random(arr.shape)
def py_sum2d(arr):
M, N = arr.shape
result = 0.0
for i in range(M):
for j in range(N):
result += arr[i,j]
return result
@nb.jit
def nb_sum2d(arr):
M, N = arr.shape
result = 0.0
for i in range(M):
for j in range(N):
result += arr[i,j]
return result
%timeit py_sum2d(arr)
%timeit nb_sum2d(arr)
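# An eagerly-compiled variant (illustrative sketch, not part of the original notebook):
# passing an explicit signature to nb.jit compiles at decoration time, so even the very
# first call avoids the JIT compilation pause mentioned above.
@nb.jit('float64(float64[:,:])', nopython=True)
def nb_sum2d_eager(arr):
    M, N = arr.shape
    result = 0.0
    for i in range(M):
        for j in range(N):
            result += arr[i, j]
    return result

%timeit nb_sum2d_eager(arr)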
import scipy.special
@nb.jit(nopython=True)
def nb_sum2d_j1(arr):
M, N = arr.shape
result = 0.
for i in range(M):
for j in range(N):
result += scipy.special.j1(arr[i,j])
return result
nb_sum2d_j1(arr)
@nb.vectorize([nb.float64(nb.float64, nb.float64)], nopython=True, target='cpu')
def vec_op(a, b):
return np.sin(a**b)**2.
@nb.vectorize([nb.float64(nb.float64, nb.float64)], nopython=True, target='parallel')
def parallel_vec_op(a, b):
return np.sin(a**b)**2.
%timeit vec_op(arr, 2*arr)
%timeit parallel_vec_op(arr, 2*arr)
@nb.guvectorize([(nb.float64[:,:], nb.float64[:,:], nb.float64[:,:])], '(nx,ny),(nx,ny)->(nx,ny)', target='cpu', nopython=True)
def guvec(a, b, c):
M, N = a.shape
for i in range(M):
for j in range(N):
c[i,j] = a[i,j]**b[i,j]
@nb.guvectorize([(nb.float64[:,:], nb.float64[:,:], nb.float64[:,:])], '(nx,ny),(nx,ny)->(nx,ny)', target='parallel', nopython=True)
def parallel_guvec(a, b, c):
M, N = a.shape
for i in range(M):
for j in range(N):
c[i,j] = a[i,j]**b[i,j]
%timeit guvec(arr, 2*arr)
%timeit parallel_guvec(arr, 2*arr)
@nb.guvectorize([(nb.float64[:], nb.float64[:], nb.float64[:])], '(ny),(ny)->(ny)', target='parallel', nopython=True)
def parallel_guvec(a, b, c):
N, = a.shape
for j in range(N):
c[j] = a[j]**b[j]
%timeit parallel_guvec(arr, 2*arr)
@nb.jit(nopython=True, parallel=True)
def nb_prange(a, b):
M, N = a.shape
c = np.empty_like(a)
for i in nb.prange(M):
for j in range(N):
c[i,j] = a[i,j]**b[i,j]
%timeit nb_prange(arr, 2*arr)
def py_radial_hist(values, box, bin_edges):
nx,ny,nz = values.shape
X,Y,Z = np.ogrid[0:box:box/nx, 0:box:box/ny, 0:box:box/nz]
radii = X**2 + Y**2 + Z**2
return np.histogram(radii, bins=bin_edges**2, weights=values)
@nb.jit(nopython=True)
def nb_radial_hist(values, boxsize, bin_edges):
nx,ny,nz = values.shape
histogram = np.zeros(len(bin_edges)-1)
# Do binning with squared distances
bin_edges = bin_edges**2
nbp1 = len(bin_edges)
for i in range(nx):
dx2 = (boxsize/nx*i)**2
for j in range(ny):
dy2 = (boxsize/ny*j)**2
for k in range(nz):
dz2 = (boxsize/nz*k)**2
dist = dx2 + dy2 + dz2
if dist < bin_edges[0] or dist > bin_edges[-1]:
continue
for b in range(1,nbp1):
if dist < bin_edges[b]:
histogram[b-1] += values[i,j,k]
break
else: # last bin is closed
histogram[-1] += values[i,j,k]
return histogram
box = 1.
bin_edges = np.linspace(0,box,100)
values = np.random.random((512,512,512))
%timeit py_radial_hist(values, box, bin_edges)
%timeit nb_radial_hist(values, box, bin_edges)
@nb.jit(nopython=True, parallel=True)
def BAD_nb_parallel_radial_hist(values, boxsize, bin_edges):
nx,ny,nz = values.shape
histogram = np.zeros(len(bin_edges)-1)
# parallel=True quirk: some versions wouldn't compile without squaring in-place
bin_edges = bin_edges**2
nbp1 = len(bin_edges)
for i in range(nx):
for j in range(ny):
# another quirk: prange must be in inner loop
for k in nb.prange(nz):
dx2 = (boxsize/nx*i)**2
dy2 = (boxsize/ny*j)**2
dz2 = (boxsize/nz*k)**2
dist = dx2 + dy2 + dz2
if dist < bin_edges[0] or dist > bin_edges[-1]:
continue
for b in range(1,nbp1):
if dist < bin_edges[b]:
histogram[b-1] += values[i,j,k]
break
else: # last bin is closed
histogram[-1] += values[i,j,k]
# also some versions wouldn't compile without a return!
return histogram
py_answer = py_radial_hist(values, box, bin_edges)[0]
nb_answer = nb_radial_hist(values, box, bin_edges)
BAD_parallel_answer = BAD_nb_parallel_radial_hist(values, box, bin_edges)
print np.allclose(py_answer, nb_answer)
print np.allclose(py_answer, BAD_parallel_answer)
BAD_parallel_answer2 = BAD_nb_parallel_radial_hist(values, box, bin_edges)
print np.allclose(BAD_parallel_answer, BAD_parallel_answer2)
@nb.jit(nopython=True, parallel=True)
def nb_parallel_radial_hist(values, boxsize, bin_edges):
nx,ny,nz = values.shape
histogram = np.zeros((nz, len(bin_edges)-1))
bin_edges = bin_edges**2
nbp1 = len(bin_edges)
for i in range(nx):
for j in range(ny):
# prange only works on inner loop
for k in nb.prange(nz):
dx2 = (boxsize/nx*i)**2
dy2 = (boxsize/ny*j)**2
dz2 = (boxsize/nz*k)**2
dist = dx2 + dy2 + dz2
if dist < bin_edges[0] or dist > bin_edges[-1]:
continue
for b in range(1,nbp1):
if dist < bin_edges[b]:
histogram[k, b-1] += values[i,j,k]
break
else: # last bin is closed
histogram[k, -1] += values[i,j,k]
# Silly! This could be written as
# reduced_hist = histogram.sum(axis=0)
# but Numba auto-parallelization doesn't support axis reductions
reduced_hist = np.zeros(len(bin_edges)-1)
for b in range(len(reduced_hist)):
for k in range(nz):
reduced_hist[b] += histogram[k, b]
return reduced_hist
parallel_answer = nb_parallel_radial_hist(values, box, bin_edges)
print np.allclose(py_answer, parallel_answer)
%timeit nb_radial_hist(values, box, bin_edges)
%timeit BAD_nb_parallel_radial_hist(values, box, bin_edges)
%timeit nb_parallel_radial_hist(values, box, bin_edges)
@nb.jit(nopython=True, parallel=True)
def BAD_nb_parallel_radial_hist_flat(values, boxsize, bin_edges):
nx,ny,nz = values.shape
histogram = np.zeros(len(bin_edges)-1)
bin_edges = bin_edges**2
nbp1 = len(bin_edges)
for x in nb.prange(nx*ny*nz):
i = x / (ny*nz)
j = (x % (ny*nz)) / nz
k = x % nz
dx2 = (boxsize/nx*i)**2
dy2 = (boxsize/ny*j)**2
dz2 = (boxsize/nz*k)**2
dist = dx2 + dy2 + dz2
if dist < bin_edges[0] or dist > bin_edges[-1]:
continue
for b in range(1,nbp1):
if dist < bin_edges[b]:
histogram[b-1] += values[i,j,k]
break
else: # last bin is closed
histogram[-1] += values[i,j,k]
return histogram
%timeit BAD_nb_parallel_radial_hist_flat(values, box, bin_edges)
import Abacus.Analysis.PowerSpectrum.Histogram as AbacusHistogram
#reload(AbacusHistogram.Histogram.)
# Pre-compute the mean radius of the bin and the counts per bin
bin_info = AbacusHistogram.RadialBinGrid(box, values, bin_edges)
%timeit AbacusHistogram.RadialBinGrid(box, values, bin_edges, bin_info=bin_info)
%%writefile build_ffilib.py
if __name__ == '__main__':
# Compile the FFI lib
import cffi
ffibuilder = cffi.FFI()
    ffibuilder.set_source('_gslffilib',
        r"""
        #include <gsl/gsl_sf_bessel.h>   // This gets compiled
        """, libraries=['gsl', 'gslcblas'])
    ffibuilder.cdef("""
        double gsl_sf_bessel_j1 (double x); // This can be copied straight from the man page and is parsed by CFFI
    """)
ffibuilder.compile(verbose=True)
!python build_ffilib.py
%ls *ffi*
import _gslffilib
_gslffilib.lib.gsl_sf_bessel_j1(1.)
import numpy as np
import numba as nb
import numba.cffi_support
numba.cffi_support.register_module(_gslffilib)
gsl_sf_bessel_j1 = _gslffilib.lib.gsl_sf_bessel_j1
import scipy.special
@nb.jit(nopython=True)
def nb_sum2d_cffi(arr):
M, N = arr.shape
result = 0.
for i in range(M):
for j in range(N):
result += gsl_sf_bessel_j1(arr[i,j])
return result
nb_sum2d_cffi(np.random.random((128,128)))
import ctypes as ct
libgsl = ct.cdll.LoadLibrary("libgsl.so")
# Set the argument and return types
libgsl.gsl_sf_bessel_j1.restype = ct.c_double
libgsl.gsl_sf_bessel_j1.argtypes = [ct.c_double]
libgsl.gsl_sf_bessel_j1(1.)
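# Numba can also call this ctypes function in nopython mode (illustrative sketch, not part
# of the original notebook; it reuses the libgsl handle configured above):
gsl_j1_ctypes = libgsl.gsl_sf_bessel_j1  # bind once so the jitted code sees a plain ctypes function

@nb.jit(nopython=True)
def nb_sum2d_ctypes(arr):
    M, N = arr.shape
    result = 0.
    for i in range(M):
        for j in range(N):
            result += gsl_j1_ctypes(arr[i, j])
    return result

nb_sum2d_ctypes(np.random.random((128, 128)))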
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise
Step2: No more setup needed! We can run the simulation and plot our observables.
Step4: The Mean Square Displacement of an active particle is characterized by a longer ballistic regime and an increased diffusion coefficient for longer lag times. In the overdamped limit it is given by
Step5: Before we go to the second part, it is important to clear the state of the system.
Step6: Rectification
Step7: Exercise
Step8: Exercise
Step9: Even though the potential energy inside the geometry is 0 in every part of the accessible region, the active particles are clearly not Boltzmann distributed (homogenous density). Instead, they get funneled into the right half, showing the inapplicability of equilibrium statistical mechanics.
Step10: Hydrodynamics of self-propelled particles
Step11: Exercise
Step12: Exercise
Step13: We can also export the particle and fluid data to .vtk format to display the results with a visualization software like ParaView.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 18})
import tqdm
import numpy as np
import espressomd.observables
import espressomd.accumulators
espressomd.assert_features(
["ENGINE", "ROTATION", "MASS", "ROTATIONAL_INERTIA", "CUDA"])
ED_PARAMS = {'time_step': 0.01,
'box_l': 3*[10.],
'skin': 0.4,
'active_velocity': 5,
'kT': 1,
'gamma': 1,
'gamma_rotation': 1,
'mass': 0.1,
'rinertia': 3*[1.],
'corr_tmax': 100}
ED_N_SAMPLING_STEPS = 5000000
system = espressomd.System(box_l=ED_PARAMS['box_l'])
system.cell_system.skin = ED_PARAMS['skin']
system.time_step = ED_PARAMS['time_step']
pos_obs = espressomd.observables.ParticlePositions(
ids=[part_act.id, part_pass.id])
msd = espressomd.accumulators.Correlator(obs1=pos_obs,
corr_operation="square_distance_componentwise",
delta_N=1,
tau_max=ED_PARAMS['corr_tmax'],
tau_lin=16)
system.auto_update_accumulators.add(msd)
vel_obs = espressomd.observables.ParticleVelocities(
ids=[part_act.id, part_pass.id])
vacf = espressomd.accumulators.Correlator(obs1=vel_obs,
corr_operation="componentwise_product",
delta_N=1,
tau_max=ED_PARAMS['corr_tmax'],
tau_lin=16)
system.auto_update_accumulators.add(vacf)
ang_obs = espressomd.observables.ParticleAngularVelocities(
ids=[part_act.id, part_pass.id])
avacf = espressomd.accumulators.Correlator(obs1=ang_obs,
corr_operation="componentwise_product",
delta_N=1,
tau_max=ED_PARAMS['corr_tmax'],
tau_lin=16)
system.auto_update_accumulators.add(avacf)
for i in tqdm.tqdm(range(100)):
system.integrator.run(int(ED_N_SAMPLING_STEPS/100))
system.auto_update_accumulators.remove(msd)
msd.finalize()
system.auto_update_accumulators.remove(vacf)
vacf.finalize()
system.auto_update_accumulators.remove(avacf)
avacf.finalize()
taus_msd = msd.lag_times()
msd_result = msd.result()
msd_result = np.sum(msd_result, axis=2)
taus_vacf = vacf.lag_times()
vacf_result = np.sum(vacf.result(), axis=2)
taus_avacf = avacf.lag_times()
avacf_result = np.sum(avacf.result(), axis=2)
fig_msd = plt.figure(figsize=(10, 6))
plt.plot(taus_msd, msd_result[:, 0], label='active')
plt.plot(taus_msd, msd_result[:, 1], label='passive')
plt.xlim((taus_msd[1], None))
plt.loglog()
plt.xlabel('t')
plt.ylabel('MSD(t)')
plt.legend()
plt.show()
def acf_stable_regime(x, y):
    """Remove the noisy tail in autocorrelation functions of finite time series."""
cut = np.argmax(y <= 0.) - 2
assert cut >= 1
return (x[1:cut], y[1:cut])
fig_vacf = plt.figure(figsize=(10, 6))
plt.plot(*acf_stable_regime(taus_vacf, vacf_result[:, 0]), label='active')
plt.plot(*acf_stable_regime(taus_vacf, vacf_result[:, 1]), label='passive')
plt.xlim((taus_vacf[1], None))
plt.loglog()
plt.xlabel('t')
plt.ylabel('VACF(t)')
plt.legend()
plt.show()
fig_avacf = plt.figure(figsize=(10, 6))
plt.plot(*acf_stable_regime(taus_avacf, avacf_result[:, 0]), label='active')
plt.plot(*acf_stable_regime(taus_avacf, avacf_result[:, 1]), label='passive')
plt.xlim((taus_avacf[1], None))
plt.loglog()
plt.xlabel('t')
plt.ylabel('AVACF(t)')
plt.legend()
plt.show()
def clear_system(system):
system.part.clear()
system.thermostat.turn_off()
system.constraints.clear()
system.auto_update_accumulators.clear()
system.time = 0.
clear_system(system)
import espressomd.shapes
import espressomd.math
RECT_PARAMS = {'length': 100,
'radius': 20,
'funnel_inner_radius': 3,
'funnel_angle': np.pi / 4.0,
'funnel_thickness': 0.1,
'n_particles': 500,
'active_velocity': 7,
'time_step': 0.01,
'wca_sigma': 0.5,
'wca_epsilon': 0.1,
'skin': 0.4,
'kT': 0.1,
'gamma': 1.,
'gamma_rotation': 1}
RECT_STEPS_PER_SAMPLE = 100
RECT_N_SAMPLES = 500
TYPES = {'particles': 0,
'boundaries': 1}
box_l = np.array(
[RECT_PARAMS['length'], 2*RECT_PARAMS['radius'], 2*RECT_PARAMS['radius']])
system.box_l = box_l
system.cell_system.skin = RECT_PARAMS['skin']
system.time_step = RECT_PARAMS['time_step']
system.thermostat.set_langevin(
kT=RECT_PARAMS['kT'], gamma=RECT_PARAMS['gamma'], gamma_rotation=RECT_PARAMS['gamma_rotation'], seed=42)
cylinder = espressomd.shapes.Cylinder(
center=0.5 * box_l,
axis=[1, 0, 0], radius=RECT_PARAMS['radius'], length=RECT_PARAMS['length'], direction=-1)
system.constraints.add(shape=cylinder, particle_type=TYPES['boundaries'])
# Setup walls
wall = espressomd.shapes.Wall(dist=0, normal=[1, 0, 0])
system.constraints.add(shape=wall, particle_type=TYPES['boundaries'])
wall = espressomd.shapes.Wall(dist=-RECT_PARAMS['length'], normal=[-1, 0, 0])
system.constraints.add(shape=wall, particle_type=TYPES['boundaries'])
funnel_length = (RECT_PARAMS['radius']-RECT_PARAMS['funnel_inner_radius']
)/np.tan(RECT_PARAMS['funnel_angle'])
com_deviations = list()
times = list()
def moving_average(data, window_size):
return np.convolve(data, np.ones(window_size), 'same') / window_size
smoothing_window = 10
com_smoothed = moving_average(com_deviations, smoothing_window)
fig_rect = plt.figure(figsize=(10, 6))
plt.plot(times[smoothing_window:-smoothing_window],
com_smoothed[smoothing_window:-smoothing_window])
plt.xlabel('t')
plt.ylabel('center of mass deviation')
plt.show()
clear_system(system)
import espressomd.lb
HYDRO_PARAMS = {'box_l': 3*[25],
'time_step': 0.01,
'skin': 1,
'agrid': 1,
'dens': 1,
'visc': 1,
'gamma': 1,
'mass': 5,
'dipole_length': 2,
'active_force': 0.1,
'mode': 'pusher'}
HYDRO_N_STEPS = 2000
system.box_l = HYDRO_PARAMS['box_l']
system.cell_system.skin = HYDRO_PARAMS['skin']
system.time_step = HYDRO_PARAMS['time_step']
system.min_global_cut = HYDRO_PARAMS['dipole_length']
box_l = np.array(HYDRO_PARAMS['box_l'])
pos = box_l/2.
pos[2] = -10.
system.integrator.run(HYDRO_N_STEPS)
vels = np.squeeze(lbf[:, int(system.box_l[1]/2), :].velocity)
vel_abs = np.linalg.norm(vels, axis=2)
lb_shape = lbf.shape
xs, zs = np.meshgrid(np.linspace(0.5, box_l[0] - 0.5, num=lb_shape[0]),
np.linspace(0.5, box_l[2] - 0.5, num=lb_shape[2]))
fig_vels, ax_vels = plt.subplots(figsize=(10, 6))
im = plt.pcolormesh(vel_abs.T, cmap='YlOrRd')
plt.quiver(xs, zs, vels[:, :, 0].T, vels[:, :, 2].T, angles='xy', scale=0.005)
circ = plt.Circle(particle.pos_folded[[0, 2]], 0.5, color='blue')
ax_vels.add_patch(circ)
ax_vels.set_aspect('equal')
plt.xlabel('x')
plt.ylabel('z')
cb = plt.colorbar(im, label=r'$|v_{\mathrm{fluid}}|$')
plt.show()
lbf.write_vtk_velocity('./fluid.vtk')
system.part.writevtk('./particle.vtk')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Allow the output of plotting commands to be displayed in the notebook
Step3: Function read_data
Step5: Function main
Step6: Run main
| <ASSISTANT_TASK:>
Python Code:
import xarray as xr
import numpy as np
import os, sys
import matplotlib.pyplot as plt
import cartopy
import cartopy.crs as ccrs
%matplotlib inline
def read_data(file_name):
    """Read netcdf file and return variables:
    rlat, rlon, var, px and py.
    """
# read the dataset
ds = xr.open_dataset(file_name)
# retrieve the data of variable tas, coordinate variables rlat and rlon, and rotated pole
var = ds.tas[0,:,:]
rlat = ds.rlat[:]
rlon = ds.rlon[:]
pole = ds.rotated_pole
try:
# retrieve attribute grid_north_pole_longitude
if hasattr(pole,'grid_north_pole_longitude'):
px = pole.attrs['grid_north_pole_longitude']
# retrieve attribute grid_north_pole_latitude
if hasattr(pole,'grid_north_pole_latitude'):
py = pole.attrs['grid_north_pole_latitude']
except:
print('Unexpected error:', sys.exc_info()[0])
raise
return rlon, rlat, var, px, py
def main():
    """Draw variable tas on map using the RotatedPole projection.
    The coordinate variables rlat and rlon are required.
    """
dir_name = 'data/'
file_name = 'rotated_curvilinear_data.nc'
fname = os.path.join(dir_name,file_name)
# read file content and return relevant variables
rlon, rlat, var, pole_lon, pole_lat = read_data(fname)
# initialize plot
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([-46, 70, 20, 75], crs=ccrs.PlateCarree())
# set fill colors for ocean and land areas
ax.add_feature(cartopy.feature.OCEAN, color='white', zorder=0)
ax.add_feature(cartopy.feature.LAND, color='lightgray',zorder=0,
linewidth=0.5, edgecolor='black')
# add gridlines
ax.gridlines(draw_labels=True, linewidth=0.5, color='gray', xlocs=range(-180,180,15), ylocs=range(-90,90,15))
# add coastlines
ax.coastlines(resolution='50m', linewidth=0.3, color='black')
# add title
ax.set_title('Python: rotated curvilinear grid', fontsize=10, fontweight='bold')
# set projection
crs = ccrs.RotatedPole(pole_longitude=pole_lon, pole_latitude=pole_lat)
# contour fill plot
ax.contourf(rlon, rlat, var, levels=15, cmap='RdYlBu_r', transform=crs)
# save the plot output to PNG file (and display it in notebook if '%matplotlib inline' is set)
plt.savefig('Py_rotated_curvilinear_grid_1.png', bbox_inches='tight', dpi=200)
if __name__ == '__main__':
main()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: let's go through it step by step
Step2: it looks like this is the expression that fails on 2016. That is because this column is not interpreted as a real date. There are two options: either convert it to a date, or cut the year out manually. There is a problem with the first approach
Step3: so the only way is manual brute force...
Step4: now it should work - this does not substantially change the visualization
Step5: back to the original
Step6: Have a look at the pandas copy and deepcopy functions. Written like this it does not copy the dataframe, it only changes the pointers.
| <ASSISTANT_TASK:>
Python Code:
df
df=pd.read_csv(csv_path)
df[(df[u'year'] <= 2016)]
print pd.Timestamp.min
print pd.Timestamp.max
year2=[]
for i in df['year']:
try: year2.append(int(i[6:10]))
except: year2.append(np.nan)
df['year']=year2
df[(df[u'year'] <= 2016)]
df = df[(df[u'reclat'] != 0.0) & (df[u'reclong'] != 0.0) & (df[u'reclong'] >= -180) & (df[u'reclong'] <= 180) & (df[u'year'] >= 860) & (df[u'year'] <= 2016)]
df.head()
df=df.drop(u'id', axis=1)
df.dropna(inplace=True)
valid=df
valid.index=[i for i in range(len(valid.index))]
valid.rename(columns={'mass (g)': 'mass'}, inplace = True)
tipus=[i for i in range(len(valid.index))]
orszag=[i for i in range(len(valid.index))]
for i in range(len(valid.index)):
recclass=valid.loc[i]['recclass']
if 'Iron' in recclass:
tipus[i]='Iron'
elif 'Mesosiderite' in recclass or 'Pallasite' in recclass:
tipus[i]='Stony-Iron'
else:
tipus[i]='Stony'
    # Problem with Antarctica!!
if valid.loc[i]['reclat'] <= -60:
orszag[i]='Antarctica'
else:
coordinate=valid.loc[i]['reclat'], valid.loc[i]['reclong']
orszag[i]=rg.get(coordinate)['country']
valid['type']=tipus
valid['country']=orszag
gf=[]
for i in range(len(valid.index)):
seged={}
for j in range(len(valid.loc[i])):
seged[valid.columns[j]]=str(valid.loc[i][j])
gf.append(seged)
gf
json_path='Meteorite_Landings3.json'
file(json_path,'w').write(json.dumps(gf))
# Distribution by country
plt.rcParams['font.size'] = 20
valid_cgroup=valid.groupby('country')
valid_sum=valid_cgroup.size()
valid_sum.sort_values(ascending=True, inplace=True)
valid_sum.plot.pie(figsize=(20, 20))
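# A similar quick look at the distribution by meteorite type (illustrative addition,
# not part of the original notebook; it reuses the 'type' column created above):
valid.groupby('type').size().plot.pie(figsize=(6, 6))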
# Separating observed falls ("Fell") from finds ("Found")
#fell=valid[valid['fall']=="Fell"].copy()
#found=valid[valid['fall']=="Found"].copy()
#found.index=[i for i in range(len(found.index))]
#fell.index=[i for i in range(len(fell.index))]
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear Regression
Step2: Ridge Regression (L2 penalty)
Step3: Lasso (L1 penalty)
Step4: Linear models for classification
Step5: Multi-Class linear classification
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.cross_validation import train_test_split
X, y, true_coefficient = make_regression(n_samples=80, n_features=30, n_informative=10, noise=100, coef=True, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5)
print(X_train.shape)
print(y_train.shape)
from sklearn.linear_model import LinearRegression
linear_regression = LinearRegression().fit(X_train, y_train)
print("R^2 on training set: %f" % linear_regression.score(X_train, y_train))
print("R^2 on test set: %f" % linear_regression.score(X_test, y_test))
from sklearn.metrics import r2_score
print(r2_score(np.dot(X, true_coefficient), y))
plt.figure(figsize=(10, 5))
coefficient_sorting = np.argsort(true_coefficient)[::-1]
plt.plot(true_coefficient[coefficient_sorting], "o", label="true")
plt.plot(linear_regression.coef_[coefficient_sorting], "o", label="linear regression")
plt.legend()
from sklearn.linear_model import Ridge
ridge_models = {}
training_scores = []
test_scores = []
for alpha in [100, 10, 1, .01]:
ridge = Ridge(alpha=alpha).fit(X_train, y_train)
training_scores.append(ridge.score(X_train, y_train))
test_scores.append(ridge.score(X_test, y_test))
ridge_models[alpha] = ridge
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [100, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([100, 10, 1, .01]):
plt.plot(ridge_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
from sklearn.linear_model import Lasso
lasso_models = {}
training_scores = []
test_scores = []
for alpha in [30, 10, 1, .01]:
lasso = Lasso(alpha=alpha).fit(X_train, y_train)
training_scores.append(lasso.score(X_train, y_train))
test_scores.append(lasso.score(X_test, y_test))
lasso_models[alpha] = lasso
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [30, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([30, 10, 1, .01]):
plt.plot(lasso_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
from figures import plot_linear_svc_regularization
plot_linear_svc_regularization()
from sklearn.datasets import make_blobs
X, y = make_blobs(random_state=42)
plt.scatter(X[:, 0], X[:, 1], c=y)
from sklearn.svm import LinearSVC
linear_svm = LinearSVC().fit(X, y)
print(linear_svm.coef_.shape)
print(linear_svm.intercept_.shape)
plt.scatter(X[:, 0], X[:, 1], c=y)
line = np.linspace(-15, 15)
for coef, intercept in zip(linear_svm.coef_, linear_svm.intercept_):
plt.plot(line, -(line * coef[0] + intercept) / coef[1])
plt.ylim(-10, 15)
plt.xlim(-10, 8)
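# Quick numeric check of the fitted one-vs-rest classifier (illustrative addition,
# not part of the original notebook):
print(linear_svm.predict(X[:5]))
print("training accuracy: %f" % linear_svm.score(X, y))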
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Short Tutorial
Step2: Write the trees column to a file
Step3: Get Astral
Step4: Run Astral
Step5: Plot astral species tree
| <ASSISTANT_TASK:>
Python Code:
# conda install ipyrad -c bioconda
# conda install toytree -c eaton-lab
import pandas as pd
import toytree
# load the tree table from CSV
tree_table = pd.read_csv(
"./analysis-treeslider/test.tree_table.csv",
index_col=0,
)
# examine top of table
tree_table.head()
outfile = open("trees.nwk", "w")
outfile.write("\n".join(tree_table.tree.tolist()))
outfile.close()
%%bash
wget -q https://github.com/smirarab/ASTRAL/raw/master/Astral.5.6.3.zip
unzip -qo Astral.5.6.3
%%bash
java -jar Astral/astral.5.6.3.jar -i trees.nwk > astral.tre 2>astral.err
tre = toytree.rtree.coaltree(10)
tre.rotate_node(names=["r0", "r1", "r2"])
tre.draw(node_labels=True);
tre = toytree.tree("astral.tre").root('reference')
tre.draw(node_labels="support", tip_labels_align=True);
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Naive concept of simultaneous deformation
Step2: To divide simple shear deformation with $\gamma$=1 to n incremental steps
Step3: To check that the superposition of those increments gives the total deformation, we can use the numpy allclose function
Step4: Knowing that deformation superposition is not commutative, we can check that the axial ratio of the finite strain resulting from simple shear superposed on pure shear and vice-versa is really different
Step5: Let's try to split those deformations into two increments and mutually mix them
Step6: The results are now closer to each other, but still quite different. So let's split the deformation into many more increments...
Step7: Now it is very close. Let's visualize how the finite strain converges with an increasing number of increments
Step8: Using spatial velocity gradient
Step9: Spatial velocity gradient could be obtained as matrix logarithm of deformation gradient
Step10: The total spatial velocity gradient of simultaneous deformation can be calculated by summation of the individual ones
Step11: Resulting deformation gradient could be calculated as matrix exponential of total spatial velocity gradient
Step12: Let's overlay it on the previous diagram
Step13: Decomposition of spatial velocity gradient
Step14: Check that the decomposition gives the total spatial velocity gradient
Step15: Visualize spatial velocity gradients for rate of deformation tensor
Step16: Visualize spatial velocity gradients for spin tensor
| <ASSISTANT_TASK:>
Python Code:
%pylab inline
from sg2lib import *
gamma = 1
Sx = 2
Fs = array([[1, gamma], [0, 1]])
Fp = array([[Sx, 0], [0, 1/Sx]])
n = 10
Fsi = array([[1, gamma/n], [0, 1]])
print('Incremental deformation gradient:')
print(Fsi)
array_equal(matrix_power(Fsi, n), Fs)
Fpi = array([[Sx**(1/n), 0], [0, Sx**(-1/n)]])
print('Incremental deformation gradient:')
print(Fpi)
allclose(matrix_power(Fpi, n), Fp)
u,s,v = svd(Fs @ Fp)
print('Axial ratio of finite strain resulting from simple shear superposed on pure shear: {}'.format(s[0]/s[1]))
u,s,v = svd(Fp @ Fs)
print('Axial ratio of finite strain resulting from pure shear superposed on simple shear: {}'.format(s[0]/s[1]))
Fsi = array([[1, gamma/2], [0, 1]])
Fpi = array([[Sx**(1/2), 0], [0, Sx**(-1/2)]])
u,s,v = svd(Fsi @ Fpi @ Fsi @ Fpi)
print('Axial ratio of finite strain of superposed increments starting with pure shear: {}'.format(s[0]/s[1]))
u,s,v = svd(Fpi @ Fsi @ Fpi @ Fsi)
print('Axial ratio of finite strain of superposed increments starting with simple shear: {}'.format(s[0]/s[1]))
n = 100
Fsi = array([[1, gamma/n], [0, 1]])
Fpi = array([[Sx**(1/n), 0], [0, Sx**(-1/n)]])
u,s,v = svd(matrix_power(Fsi @ Fpi, n))
print('Axial ratio of finite strain of superposed increments starting with pure shear: {}'.format(s[0]/s[1]))
u,s,v = svd(matrix_power(Fpi @ Fsi, n))
print('Axial ratio of finite strain of superposed increments starting with simple shear: {}'.format(s[0]/s[1]))
arp = []
ars = []
ninc = range(1, 201)
for n in ninc:
Fsi = array([[1, gamma/n], [0, 1]])
Fpi = array([[Sx**(1/n), 0], [0, Sx**(-1/n)]])
u,s,v = svd(matrix_power(Fsi @ Fpi, n))
arp.append(s[0]/s[1])
u,s,v = svd(matrix_power(Fpi @ Fsi, n))
ars.append(s[0]/s[1])
figure(figsize=(16, 4))
semilogy(ninc, arp, 'r', label='Pure shear first')
semilogy(ninc, ars, 'g', label='Simple shear first')
legend()
xlim(1, 200)
xlabel('Number of increments')
ylabel('Finite strain axial ratio');
from scipy.linalg import expm, logm
Lp = logm(Fp)
Ls = logm(Fs)
L = Lp + Ls
F = expm(L)
u,s,v = svd(F)
sar = s[0]/s[1]
print('Axial ratio of finite strain of simultaneous pure shear and simple shear: {}'.format(sar))
arp = []
ars = []
ninc = range(1, 201)
for n in ninc:
Fsi = array([[1, gamma/n], [0, 1]])
Fpi = array([[Sx**(1/n), 0], [0, Sx**(-1/n)]])
u,s,v = svd(matrix_power(Fsi @ Fpi, n))
arp.append(s[0]/s[1])
u,s,v = svd(matrix_power(Fpi @ Fsi, n))
ars.append(s[0]/s[1])
figure(figsize=(16, 4))
semilogy(ninc, arp, 'r', label='Pure shear first')
semilogy(ninc, ars, 'g', label='Simple shear first')
legend()
xlim(1, 200)
axhline(sar)
xlabel('Number of increments')
ylabel('Finite strain axial ratio');
L = logm(Fs)
D = (L + L.T)/2
W = (L - L.T)/2
allclose(D + W, L)
vel_field(D)
vel_field(W)
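# The same decomposition applied to the combined spatial velocity gradient of the
# simultaneous pure shear + simple shear deformation from above (illustrative addition,
# not part of the original notebook):
L_tot = logm(Fs) + logm(Fp)
D_tot = (L_tot + L_tot.T)/2
W_tot = (L_tot - L_tot.T)/2
vel_field(D_tot)
vel_field(W_tot)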
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <div class="alert alert-info"><h4>Note</h4><p>Before applying ICA (or any artifact repair strategy), be sure to observe
Step2: We can get a summary of how the ocular artifact manifests across each channel
Step3: Now we'll do the same for the heartbeat artifacts, using
Step4: Filtering to remove slow drifts
Step5: Fitting and plotting the ICA solution
Step6: Some optional parameters that we could have passed to the
Step7: Here we can pretty clearly see that the first component (ICA000) captures
Step8: <div class="alert alert-info"><h4>Note</h4><p>
Step9: We can also plot some diagnostics of each IC using
Step10: In the remaining sections, we'll look at different ways of choosing which ICs
Step11: Now that the exclusions have been set, we can reconstruct the sensor signals
Step12: Using an EOG channel to select ICA components
Step13: Note that above we used
Step14: The last of these plots is especially useful
Step15: Much better! Now we've captured both ICs that are reflecting the heartbeat
Step16: Selecting ICA components using template matching
Step17: Now let's run
Step18: The first figure shows the template map, while the second figure shows all
Step19: Notice that subject 1 does seem to have an IC that looks like it reflects
Step20: Now we get the message At least 1 IC detected for each subject (which is
Step21: Notice that the first subject has 3 different labels for the IC at index 0
Step22: As a final note, it is possible to extract ICs numerically using the
| <ASSISTANT_TASK:>
Python Code:
import os
import mne
from mne.preprocessing import (ICA, create_eog_epochs, create_ecg_epochs,
corrmap)
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(tmax=60.)
# pick some channels that clearly show heartbeats and blinks
regexp = r'(MEG [12][45][123]1|EEG 00.)'
artifact_picks = mne.pick_channels_regexp(raw.ch_names, regexp=regexp)
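# Optional check (sketch): list the channel names matched by the regular expression,
# to confirm which frontal EEG and MEG channels will be displayed
print([raw.ch_names[idx] for idx in artifact_picks])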
raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
eog_evoked = create_eog_epochs(raw).average()
eog_evoked.apply_baseline(baseline=(None, -0.2))
eog_evoked.plot_joint()
ecg_evoked = create_ecg_epochs(raw).average()
ecg_evoked.apply_baseline(baseline=(None, -0.2))
ecg_evoked.plot_joint()
filt_raw = raw.copy()
filt_raw.load_data().filter(l_freq=1., h_freq=None)
ica = ICA(n_components=15, random_state=97)
ica.fit(filt_raw)
raw.load_data()
ica.plot_sources(raw)
ica.plot_components()
# blinks
ica.plot_overlay(raw, exclude=[0], picks='eeg')
# heartbeats
ica.plot_overlay(raw, exclude=[1], picks='mag')
ica.plot_properties(raw, picks=[0, 1])
ica.exclude = [0, 1] # indices chosen based on various plots above
# ica.apply() changes the Raw object in-place, so let's make a copy first:
reconst_raw = raw.copy()
ica.apply(reconst_raw)
raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
reconst_raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
del reconst_raw
ica.exclude = []
# find which ICs match the EOG pattern
eog_indices, eog_scores = ica.find_bads_eog(raw)
ica.exclude = eog_indices
# barplot of ICA component "EOG match" scores
ica.plot_scores(eog_scores)
# plot diagnostics
ica.plot_properties(raw, picks=eog_indices)
# plot ICs applied to raw data, with EOG matches highlighted
ica.plot_sources(raw)
# plot ICs applied to the averaged EOG epochs, with EOG matches highlighted
ica.plot_sources(eog_evoked)
ica.exclude = []
# find which ICs match the ECG pattern
ecg_indices, ecg_scores = ica.find_bads_ecg(raw, method='correlation',
threshold='auto')
ica.exclude = ecg_indices
# barplot of ICA component "ECG match" scores
ica.plot_scores(ecg_scores)
# plot diagnostics
ica.plot_properties(raw, picks=ecg_indices)
# plot ICs applied to raw data, with ECG matches highlighted
ica.plot_sources(raw)
# plot ICs applied to the averaged ECG epochs, with ECG matches highlighted
ica.plot_sources(ecg_evoked)
# refit the ICA with 30 components this time
new_ica = ICA(n_components=30, random_state=97)
new_ica.fit(filt_raw)
# find which ICs match the ECG pattern
ecg_indices, ecg_scores = new_ica.find_bads_ecg(raw, method='correlation',
threshold='auto')
new_ica.exclude = ecg_indices
# barplot of ICA component "ECG match" scores
new_ica.plot_scores(ecg_scores)
# plot diagnostics
new_ica.plot_properties(raw, picks=ecg_indices)
# plot ICs applied to raw data, with ECG matches highlighted
new_ica.plot_sources(raw)
# plot ICs applied to the averaged ECG epochs, with ECG matches highlighted
new_ica.plot_sources(ecg_evoked)
# clean up memory before moving on
del raw, filt_raw, ica, new_ica
mapping = {
'Fc5.': 'FC5', 'Fc3.': 'FC3', 'Fc1.': 'FC1', 'Fcz.': 'FCz', 'Fc2.': 'FC2',
'Fc4.': 'FC4', 'Fc6.': 'FC6', 'C5..': 'C5', 'C3..': 'C3', 'C1..': 'C1',
'Cz..': 'Cz', 'C2..': 'C2', 'C4..': 'C4', 'C6..': 'C6', 'Cp5.': 'CP5',
'Cp3.': 'CP3', 'Cp1.': 'CP1', 'Cpz.': 'CPz', 'Cp2.': 'CP2', 'Cp4.': 'CP4',
'Cp6.': 'CP6', 'Fp1.': 'Fp1', 'Fpz.': 'Fpz', 'Fp2.': 'Fp2', 'Af7.': 'AF7',
'Af3.': 'AF3', 'Afz.': 'AFz', 'Af4.': 'AF4', 'Af8.': 'AF8', 'F7..': 'F7',
'F5..': 'F5', 'F3..': 'F3', 'F1..': 'F1', 'Fz..': 'Fz', 'F2..': 'F2',
'F4..': 'F4', 'F6..': 'F6', 'F8..': 'F8', 'Ft7.': 'FT7', 'Ft8.': 'FT8',
'T7..': 'T7', 'T8..': 'T8', 'T9..': 'T9', 'T10.': 'T10', 'Tp7.': 'TP7',
'Tp8.': 'TP8', 'P7..': 'P7', 'P5..': 'P5', 'P3..': 'P3', 'P1..': 'P1',
'Pz..': 'Pz', 'P2..': 'P2', 'P4..': 'P4', 'P6..': 'P6', 'P8..': 'P8',
'Po7.': 'PO7', 'Po3.': 'PO3', 'Poz.': 'POz', 'Po4.': 'PO4', 'Po8.': 'PO8',
'O1..': 'O1', 'Oz..': 'Oz', 'O2..': 'O2', 'Iz..': 'Iz'
}
raws = list()
icas = list()
for subj in range(4):
# EEGBCI subjects are 1-indexed; run 3 is a left/right hand movement task
fname = mne.datasets.eegbci.load_data(subj + 1, runs=[3])[0]
raw = mne.io.read_raw_edf(fname)
# remove trailing `.` from channel names so we can set montage
raw.rename_channels(mapping)
raw.set_montage('standard_1005')
# fit ICA
ica = ICA(n_components=30, random_state=97)
ica.fit(raw)
raws.append(raw)
icas.append(ica)
# use the first subject as template; use Fpz as proxy for EOG
raw = raws[0]
ica = icas[0]
eog_inds, eog_scores = ica.find_bads_eog(raw, ch_name='Fpz')
corrmap(icas, template=(0, eog_inds[0]))
for index, (ica, raw) in enumerate(zip(icas, raws)):
fig = ica.plot_sources(raw)
fig.suptitle('Subject {}'.format(index))
corrmap(icas, template=(0, eog_inds[0]), threshold=0.9)
corrmap(icas, template=(0, eog_inds[0]), threshold=0.9, label='blink',
plot=False)
print([ica.labels_ for ica in icas])
icas[3].plot_components(picks=icas[3].labels_['blink'])
icas[3].exclude = icas[3].labels_['blink']
icas[3].plot_sources(raws[3])
template_eog_component = icas[0].get_components()[:, eog_inds[0]]
corrmap(icas, template=template_eog_component, threshold=0.9)
print(template_eog_component)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Template representation variant 1
Step2: Template representation variant 2
Step3: Template representation variant 3
| <ASSISTANT_TASK:>
Python Code:
# import variable setting dictionaries from dkrz data ingest tool chain
# and remove __doc__ strings from dictionary (would clutter PROV graph visualizations)
from provtemplates import workflow_steps
from collections import MutableMapping
from contextlib import suppress
def delete_keys_from_dict(dictionary, keys):
for key in keys:
with suppress(KeyError):
del dictionary[key]
for value in dictionary.values():
if isinstance(value, MutableMapping):
delete_keys_from_dict(value, keys)
workflow_dict = workflow_steps.WORKFLOW_DICT
from provtemplates import provconv
import prov.model as prov
import six
import itertools
from provtemplates import workflow_steps
ns_dict = {
'prov':'http://www.w3.org/ns/prov#',
'var':'http://openprovenance.org/var#',
'vargen':'http://openprovenance.org/vargen#',
'tmpl':'http://openprovenance.org/tmpl#',
'foaf':'http://xmlns.com/foaf/0.1/',
'ex': 'http://example.org/',
'orcid':'http://orcid.org/',
#document.set_default_namespace('http://example.org/0/')
'rdf':'http://www.w3.org/1999/02/22-rdf-syntax-ns#',
'rdfs':'http://www.w3.org/2000/01/rdf-schema#',
'xsd':'http://www.w3.org/2001/XMLSchema#',
'ex1': 'http://example.org/1/',
'ex2': 'http://example.org/2/'
}
prov_doc01 = provconv.set_namespaces(ns_dict,prov.ProvDocument())
prov_doc02 = provconv.set_namespaces(ns_dict,prov.ProvDocument())
prov_doc03 = provconv.set_namespaces(ns_dict,prov.ProvDocument())
prov_doc1 = prov_doc01.bundle("var:data-ingest-wflow")
prov_doc2 = prov_doc02.bundle("var:data-ingest-wflow")
prov_doc3 = prov_doc03.bundle("var:data-ingest-wflow")
prov_doc01.set_default_namespace('http://enes.org/ns/ingest#')
prov_doc02.set_default_namespace('http://enes.org/ns/ingest#')
prov_doc03.set_default_namespace('http://enes.org/ns/ingest#')
def gen_bundles(workflow_dict,prov_doc):
global_in_out = prov_doc.entity('var:wf_doc')
for wflow_step, wflow_stepdict in workflow_dict.items():
nbundle = prov_doc.bundle('var:'+wflow_step)
out_node = nbundle.entity('var:'+wflow_step+'_out')
agent = nbundle.agent('var:'+wflow_step+'_agent')
activity = nbundle.activity('var:'+wflow_step+'_activity')
in_node = nbundle.entity('var:'+wflow_step+'_in')
nbundle.wasGeneratedBy(out_node,activity)
nbundle.used(activity,in_node)
nbundle.wasAssociatedWith(activity,agent)
nbundle.wasDerivedFrom(in_node,out_node)
nbundle.used(activity,global_in_out)
nbundle.wasGeneratedBy(global_in_out,activity)
def in_bundles(workflow_dict,prov_doc):
first = True
out_nodes = []
nbundle = prov_doc
for wflow_step, wflow_stepdict in workflow_dict.items():
#nbundle = prov_doc.bundle('var:'+wflow_step)
out_node = nbundle.entity('var:'+wflow_step+'_out')
agent = nbundle.agent('var:'+wflow_step+'_agent')
activity = nbundle.activity('var:'+wflow_step+'_activity')
if first:
in_node = nbundle.entity('var:'+wflow_step+'_in')
nbundle.used(activity,in_node)
first = False
out_nodes.append((nbundle,out_node,agent,activity))
return out_nodes
def chain_bundles(nodes):
'''
chaining based on "used" activity relationship
'''
i = 1
for (nbundle,out_node,agent,activity) in nodes[1:]:
(prev_bundle,prev_out,prev_agent,prev_activity) = nodes[i-1]
nbundle.used(activity,prev_out)
i += 1
for (nbundle,out_node,agent,activity) in nodes:
nbundle.wasGeneratedBy(out_node,activity)
nbundle.wasAssociatedWith(activity,agent)
def chain_hist_bundles(nodes,prov_doc):
'''
chaining based on "used" activity relationship
add an explicit end_result composing all the generated
intermediate results
'''
i = 1
for (nbundle,out_node,agent,activity) in nodes[1:]:
(prev_bundle,prev_out,prev_agent,prev_activity) = nodes[i-1]
nbundle.used(activity,prev_out)
i += 1
for (nbundle,out_node,agent,activity) in nodes:
nbundle.wasGeneratedBy(out_node,activity)
nbundle.wasAssociatedWith(activity,agent)
wf_out = prov_doc.entity("ex:wf_result")
wf_agent = prov_doc.agent("ex:workflow_handler")
wf_activity = prov_doc.activity("ex:wf_trace_composition")
prov_doc.wasGeneratedBy(wf_out,wf_activity)
prov_doc.wasAssociatedWith(wf_activity,wf_agent)
for (nbundle,out_node,agent,activity) in nodes:
prov_doc.used(wf_activity,out_node)
# generate prov_template options and print provn representation
gen_bundles(workflow_dict,prov_doc01)
print(prov_doc01.get_provn())
%matplotlib inline
prov_doc01.plot()
prov_doc01.serialize('data-ingest1.rdf',format='rdf')
nodes = in_bundles(workflow_dict,prov_doc2)
chain_bundles(nodes)
print(prov_doc02.get_provn())
%matplotlib inline
prov_doc02.plot()
from prov.dot import prov_to_dot
dot = prov_to_dot(prov_doc02)
prov_doc02.serialize('ingest-prov-version2.rdf',format='rdf')
dot.write_png('ingest-prov-version2.png')
gnodes = in_bundles(workflow_dict,prov_doc3)
chain_hist_bundles(gnodes,prov_doc3)
print(prov_doc03.get_provn())
dot = prov_to_dot(prov_doc03)
dot.write_png('ingest-prov-version3.png')
%matplotlib inline
prov_doc03.plot()
prov_doc03.serialize('data-ingest3.rdf',format='rdf')
# ------------------ to be removed --------------------------------------
# generate prov_template options and print provn representation
gen_bundles(workflow_dict,prov_doc1)
print(prov_doc1.get_provn())
nodes = in_bundles(workflow_dict,prov_doc2)
chain_bundles(nodes)
print(prov_doc2.get_provn())
gnodes = in_bundles(workflow_dict,prov_doc3)
chain_hist_bundles(gnodes,prov_doc3)
print(prov_doc3.get_provn())
%matplotlib inline
prov_doc1.plot()
prov_doc2.plot()
prov_doc3.plot()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate label column for the training data
Step2: Create and fit Spark ML model
Step3: Predict whether the aircraft will be late
Step4: Check model performance
| <ASSISTANT_TASK:>
Python Code:
training = sqlContext.read.parquet("s3://zoltanctoth-flights/training.parquet")
test = sqlContext.read.parquet("s3://zoltanctoth-flights/training.parquet")
test.printSchema()
test.first()
training.cache()
test.cache()
from pyspark.sql.types import DoubleType
from pyspark.sql.functions import udf
is_late = udf(lambda delay: 1.0 if delay > 0 else 0.0, DoubleType())
training = training.withColumn("is_late",is_late(training.arrdelay))
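# Quick look (sketch): the new binary label next to the raw arrival delay values
training.select('arrdelay', 'is_late').show(5)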
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler
from pyspark.ml import Pipeline
# Create feature vectors. Ignore arrdelay and its derivative, is_late
feature_assembler = VectorAssembler(
inputCols=[x for x in training.columns if x not in ["is_late","arrdelay"]],
outputCol="features")
reg = LogisticRegression().setParams(
maxIter = 100,
labelCol="is_late",
predictionCol="prediction")
model = Pipeline(stages=[feature_assembler, reg]).fit(training)
predicted = model.transform(test)
predicted.take(1)
predicted = predicted.withColumn("is_late",is_late(predicted.arrdelay))
predicted.crosstab("is_late","prediction").show()
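# Overall accuracy (sketch): fraction of rows where the predicted class matches is_late
n_correct = predicted.filter(predicted.is_late == predicted.prediction).count()
print(n_correct / float(predicted.count()))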
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: And some more specialized dependencies
Step2: Configuration for this figure.
Step3: Open a chest located on a remote globus endpoint and load a remote json configuration file.
Step4: We want to plot the spike depth, which is the 'H' field in the chest.
Step5: Plot the bubble height, the 'H' keys, vs. time.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import interp1d, InterpolatedUnivariateSpline
from scipy.optimize import bisect
import json
from functools import partial
class Foo: pass
from chest import Chest
from slict import CachedSlict
from glopen import glopen, glopen_many
config = Foo()
config.name0 = "HighAspect/HA_base/HA_base"
config.name1 = "HighAspect/HA_visc/HA_visc"
#config.arch_end = "maxhutch#alpha-admin/~/pub/"
config.arch_end = "alcf#dtn_mira/projects/alpha-nek/"
c0 = Chest(path = "{:s}-results".format(config.name0),
open = partial(glopen, endpoint=config.arch_end),
open_many = partial(glopen_many, endpoint=config.arch_end))
sc0 = CachedSlict(c0)
with glopen(
"{:s}.json".format(config.name0), mode='r',
endpoint = config.arch_end,
) as f:
params0 = json.load(f)
c1 = Chest(path = "{:s}-results".format(config.name1),
open = partial(glopen, endpoint=config.arch_end),
open_many = partial(glopen_many, endpoint=config.arch_end))
sc1 = CachedSlict(c1)
with glopen(
"{:s}.json".format(config.name1), mode='r',
endpoint = config.arch_end,
) as f:
params1 = json.load(f)
c0.prefetch(sc0[:,'H'].full_keys())
c0.prefetch(sc0[:,'w_max_z'].full_keys())
c1.prefetch(sc1[:,'H'].full_keys())
c1.prefetch(sc1[:,'w_max_z'].full_keys())
spl0 = InterpolatedUnivariateSpline(sc0[:,'H'].keys(), sc0[:,'H'].values(), k=3)
dHdt0 = spl0.derivative()
Ts0 = np.linspace(sc0[:,'H'].keys()[0], sc0[:,'H'].keys()[-1], 1000)
Vb0 = -dHdt0(Ts0)
Vm0 = np.array([np.max(sc0[t,'w_max_z']) for t in sc0[:,'w_max_z'].keys()])
Vc0 = np.array([sc0[t,'w_max_z'][sc0[t,'w_max_z'].shape[0]//2] for t in sc0[:,'w_max_z'].keys()])  # // keeps the index an integer in Python 3
theory0 = np.sqrt(
params0["atwood"] * params0["g"] * params0["extent_mesh"][0] / np.pi
+ (2.*np.pi*params0["viscosity"] / params0["extent_mesh"][0])**2
) - (2.*np.pi*params0["viscosity"] / params0["extent_mesh"][0])
spl1 = InterpolatedUnivariateSpline(sc1[:,'H'].keys(), sc1[:,'H'].values(), k=3)
dHdt1 = spl1.derivative()
Ts1 = np.linspace(sc1[:,'H'].keys()[0], sc1[:,'H'].keys()[-1], 1000)
Vb1 = -dHdt1(Ts1)
Vm1 = np.array([np.max(sc1[t,'w_max_z']) for t in sc1[:,'w_max_z'].keys()])
Vc1 = np.array([sc1[t,'w_max_z'][sc1[t,'w_max_z'].shape[0]//2] for t in sc1[:,'w_max_z'].keys()])  # // keeps the index an integer in Python 3
theory1 = np.sqrt(
params1["atwood"] * params1["g"] * params1["extent_mesh"][0] / np.pi
+ (2.*np.pi*params1["viscosity"] / params1["extent_mesh"][0])**2
) - (2.*np.pi*params1["viscosity"] / params1["extent_mesh"][0])
fig, axs = plt.subplots(2,1, sharex=True)
axs[0].plot(Ts0, -spl0(Ts0));
axs[0].plot(Ts1, -spl1(Ts1));
#axs[0].plot(sc[:,'H'].keys(), -np.array(sc[:,'H'].values()), 'yo');
axs[0].set_ylabel('Depth (m)')
axs[1].plot(Ts0, Vb0, 'b-');
axs[1].plot(Ts1, Vb1, 'b-');
axs[1].plot(sc0[:,'w_max_z'].keys(), Vm0, 'g-');
axs[1].plot(sc1[:,'w_max_z'].keys(), Vm1, 'g-');
axs[1].plot(sc0[:,'w_max_z'].keys(), Vc0, 'r-');
axs[1].plot(sc1[:,'w_max_z'].keys(), Vc1, 'r-');
axs[1].plot([Ts0[0],Ts0[-1]], [theory0, theory0], 'k--');
axs[1].plot([Ts1[0],Ts1[-1]], [theory1, theory1], 'k--');
axs[1].set_ylabel('Velocity (scaled)')
axs[1].set_xlabel('Time (s)');
plt.savefig('Figure1.eps')
%install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py
%load_ext version_information
%version_information numpy, matplotlib, slict, chest, glopen, globussh
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Peak finding
Step3: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
def find_peaks(a):
    """Find the indices of the local maxima in a sequence."""
# YOUR CODE HERE
#raise NotImplementedError()
ind=[]
#next two if checks end points
if a[0]> a[1]:
ind.append(0)
if a[len(a)-1]>a[len(a)-2]:
ind.append(len(a)-1)
#finds local maxima in string by comparing adjacent
for i in range(1,len(a)-1):
if a[i]>a[i-1] and a[i]>a[i+1]:
ind.append(i)
#sorts by increasing order
return sorted(ind)
find_peaks([2,0,1,0,2,0,1])
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
# YOUR CODE HERE
#raise NotImplementedError()
def pimax(x):
'''uses find_peaks to find the local maxima then finds the space between the maxima and
plots the distribution of space between local maxima'''
pi=np.ones(10000)
for i in range(len(x)):
pi[i]=int(x[i])
m = find_peaks(pi)
dist = np.diff(m)
p = plt.hist(dist,bins=17)
plt.title('Distances Between Local Maxima in First 10000 digtis of $\pi$')
plt.xlabel('Distance Between Local Maxima')
plt.ylabel('Number of Times Occured')
plt.grid(False)
plt.xlim([1,19])
a=range(2,19)
plt.xticks(a[::2])
plt.ylim(0,1100)
plt.show()
pimax(pi_digits_str)
assert True # use this for grading the pi digits histogram
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Algebraic search for fixed points
Step2: Oscillatory fixed point
Step3: What will happen with infinitely many iterations?
| <ASSISTANT_TASK:>
Python Code:
def g(x, alpha, beta):
assert alpha >= 0 and beta >= 0
return (alpha*x)/(1 + (beta * x))
def plot_cobg(x, alpha, beta):
y = np.linspace(x[0],x[1],300)
g_y = g(y, alpha, beta)
cobweb(lambda x: g(x, alpha, beta), y, g_y)
# set up the interactive plot
interact(plot_cobg,
x=widgets.FloatRangeSlider(min=0.01, max=3, step=0.01,
value=[0.02, 3],
continuous_update=False),
alpha=widgets.FloatSlider(min=0.001, max=30,step=0.01,
value=12, continuous_update=False),
beta=widgets.FloatSlider(min=0.001, max=30,step=0.01,
value=7, continuous_update=False))
# first iterate
f0 = (alpha*x)/(1+beta*x)
Eq(f(x),f0)
# second iterate
# substitute f0 for x in f0 to generate f1
f1 = simplify(f0.subs(x, f0))
Eq(f(f(x)), f1)
# third iterate
f2 = simplify(f1.subs(x, f1))
Eq(f(f(f(x))), f2)
# fourth iterate
f3 = simplify(f2.subs(x, f2))
Eq(f(f(f(f(x)))), f3)
# fixed points, solving from the first iterate
solveset(Eq(f1,x),x)
(alpha-1)/beta
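# Numeric check (sketch): for one concrete choice of parameters (alpha=12, beta=7,
# the defaults used in the sliders above), x* = (alpha-1)/beta is a fixed point of g
xstar = (12 - 1) / 7.0
print(xstar, g(xstar, 12, 7))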
def solve_g(a, b):
y = list(np.linspace(0,float(list(solveset(Eq(f1.subs(alpha, a).subs(beta, b), x), x)).pop()),2))
for t in range(30):
y.append(g(y[t], a, b))
zoom = plt.plot(y)
print("ultimos 15 de la serie:")
pprint(y[-15:])
print("\npuntos fijos:")
return solveset(Eq(f1.subs(alpha, a).subs(beta, b), x), x)
# interactive plot
interact(solve_g,
a=widgets.IntSlider(min=0, max=30, step=1,
value=11, continuous_update=False,
description='alpha'),
b=widgets.IntSlider(min=0, max=30, step=1,
value=5, continuous_update=False,
description='beta'))
# with alpha=1 and beta=1
Eq(collect(f3, x), x/(x+1))
def plot_g(x, alpha, beta):
pprint(x)
y = np.linspace(x[0],x[1],300)
g_y = g(y, alpha, beta)
fig1 = plt.plot(y, g_y)
fig1 = plt.plot(y, y, color='red')
plt.axis('equal')
interact(plot_g,
x=widgets.FloatRangeSlider(min=0, max=30, step=0.01, value=[0,1], continuous_update=False),
alpha=widgets.IntSlider(min=0,max=30,step=1,value=1, continuous_update=False),
beta=widgets.IntSlider(min=0,max=30,step=1,value=1, continuous_update=False))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Global parameters
Step2: File locations
Step3: Preview data using Experiment()
Step4: Data
Step5: Spectra
Step6: Remove unwanted data points
Step7: Retrieve values for X-axis
Step8: In 'neutron wavelength'
Step9: Retrieve values for Y-axis
Step10: In 'neutron transmission'
Step11: In 'neutron attenuation' & remove baseline
Step12: Retrieve interpolated values for both X-axis & Y-axis
Step13: Plot raw in various ways
Step14: Attenuation vs. recorded time
Step15: Attenuation vs. neutron wavelength
Step16: Attenuation vs. neutron energy
Step17: Remove baseline for plot
Step18: Operation to experiment data
Step19: Create sample layer
Step20: Calibration
Step21: Class initialization
Step22: Equations for (time-wavelength-energy) conversion
Step23: Retrieve calibrated parameters
Step24: Plot calibration result
Step25: Hide table
Step26: Hide grid
Step27: Plot before to compare
Step28: Show all elements
Step29: Show all isotopes
Step30: Show specified items
Step31: Note
Step32: Fit resonances
Step33: Fitting equations
Step34: Output fitted result in molar concentration
Step35: Plot fitting result
Step36: Hide difference plot
Step37: Hide table
Step38: Hide grid & show before
Step39: Show all elements
Step40: Show all isotopes
Step41: Show specified items
Step42: Fit isotopic ratios
| <ASSISTANT_TASK:>
Python Code:
import os
import sys
root_folder = os.path.dirname(os.getcwd())
sys.path.append(root_folder)
import ResoFit
from ResoFit.calibration import Calibration
from ResoFit.fitresonance import FitResonance
from ResoFit.experiment import Experiment
from ResoFit._utilities import Layer
import numpy as np
import matplotlib.pyplot as plt
energy_min = 7
energy_max = 150
energy_step = 0.01
folder = 'data/_data_for_tutorial'
data_file = 'raw_data.csv'
spectra_file = 'spectra.txt'
experiment = Experiment(data_file=data_file,
spectra_file=spectra_file,
folder=folder)
experiment.data
experiment.spectra
experiment.slice(slice_start=3, slice_end=2801, reset_index=False)
experiment.x_raw(offset_us=0., source_to_detector_m=16)
experiment.x_raw(angstrom=True, offset_us=0., source_to_detector_m=16)
experiment.y_raw()
experiment.y_raw(transmission=True)
experiment.y_raw(baseline=True)
experiment.xy_scaled(energy_min=energy_min, energy_max=energy_max, energy_step=energy_step,
angstrom=False, transmission=False,
offset_us=0, source_to_detector_m=15,
baseline=True)
experiment.plot_raw(x_axis='number')
plt.show()
experiment.plot_raw(x_axis='time')
plt.show()
experiment.plot_raw(x_axis='time', time_unit='s')
plt.show()
experiment.plot_raw(x_axis='lambda', lambda_xmax=0.12,
offset_us=0, source_to_detector_m=16)
plt.show()
experiment.plot_raw(x_axis='energy', energy_xmax=150,
offset_us=0, source_to_detector_m=16)
plt.show()
experiment.plot_raw(offset_us=0, source_to_detector_m=16,
x_axis='energy', baseline=True,
energy_xmax=150)
plt.show()
layer_1 = 'U'
thickness_1 = 0.05 # mm
density_1 = None # g/cm^3 (if None or omitted, pure solid density will be used in fitting step)
layer_2 = 'Gd'
thickness_2 = 0.05 # mm
density_2 = None # g/cm^3 (if None or omitted, pure solid density will be used in fitting step)
layer = Layer()
layer.add_layer(layer=layer_1, thickness_mm=thickness_1, density_gcm3=density_1)
layer.add_layer(layer=layer_2, thickness_mm=thickness_2, density_gcm3=density_2)
source_to_detector_m = 16.
offset_us = 0
calibration = Calibration(data_file=data_file,
spectra_file=spectra_file,
raw_layer=layer,
energy_min=energy_min,
energy_max=energy_max,
energy_step=energy_step,
folder=folder,
baseline=True)
calibration.calibrate(source_to_detector_m=source_to_detector_m,
offset_us=offset_us,
vary='all')
calibration.calibrated_offset_us
calibration.calibrated_source_to_detector_m
calibration.plot()
calibration.plot(table=False)
calibration.plot(table=False, grid=False)
calibration.plot(table=False, grid=False, before=True)
calibration.plot(table=False, grid=False, all_elements=True)
calibration.plot(table=False, grid=False, all_isotopes=True)
calibration.plot(table=False, grid=False, items_to_plot=['U', 'U-238', 'Gd-156'])
calibration.plot(table=False, grid=False, items_to_plot=['U', 'U*', 'Gd-156'])
fit = FitResonance(spectra_file=spectra_file,
data_file=data_file,
folder=folder,
energy_min=energy_min,
energy_max=energy_max,
energy_step=energy_step,
calibrated_offset_us=calibration.calibrated_offset_us,
calibrated_source_to_detector_m=calibration.calibrated_source_to_detector_m,
baseline=True)
fit_result = fit.fit(layer, vary='density')
fit.molar_conc()
fit.plot()
fit.plot(error=False)
fit.plot(table=False)
fit.plot(table=False, grid=False, before=True)
fit.plot(table=False, grid=False, all_elements=True)
fit.plot(table=False, grid=False, all_isotopes=True)
fit.plot(table=False, grid=False, items_to_plot=['U', 'U-238', 'Gd-156'])
fit.plot(table=False, grid=False, items_to_plot=['U', 'U*', 'Gd-156'])
fit.fit_iso(layer='U')
fit.molar_conc()
fit.plot()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's extract years and months again
Step2: And now, we're going to filter out bad cases again. However, this time, we're going to be a bit more selective. We're going to include cases where the closed date is missing.
Step3: This table shows exactly what I'm talking about - as we get closer to the current day, the average resolution time falls more and more. If censoring is occurring, we might expect that the proportion of cases closed is also decreasing over time. This is generally the case
| <ASSISTANT_TASK:>
Python Code:
#Like before, we're going to select the relevant columns from the database:
connection = psycopg2.connect('dbname= threeoneone user=threeoneoneadmin password=threeoneoneadmin')
cursor = connection.cursor()
cursor.execute('''SELECT createddate, closeddate, borough FROM service;''')
data = cursor.fetchall()
data = pd.DataFrame(data)
data.columns = ['createddate','closeddate','borough']
data['cryear'] = [x.year for x in data['createddate']]
data['crmonth'] = [x.month for x in data['createddate']]
#filter out bad cases
import datetime
today = datetime.date(2016,5,29)
janone = datetime.date(2010,1,1)
data = data.loc[(data['closeddate'] > data['createddate']) | (data['closeddate'].isnull() == True)]
databyyear = data.groupby(by='cryear').count()
databyyear
data['timedelta'] = data['closeddate'] - data['createddate']
data['timedeltaint'] = [x.days if pd.isnull(x) == False else None for x in data['timedelta']]
data.groupby(by='cryear').mean()
databyyear['propclosed'] = databyyear['closeddate']/databyyear['createddate']
databyyear
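# Visual check (sketch; assumes matplotlib is available in this notebook)
import matplotlib.pyplot as plt
databyyear['propclosed'].plot(kind='bar')
plt.ylabel('proportion of cases closed')
plt.show()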
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: Sklearn function to generate random points
Step3: Function to compute kmeans and plot clusters.
Step4: Function to change the point set by a few points
Step5: Plot points and theoretical cluster centers
Step6: Plot points and k-means cluster centers
Step7: Change points and recompute cluster centers.
Step8: Part 2
Step9: A Multiprocessor Implementation
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set() # for plot styling
import numpy as np
import threading
import time
from sklearn.datasets.samples_generator import make_blobs
from sklearn.cluster import KMeans
import sys
sys.path.append("../")
from IoTPy.core.stream import Stream, StreamArray, run
from IoTPy.agent_types.op import map_window, map_element
from IoTPy.agent_types.sink import sink_element
from examples.K_Means.kmeans import normally_distributed_points
from IoTPy.concurrency.multicore import get_processes, get_processes_and_procs
from IoTPy.concurrency.multicore import terminate_stream
from IoTPy.concurrency.multicore import get_proc_that_inputs_source
from IoTPy.concurrency.multicore import extend_stream
# CREATE POINTS IN BLOBS AROUND CENTERS
# centers is the list of centers of the blobs.
CENTERS = np.array([[1.0, 1.0], [1.0, -1.0],
[-1.0, -1.0], [-1.0, 1.0]])
# n_features is the number of features which is also the dimension
# of the space.
n_features=2
# n_centers is the number of centers.
n_centers = len(CENTERS)
# n_samples is the number of points.
n_samples=20
# cluster_std is the standard deviation of points around the centers.
stdev = 0.2
# Create the points
points, blob_cluster_ids = make_blobs(
n_samples, n_features, centers=CENTERS,
cluster_std=stdev, random_state=0)
# random_state=0 starts random number generator from the same
# initial value for every run.
# points is the set of points in the blobs
# centers[blob_cluster_ids[i]] is the center that generated points[i]
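# Quick sanity check (sketch) on the generated data
print(points.shape)            # (n_samples, n_features) -> (20, 2)
print(blob_cluster_ids[:5])    # ids of the generating centers for the first few points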
# Use kmeans to cluster points into n_cluster clusters.
n_clusters = 4
kmeans = KMeans(n_clusters)
# Function to compute k-means cluster, plot clusters and centers, and
# return centers.
def plot_clusters_and_return_centers(points):
kmeans.fit(points)
# cluster_ids[j] is the id of the cluster associated with points[j].
cluster_ids = kmeans.predict(points)
# Plot the points. The parameter c identifies colors of points.
plt.scatter(points[:, 0], points[:, 1], c=cluster_ids,
s=50, cmap='viridis')
# Plot the centers of the clusters computed by k-means.
# kmeans.cluster_centers_ is the list of computed centers
# of the clusters
centers = kmeans.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 1], c='black',
s=50, alpha=0.5);
plt.show()
return centers
# FUNCTION TO CHANGE THE POINT SET BY A FEW POINTS
# Generate more points around the centers. The number of points
# generated is n_more_points
# Create a sliding window in which n_more_points are generated around
# each center, and the previous n_more_points*n_clusters points are
# removed.
n_more_points=2
def change_points(points, n_more_points, n_clusters):
for center in CENTERS:
# points_around_center is list of n_more_points points
# distributed normally around the theoretical centers.
# Create new points
new_points, new_blob_cluster_ids = make_blobs(
n_more_points, n_features, centers=CENTERS,
cluster_std=stdev, random_state=0)
#points_around_center = normally_distributed_points(
#center, stdev=stdev, num_points=n_more_points)
# Append these new points to the previous set of points.
points = np.append(points, new_points, axis=0)
# Remove earlier points from the set of points.
# Since we added n_more_points to each cluster, we remove
# n_more_points*n_clusters.
points = points[n_more_points*n_clusters:]
return points
# Plot points where color of a point is associated with
# the cluster center that generated that point.
plt.scatter(points[:, 0], points[:, 1],
c=blob_cluster_ids, s=50, cmap='viridis');
# Plot the centers around which the blobs were generated.
plt.scatter(CENTERS[:, 0], CENTERS[:, 1], c='black', s=200, alpha=0.5);
# Plot points where color of a point is determined by
# the kmeans cluster associated with that point.
# Note that previously we associated the color of a point with
# the center that generated the point. This is the theoretical center.
# In this plot the color is associated with the cluster center computed
# using kmeans.
# The theoretical centers of the clusters are likely to be
# different from the centers computed by kmeans.
plot_clusters_and_return_centers(points)
points = change_points(points, n_more_points, n_clusters)
plot_clusters_and_return_centers(points)
def test_kmeans(n_steps, n_more_points,
n_samples, n_features, CENTERS, stdev):
# Create streams
stream_of_points = StreamArray(
name='stream_of_points', dimension=n_features)
stream_of_centers = Stream(name='stream_of_centers')
# Create agents
map_window(func=plot_clusters_and_return_centers,
in_stream=stream_of_points,
out_stream=stream_of_centers,
window_size=n_samples,
step_size=n_more_points);
sink_element(func=lambda v: print ('center: \n', v),
in_stream=stream_of_centers);
# Start the computation.
# First extend the stream with points around the true centers.
# Compute points around true centers.
points, blob_cluster_ids = make_blobs(
n_samples, n_features,
centers= CENTERS, cluster_std=stdev, random_state=0)
stream_of_points.extend(points)
# Sleep to mimic a stream coming from an external source.
time.sleep(0.001)
run()
# Extend the stream with n_more_points at each step.
for i in range(n_steps-1):
# Compute points to be added to the stream.
new_points, new_blob_cluster_ids = make_blobs(
n_more_points, n_features, centers=CENTERS,
cluster_std=stdev, random_state=0)
# Extend the stream with the new points and run.
stream_of_points.extend(new_points)
# Sleep to mimic a stream coming from an external source.
time.sleep(0.001)
run()
test_kmeans(n_steps=3, n_more_points=2, n_samples=20, n_features=2,
CENTERS= np.array(
[[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]]),
stdev=0.1)
# Step 0: Define agent functions, source threads
# and actuator threads (if any).
# Step 0.0: Define agent functions.
# kmeans_agent is the agent for the process called ComputeKmeansProcess.
def kmeans_agent(in_streams, out_streams):
def h(window):
print ('plot')
return plot_clusters_and_return_centers(np.array(window))
map_window(func=h,
in_stream=in_streams[0],
out_stream=out_streams[0],
window_size=n_samples,
step_size=n_more_points);
# print_centers_agent is the agent for the process called
# PrintCentersProcess.
def print_centers_agent(in_streams, out_streams):
sink_element(func=lambda v: print ('center: \n', v),
in_stream=in_streams[0]);
# Step 0.1: Define source thread target.
n_steps = 2
def generate_points_thread_target(procs):
# Create a set of random points around CENTERS
points, blob_cluster_ids = make_blobs(
n_samples, n_features, centers=CENTERS,
cluster_std=stdev, random_state=0)
# Add points incrementally.
for _ in range(n_steps):
points_as_list_of_lists = list([
list(v) for v in points
])
# Extend stream with these additional points
#extend_stream(procs, data=points_as_list_of_lists,
#stream_name='stream_of_points')
extend_stream(procs, data=points,
stream_name='stream_of_points')
time.sleep(0.001)
# Create additional random points around CENTERS
points, blob_cluster_ids = make_blobs(
n_more_points, n_features, centers=CENTERS,
cluster_std=stdev, random_state=0)
# Terminate stream because the stream will not be extended
# further.
terminate_stream(procs, stream_name='stream_of_points')
# Step 1: multicore_specification of streams and processes.
# Specify Streams: list of pairs (stream_name, stream_type).
# Specify Processes: name, agent function,
# lists of inputs and outputs and sources, additional arguments.
multicore_specification = [
# Streams
[('stream_of_points', 'x'), ('stream_of_centers', 'x')],
# Processes
[{'name': 'ComputeKmeansProcess', 'agent': kmeans_agent,
'inputs':['stream_of_points'], 'sources':['stream_of_points'],
'outputs': ['stream_of_centers']},
{'name': 'PrintCentersProcess', 'agent': print_centers_agent,
'inputs':['stream_of_centers']},
]]
# Step 2: Create processes.
processes, procs = get_processes_and_procs(multicore_specification)
# Step 3: Create threads (if any)
generate_points_thread = threading.Thread(
target=generate_points_thread_target, args=(procs,))
# Step 4: Specify which process each thread runs in.
# generate_points_thread runs in the process named 'ComputeKmeansProcess'
procs['ComputeKmeansProcess'].threads = [generate_points_thread]
# Step 5: Start, join and terminate processes.
for process in processes: process.start()
for process in processes: process.join()
for process in processes: process.terminate()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using Linear Regression and Evaluating the Algorithm
Step2: Visualizing the Linear Regression
| <ASSISTANT_TASK:>
Python Code:
# Importing libraries
import csv
import matplotlib.pyplot as plt
from math import sqrt
from random import randrange
# Definition of the function that converts an entire dataset to float
def str_column_to_float(data):
newData = []
for lines in data:
aux = [float(x) for x in lines]
newData.append(aux)
return newData
# Definition of the function that loads the .csv file
def load_csv(filename):
lines = csv.reader(open(filename + ".csv", "r"))
data = list(lines)
return str_column_to_float(data)
# Function responsible for splitting the data into a training set and a test set
def split_train_test(data, test_size=0.4):
dataTrain = list(data)
dataTest = []
size = int(len(dataTrain)*test_size)
for _ in range(size):
idx = randrange(len(dataTrain))
dataTest.append(dataTrain.pop(idx))
return [dataTrain, dataTest]
# Definition of the function that computes the mean of a set of values
def mean(values):
return sum(values)/len(values)
# Definition of the function that computes the variance of a set of values
def variance(values, mean):
return sum([(x-mean)**2 for x in values])
# Computes the covariance between two variables X and Y
def covariance(x, mean_x, y, mean_y):
covar = 0.0
for i in range(len(x)):
covar += (x[i] - mean_x) * (y[i] - mean_y)
return covar
# Computes the regression coefficients for a given dataset
def coefficients(dataset):
x = [row[0] for row in dataset]
y = [row[1] for row in dataset]
x_mean, y_mean = mean(x), mean(y)
b1 = covariance(x, x_mean, y, y_mean) / variance(x, x_mean)
b0 = y_mean - b1 * x_mean
return [b0, b1]
# Performs a simple linear regression, fitting the parameters on a training set
# and making predictions on a test set
def simple_linear_regression(train, test):
predictions = list()
b0, b1 = coefficients(train)
for row in test:
ypred = b0 + b1 * row[0]
predictions.append(ypred)
return predictions
# Computes the RMSE (Root Mean Squared Error) metric
def rmse_metric(actual, predicted):
sum_error = 0.0
for i in range(len(actual)):
prediction_error = predicted[i] - actual[i]
sum_error += (prediction_error ** 2)
mean_error = sum_error / float(len(actual))
return sqrt(mean_error)
# Evaluates the quality of a given algorithm on a dataset using
# RMSE as the metric
def evaluate_algorithm(train_set, test_set, algorithm):
predicted = algorithm(train_set, test_set)
actual = [row[-1] for row in test_set]
rmse = rmse_metric(actual, predicted)
return rmse
# Baseline: assumes the predicted value is always the mean of the payments
def baseline(train_set, test_set):
pays = [row[-1] for row in train_set]
meanValue = mean(pays)
predictions = [meanValue for i in range(len(test_set))]
return predictions
data = load_csv("insurance")
[trainData, testData] = split_train_test(data, test_size=0.4)
coef = coefficients(trainData)
predRegre = simple_linear_regression(trainData, testData)
predBase = baseline(trainData, testData)
avalRegre = evaluate_algorithm(trainData, testData, simple_linear_regression)
avalBase = evaluate_algorithm(trainData, testData, baseline)
print("A Regressão Linear Simples obteve os coeficientes: alfa = {:.2f} e beta = {:.2f}".format(coef[0], coef[1]))
print("O erro RMSE desse algoritmo é {:.2f}".format(avalRegre))
print("O erro RMSE para o baseline (média dos pagamentos) é {:.2f}".format(avalBase))
print("\nPrimeiras 10 predições:")
for i in range(10):
print("Y real: {:.2f} \t| Y previsto: {:.2f} \t | Y baseline: {:.2f}".format(testData[i][-1], predRegre[i], predBase[i]))
# Separating inputs from outputs
X_train = [row[0] for row in trainData]; X_test = [row[0] for row in testData]
Y_train = [row[1] for row in trainData]; Y_test = [row[1] for row in testData]
predTrain = simple_linear_regression(trainData, trainData)
predTest = predRegre
plt.figure(figsize=[10, 5])
plt.subplot(1,2,1); plt.title("Training Data")
plt.scatter(X_train, Y_train, label="Train Data")
plt.plot(X_train, predTrain, 'r-', label="Regression Line")
plt.grid()
plt.legend()
plt.subplot(1,2,2); plt.title("Test Data")
plt.scatter(X_test, Y_test, c="green", label="Test Data")
plt.plot(X_test, predTest, 'r-', label="Regression Line")
plt.grid()
plt.legend()
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Project layout
Step4: And so is the __init__.py file.
Step5: The setup.py file contains the classical metadata, plus a special header. This header basically states: if Pythran is available, use it; otherwise, fall back to the plain Python file.
Step6: Running setup.py
Step7: But if pythran is no longer in the PYTHONPATH, the installation does not fail
Step8: In case of binary distribution, the native module is generated alongside the original source.
Step9: And if pythran is not in the PYTHONPATH, this still works \o/
| <ASSISTANT_TASK:>
Python Code:
!rm -rf hello setup.py && mkdir hello
%%file hello/hello.py
#pythran export hello()
def hello():
    """Wave hello."""
print("Hello from Pythran o/")
%%file hello/__init__.py
"""Hello package, featuring a Pythran kernel."""
from hello import hello
%%file setup.py
from distutils.core import setup
try:
from pythran.dist import PythranExtension, PythranBuildExt
setup_args = {
'cmdclass': {"build_ext": PythranBuildExt},
'ext_modules': [PythranExtension('hello.hello', sources = ['hello/hello.py'])],
}
except ImportError:
print("Not building Pythran extension")
setup_args = {}
setup(name = 'hello',
version = '1.0',
description = 'Yet another demo package',
packages = ['hello'],
**setup_args)
%%sh
rm -rf build dist
python setup.py sdist 2>/dev/null 1>/dev/null
tar tf dist/hello-1.0.tar.gz | grep -E 'hello/hello.(py|cpp)' -o | sort
%%sh
rm -rf build dist
PYTHONPATH= python setup.py sdist 2>/dev/null 1>/dev/null
tar tf dist/hello-1.0.tar.gz | grep -E 'hello/hello.py' -o
%%sh
rm -rf build dist
python setup.py bdist 2>/dev/null 1>/dev/null
tar tf dist/hello-1.0.linux-x86_64.tar.gz | grep -E 'hello/hello.(py|cpp)' -o
%%sh
rm -rf build dist
PYTHONPATH= python setup.py bdist 2>/dev/null 1>/dev/null
tar tf dist/hello-1.0.linux-x86_64.tar.gz | grep -E 'hello/hello.py' -o
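# Usage sketch (assumes the package built above has been installed, e.g. with
# `pip install dist/hello-1.0.tar.gz`); the import is identical whether or not the
# Pythran extension was compiled.
import hello
hello.hello()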
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Custom plot line
Step2: A custom 2D plot, based on our first example.
| <ASSISTANT_TASK:>
Python Code:
from pylab import *
t = arange(0.0, 2.0,0.01)
y = sin(2*pi*t)
plot(t, y)
xlabel('Time (s)')
ylabel('Voltage (mV)')
title('The simplest one, buddies')
grid(True)
show()
from pylab import *
t = arange(0.0, 2.0,0.01)
y = sin(2*pi*t)
plot(t, y, color='red')
xlabel('Time (s)')
ylabel('Voltage (mV)')
title('The simplest one, buddies')
grid(True)
show()
from pylab import *
t = arange(0.0, 2.0,0.01)
y = sin(2*pi*t)
plot(t, y, color='green', linestyle='-.', linewidth=3)
xlabel('Time (s)', fontweight='bold', fontsize=14)
ylabel('Voltage (mV)', fontweight='bold', fontsize=14)
title('The simplest one, buddies')
grid(True)
show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you let a and b be conditional statements (like the above statements, e.g. a = x < y), then you can combine the two together using logical operators, which can be thought of as functions for conditional statements.
Step2: Now, these might not seem especially useful at first, but they're the bread and butter of programming. Even more importantly, they are used when we are doing if/else statements or loops, which we will now cover.
Step3: The idea here is that Python checks to see if the statement (in this case "x < y") is True. If it is, then it will do what is below the if statement. The else statement tells Python what to do if the condition is False.
Step4: While-loops are similar to if statements, in the sense that they also have a conditional statement that is built into it and it executes when the conditional is True. However, the only difference is, it will KEEP executing that segment of code until the conditional statement becomes False.
Step5: Note here that we tell Python to print the number x (x starts at 1) and then redefine x as itself +1 (so, x=1 gets redefined to x = x+1 = 1+1 = 2). Python then executes the loop again, but now x has been incremented by 1. We continue this process from x = 1 to x = 10, printing out x every time. Thus, with a fairly compact bit of code, you get 10 lines of output.
Step6: But...what if the conditional statement is always true?
Step7: So the outline for a function is
Step9: When defining your own functions, you can also use multiple input variables. For example, if we want to find the greatest common divisor (gcd) of two numbers, we could apply something called Euclid's algorithm.
Step10: Challenge 1 - Fibonacci Sequence and the Golden Ratio
Step11: The ratio of successive elements in the Fibonacci sequence converges to
Step12: C. Numpy Arrays - Review of Basics and Some More Advanced Topics
Step13: Here we are calling in the contents of numpy and giving it the shorthand name 'np' for convenience.
Step14: As we learned in Lecture 1, numpy arrays are convenient because they allow us to do math across the whole array and not just individual numbers.
Step15: The documentation of possible functions that can be applied to integers and floats (i.e. single numbers), as well as numpy arrays, can be found here
Step16: Now how do we assign a new value to an element of the array? We use the following "square bracket" notation
Step17: Now you try it. Store the second Fibonacci number in the second position of your array and use a print statement to verify that you have done so.
Step18: Python array indexing is fairly straightforward once you get the hang of it.
Step19: Now, sometimes it's useful to access more than one element of an array. Let's say that we have an array with 100 elements in the range [0,10] (including endpoints). If you recall, this can be done via the np.linspace() function.
Step20: Now then, in order to get a range of elements rather than simply a single one, we use the notation
Step21: If you want everything past a certain point of the array (including that point), then you would just eliminate the right number, for example
Step22: Finally, simply using the "
Step23: Now then, remember that,
Step24: Now that we've defined intV (short for "integrate v"), let's use it real quick, just to test it out. Let dt = 0.1 (meaning, you're taking a step forward in time by 0.1 seconds).
Step25: As you can see, $V_{x}$ hasn't changed, but $V_{y}$ has decreased, representing the projectile slowing down as it's going upward.
Step26: Now we have the functions that calculate the changes in the position and velocity vectors. We're almost there!
Step27: Now, note that we've defined the while-loop such that it doesn't stop exactly at 0. Firstly, this was strategic, since the initial y = 0, and thus the while-loop wouldn't initialize to begin with (you can try to change it). One way you can overcome this issue is to decrease dt, meaning that you're letting less time pass between each step. Ideally, you'd want dt to be infinitely small, but we don't have that convenience in reality. Re-run the cells, but with dt = 0.01 and we will get much closer to the correct answer.
Step28: Now, all you have to do is, each time the while-loop executes, use np.append() for the x and y arrays, adding the new values to the end of them.
Step29: If everything turns out alright, you should get the characteristic parabola.
Step30: Now then, let's say we are doing a timing experiment, where we look at the brightness of an object as a function of time. This is actually a very common type of measurement that you may do in research, such as looking for dips in the brightness of stars as a way to detect planets.
Step31: Now we have the data loaded into Python as a numpy array, and one handy thing you can do is to use Python to find the dimensions of the array. This is done by using ".shape" as so.
Step32: In this format, we know that this is a 2x1000 array (two rows, 1000 columns). Another way you can think about this is that you have two 1000-element arrays contained within another array, where each of those arrays is an element (think of it as an array of arrays).
Step33: Here, you have 2 dimensions with the array timeseriesData, and as such must specify the row first and then the column. So,
Step34: Looking at our data, you see clear spikes that jump well above most of the signal. (I've added this to the data to represent outliers that may sometimes appear when you're messing with raw data, and those must be dealt with). In astronomy, you sometimes have relativistic charged particles, not from your source, known as cosmic rays, that hit the detector, and we often have to remove these.
Step35: In this case, the conditional statement that we have used is signal < cutOff.
Step36: Now let's plot it. You try.
Step37: Now that you have your data all cleaned up, it would be nice if we could save it for later and not have to go through the process of cleaning it up every time. Fear not! Python has you covered.
Step38: Then, we can use either the np.save() function or the np.savetxt function, the first saving the array into a '.npy' file and the other, into a '.txt' file. The syntax is pretty much the same for each.
Step39: Now that your data files are saved, you can load them up again, using np.loadtxt() and np.load() for .txt and .npy files respectively. We used np.loadtxt() above, and np.load works the same way. So, let's load in the .npy file and see if our data was saved correctly.
Step40: Now, let's see if you can do the same thing, but with the .txt file that we saved.
| <ASSISTANT_TASK:>
Python Code:
#Example conditional statements
x = 1
y = 2
x<y #x is less than y
#x is greater than y
x>y
#x is less-than or equal to y
x<=y
#x is greater-than or equal to y
x>=y
#Example of and operator
(1<2)and(2<3)
#Example of or operator
(1<2)or(2>3)
#Example of not operator
not(1>2)
x = 1
y = 2
if (x < y):
print("Yup, totally true!")
else:
print("Nope, completely wrong!")
x = 1
y = 2
if (x>y):
print("The condition is True!")
x+y
x = 1
while (x <= 10):
print(x)
x = x+1
x = 2
i = 0 #dummy variable
while (i<10):
x = 2*x
print(x)
i = i+1
#another way to write this is i+=1, but it's idiosyncratic and we won't use it here
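# Sketch of the "always True" case asked about above: this loop would never stop on
# its own (you would have to interrupt the kernel), so it is left commented out.
# while True:
#     print("This prints forever...")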
#Defining a square root function
def sqrt(x):
if (x < 0):
print("Your input is not positive!")
else:
return x**(1/2)
sqrt(4)
sqrt(-4)
import math
print(math.sqrt(25))
print(math.sin(math.pi/2))
print(math.exp(math.pi)-math.pi)
def gcd(a, b):
    """Calculate the Greatest Common Divisor of a and b.

    Unless b==0, the result will have the same sign as b (so that when
    b is divided by it, the result comes out positive).
    """
while b > 0:
a, b = b, a%b
return a
print(gcd(120,16))
#Your code goes here
#Your code goes here
import numpy as np
x = np.array([1,2,3,4,5])
print(x)
y = x**2
print(y)
data = np.zeros(10)
print(data)
data[0] = fib(0)
print(data[0])
#Your code goes here
#Your code goes here
x = np.linspace(0,10,100)
x[0:3]
#Your code goes here
#Your code here
def intV(v,g,dt):
deltaVy = g*dt
vXnew = v[0]
vYnew = v[1]+deltaVy
return np.array([vXnew,vYnew])
dt = 0.1
intV(v,g,dt)
#Your code goes here
#Your code goes here
x = np.array([]) #defining an empty array that will store x position
y = np.array([]) #defining an empty array that will store y position
#Your code goes here
%matplotlib inline
import matplotlib.pyplot as plt
timeseriesData = np.loadtxt("./lecture2_data/timeseries_data.txt")
timeseriesData.shape
t = timeseriesData[0,:]
signal = timeseriesData[1,:]
#Your code goes here
cutOff = 15.
signalFix = signal[signal < cutOff]
tFix = t[signal < cutOff]
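# Quick check (sketch): how many samples the cutoff removed
print(len(signal) - len(signalFix), "samples were above the cutoff and removed")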
#Your code goes here
dataFix = np.array([tFix,signalFix])
np.save('./lecture2_data/dataFix.npy',dataFix)
np.savetxt('./lecture2_data/dataFix.txt',dataFix)
data = np.load('./lecture2_data/dataFix.npy')
t = data[0,:]
signal = data[1,:]
plt.plot(t,signal)
plt.show()
#Your code goes here
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array([[ 0, 1, 2, 3, 4, 5],
[ 5, 6, 7, 8, 9, 10],
[10, 11, 12, 13, 14, 15],
[15, 16, 17, 18, 19, 20],
[20, 21, 22, 23, 24, 25]])
result = np.diag(np.fliplr(a))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the comment processor
Step2: Data preprocessing
Step3: Training the model
Step4: Results
Step5: Classifier: 5 / not 5
Step6: Reducing the dimensionality
Step7: Training the classifier
| <ASSISTANT_TASK:>
Python Code:
# load from file
reviews_test = pd.read_csv('data/reviews_test.csv', header=0, encoding='utf-8')
reviews_train = pd.read_csv('data/reviews_train.csv', header=0, encoding='utf-8')
reviews_internet = pd.read_csv('data/internet_reviews.csv', header=0, encoding='utf-8')
# training set
X_train_raw = reviews_train.comment
y_train_raw = reviews_train.reting
# validation set
X_test_raw = reviews_test.comment
y_test_raw = reviews_test.reting
# additional training set
X_internet = reviews_internet.comment
y_internet = reviews_internet.rating
DIR = 'data/w2v_models/'
MODEL_NAME = 'tenth.norm-sz500-w7-cb0-it5-min5.w2v'
#MODEL_NAME = 'all.norm-sz100-w10-cb0-it1-min100.w2v'
# initializing the comment preprocessor
sp = SentenceProcessor(DIR + MODEL_NAME)
sp.stop_list = stop_words
sp.sample_len = 100
WORD_VECTOR_LENGTH = sp.w2v.vector_size # 500 each word
SENTENCE_LENGTH = sp.sample_len # 100 words
print('Comment dimensions: {}x{}'.format(SENTENCE_LENGTH, WORD_VECTOR_LENGTH))
# bring both dataframes to the same format
del reviews_internet['Unnamed: 0']
reviews_internet = reviews_internet.rename(columns={'rating':'reting'})
# concatenate the data
new_reviews = reviews_train.append(reviews_internet, ignore_index=True)
# add a new column
new_reviews['bag_of_words'] = ' '
# initial data preprocessing (tokenization, normalization, spelling correction)
bag_of_bags = []
for i in tqdm_notebook(range(len(new_reviews.comment))):
bag_of_bags.append(' '.join(sp.process(new_reviews.comment[i])))
# add the processed comments to the new column
new_reviews['bag_of_words'] = bag_of_bags
# save our data in .csv format
new_reviews.to_csv('data/new_reviews.csv')
# just in case, also save the processed comments to .pkl
with open('bag_{}.pkl'.format(len(bag_of_bags)), 'wb') as f:
pickle.dump(bag_of_bags, f)
# read from file if the previous steps have already been completed
new_reviews = pd.read_csv('data/new_reviews.csv', encoding='cp1251')
# full training set
X_train_all = new_reviews.bag_of_words
y_train_all = new_reviews.reting
# training set from M.Video (without the additional internet-provider reviews)
X_train = new_reviews.bag_of_words[:12410]
y_train = new_reviews.reting[:12410]
# test set
X_val = reviews_test.comment
y_val = reviews_test.reting
from utils.generators import train_generator, test_generator
from keras.regularizers import l2 as L2
l2 = L2(0.01)
MODELS_DIR = 'saved_models/'
# build the neural network model
input = keras.layers.Input(shape=(SENTENCE_LENGTH, WORD_VECTOR_LENGTH))
l_0 = keras.layers.Dense(256)(input)
l_1 = keras.layers.Conv1D(nb_filter=256, filter_length=2, activation='relu', W_regularizer=l2)(input)
l_2 = keras.layers.Conv1D(nb_filter=256, filter_length=3, activation='relu', W_regularizer=l2)(input)
l_3 = keras.layers.Conv1D(nb_filter=256, filter_length=4, activation='relu', W_regularizer=l2)(input)
merge = keras.layers.Merge(mode='concat', concat_axis=1) ([l_0, l_1, l_2, l_3])
l_5 = keras.layers.GlobalMaxPooling1D()(merge)
l_6 = keras.layers.Dropout(0.5)(l_5)
output = keras.layers.Dense(1, activation=None, W_regularizer=l2)(l_6)
model = keras.models.Model(input, output, name='model_5')
model.summary()
# load the trained model
model = keras.models.load_model('saved_models/model_5_mse_0.9056_extra')
# training
optimizer = keras.optimizers.SGD(lr=0.01)
model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['mse'])
model.fit_generator(
train_generator(X_train, y_train, sp, shuffle=True),
nb_epoch=10,
samples_per_epoch=len(y_train),
validation_data=train_generator(X_val, y_val, sp, shuffle=False),
nb_val_samples=len(y_val),
verbose=2,
)
# save the trained model
model.save('saved_models/trained_model_1(keras==1.2.2)')
# metrics
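# Note: X_val_vec (the word2vec-encoded validation set) is not built in the cells
# shown here and is assumed to be prepared elsewhere. A generator-based check that
# uses only objects defined above (an illustrative sketch, not the original evaluation):
val_scores = model.evaluate_generator(train_generator(X_val, y_val, sp, shuffle=False), len(y_val))
print(val_scores)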
errors = get_score(model, X_val_vec, y_val.values, sparse=40)
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score
def to_bool(y):
y_bool = []
for a in y:
if a == 5.0:
y_bool.append(1)
else:
y_bool.append(0)
return y_bool
y_train_bool = to_bool(y_train)
y_test_bool = to_bool(y_test)
p = PCA(n_components=100)
p.fit(X_train.reshape((X_train.shape[0], -1)))
X_train_transformed = p.transform(X_train.reshape((X_train.shape[0], -1)))
X_test_transformed = p.transform(X_test.reshape((X_test.shape[0], -1)))
clf = SVC(C=100.0)
clf.fit(X_train_transformed, y_train_bool)
y_pred = clf.predict(X_test_transformed)
accuracy_score(y_test_bool, y_pred)
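# The other ensemble classifiers imported above are never fit in this cell; a minimal
# comparison sketch on the same PCA-reduced features (hyperparameters are illustrative,
# not tuned):
rf = RandomForestClassifier(n_estimators=100)
rf.fit(X_train_transformed, y_train_bool)
print(accuracy_score(y_test_bool, rf.predict(X_test_transformed)))
gb = GradientBoostingClassifier()
gb.fit(X_train_transformed, y_train_bool)
print(accuracy_score(y_test_bool, gb.predict(X_test_transformed)))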
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following code cells illustrate how different types of cell outputs are decoded.
Step2: Primitive outputs
Step3: Image outputs
Step4: HTML outputs
Step5: Preformatted text outputs
Step6: Errors
| <ASSISTANT_TASK:>
Python Code:
# Python
// JavaScript
# No output
# Plain text output
"Hello world"
True
False
42
import math
math.pi
dict(a=1,b=2)
list(range(10))
dict(a='string', b=1, c=3.14, d=[1, 2, 3], e=dict(f=1))
# Stream output
print("Just a string")
# Matplotlib
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 10, 100)
plt.plot(x, np.sin(x))
plt.show()
# Plotly
import plotly.express as px
fig = px.scatter(px.data.iris(), x="sepal_width", y="sepal_length")
fig.show()
# Altair
import altair as alt
import pandas as pd
alt.renderers.enable('mimetype')
source = pd.DataFrame({
'a': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I'],
'b': [28, 55, 43, 91, 81, 53, 19, 87, 52]
})
alt.Chart(source).mark_bar().encode(
x='a',
y='b'
)
import pandas as pd
import numpy as np
df = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), columns=['a', 'b', 'c'])
# By default a `text/html` representation of the data frame is generated
df
# Print the data.frame so that is output as `text/plain`
print(df)
A syntax error
foo()
import sys
print("Whooaah!", file=sys.stderr)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: How does Tensorflow Low Level API look like?
Step2: Interactive usage of Low Level API
Step3: Calling a TensorFlow Model deployed on Google Cloud ML Service
Step4: What can Keras do for us?
| <ASSISTANT_TASK:>
Python Code:
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
from distutils.version import StrictVersion
import sklearn
print(sklearn.__version__)
assert StrictVersion(sklearn.__version__ ) >= StrictVersion('0.18.1')
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
assert StrictVersion(tf.__version__) >= StrictVersion('1.1.0')
import keras
print(keras.__version__)
assert StrictVersion(keras.__version__) >= StrictVersion('2.0.0')
import pandas as pd
print(pd.__version__)
assert StrictVersion(pd.__version__) >= StrictVersion('0.20.0')
# graph definition
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
product = tf.matmul(matrix1, matrix2)
# launching the graph in a session
with tf.Session() as sess:
result = sess.run([product])
print(result)
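# A minimal feed_dict variant of the same product, to contrast constants with
# placeholders in the TF 1.x low-level API (an illustrative sketch, not from the
# original notebook):
a = tf.placeholder(tf.float32, shape=(1, 2))
b = tf.placeholder(tf.float32, shape=(2, 1))
prod = tf.matmul(a, b)
with tf.Session() as sess:
    print(sess.run(prod, feed_dict={a: [[3., 3.]], b: [[2.], [2.]]}))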
sess = tf.InteractiveSession()
x = tf.Variable([1.0, 2.0])
a = tf.constant([3.0, 3.0])
# Initialize 'x' using the run() method of its initializer op.
x.initializer.run()
# Add an op to subtract 'a' from 'x'. Run it and print the result
sub = tf.subtract(x, a)
print(sub.eval())
# ==> [-2. -1.]
# Close the Session when we're done.
sess.close()
!cat sample_iris.json
# Example for iris, model exported as Tensorflow
# gsutil cp -R 1 gs://irisnn
# create model and version at https://console.cloud.google.com/mlengine
# in a DOS shell on local machine in this folder
# gcloud ml-engine predict --model=irisnn --json-instances=./sample_iris.json
# SCORES
# [0.9954029321670532, 0.004596732556819916, 3.3544753819114703e-07]
!curl -O https://raw.githubusercontent.com/DJCordhose/speed-limit-signs/master/data/speed-limit-signs.zip
from zipfile import ZipFile
zip = ZipFile(r'speed-limit-signs.zip')
zip.extractall('.')
# Calling a Keras based model to classify speed limit signs
!curl -H "Content-Type: application/json" -X GET -d '{"url": "https://github.com/DJCordhose/speed-limit-signs/raw/master/data/real-world/4/100-sky-cutoff-detail.jpg", "model": "default" }' http://ec2-52-43-39-37.us-west-2.compute.amazonaws.com:8888
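# A hedged Python equivalent of the curl call above using requests; the EC2
# endpoint is taken verbatim from the shell command and may no longer be reachable:
import requests
payload = {"url": "https://github.com/DJCordhose/speed-limit-signs/raw/master/data/real-world/4/100-sky-cutoff-detail.jpg",
           "model": "default"}
response = requests.get("http://ec2-52-43-39-37.us-west-2.compute.amazonaws.com:8888", json=payload)
print(response.text)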
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here's a boring example of rendering a DataFrame, without any (visible) styles
Step2: Note
Step4: The row0_col2 is the identifier for that particular cell. We've also prepended each row/column identifier with a UUID unique to each DataFrame so that the style from one doesn't collide with the styling from another within the same notebook or page (you can set the uuid if you'd like to tie together the styling of two DataFrames).
Step5: In this case, the cell's style depends only on its own value.
Step6: Notice the similarity with the standard df.applymap, which operates on DataFrames elementwise. We want you to be able to reuse your existing knowledge of how to interact with DataFrames.
Step7: In this case the input is a Series, one column at a time.
Step8: Above we used Styler.apply to pass in each column one at a time.
Step9: When using Styler.apply(func, axis=None), the function must return a DataFrame with the same index and column labels.
Step10: Building Styles Summary
Step11: For row and column slicing, any valid indexer to .loc will work.
Step12: Only label-based slicing is supported right now, not positional.
Step13: Use a dictionary to format specific columns.
Step14: Or pass in a callable (or dictionary of callables) for more flexible handling.
Step15: You can format the text displayed for missing values by na_rep.
Step16: These formatting techniques can be used in combination with styling.
Step17: Builtin styles
Step18: You can create "heatmaps" with the background_gradient method. These require matplotlib, and we'll use Seaborn to get a nice colormap.
Step19: Styler.background_gradient takes the keyword arguments low and high. Roughly speaking these extend the range of your data by low and high percent so that when we convert the colors, the colormap's entire range isn't used. This is useful so that you can actually read the text still.
Step20: There's also .highlight_min and .highlight_max.
Step21: Use Styler.set_properties when the style doesn't actually depend on the values.
Step22: Bar charts
Step23: New in version 0.20.0 is the ability to customize further the bar chart
Step26: The following example aims to give a highlight of the behavior of the new align options
Step27: Sharing styles
Step28: Notice that you're able to share the styles even though they're data aware. The styles are re-evaluated on the new DataFrame they've been used upon.
Step29: Or through a set_precision method.
Step30: Setting the precision only affects the printed number; the full-precision values are always passed to your style functions. You can always use df.round(2).style if you'd prefer to round from the start.
Step31: Table styles
Step32: table_styles should be a list of dictionaries.
Step33: Missing values
Step34: Hiding the Index or Columns
Step35: CSS classes
Step36: Export to Excel
Step37: A screenshot of the output
Step38: We'll use the following template
Step39: Now that we've created a template, we need to set up a subclass of Styler that
Step40: Notice that we include the original loader in our environment's loader.
Step41: Our custom template accepts a table_title keyword. We can provide the value in the .render method.
Step42: For convenience, we provide the Styler.from_custom_template method that does the same as the custom subclass.
Step43: Here's the template structure
Step44: See the template in the GitHub repo for more details.
| <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot
# We have this here to trigger matplotlib's font cache stuff.
# This cell is hidden from the output
import pandas as pd
import numpy as np
np.random.seed(24)
df = pd.DataFrame({'A': np.linspace(1, 10, 10)})
df = pd.concat([df, pd.DataFrame(np.random.randn(10, 4), columns=list('BCDE'))],
axis=1)
df.iloc[3, 3] = np.nan
df.iloc[0, 2] = np.nan
df.style
df.style.highlight_null().render().split('\n')[:10]
def color_negative_red(val):
    '''
    Takes a scalar and returns a string with
    the css property `'color: red'` for negative
    strings, black otherwise.
    '''
color = 'red' if val < 0 else 'black'
return 'color: %s' % color
s = df.style.applymap(color_negative_red)
s
def highlight_max(s):
'''
highlight the maximum in a Series yellow.
'''
is_max = s == s.max()
return ['background-color: yellow' if v else '' for v in is_max]
df.style.apply(highlight_max)
df.style.\
applymap(color_negative_red).\
apply(highlight_max)
def highlight_max(data, color='yellow'):
'''
highlight the maximum in a Series or DataFrame
'''
attr = 'background-color: {}'.format(color)
if data.ndim == 1: # Series from .apply(axis=0) or axis=1
is_max = data == data.max()
return [attr if v else '' for v in is_max]
else: # from .apply(axis=None)
is_max = data == data.max().max()
return pd.DataFrame(np.where(is_max, attr, ''),
index=data.index, columns=data.columns)
df.style.apply(highlight_max, color='darkorange', axis=None)
df.style.apply(highlight_max, subset=['B', 'C', 'D'])
df.style.applymap(color_negative_red,
subset=pd.IndexSlice[2:5, ['B', 'D']])
df.style.format("{:.2%}")
df.style.format({'B': "{:0<4.0f}", 'D': '{:+.2f}'})
df.style.format({"B": lambda x: "±{:.2f}".format(abs(x))})
df.style.format("{:.2%}", na_rep="-")
df.style.highlight_max().format(None, na_rep="-")
df.style.highlight_null(null_color='red')
import seaborn as sns
cm = sns.light_palette("green", as_cmap=True)
s = df.style.background_gradient(cmap=cm)
s
# Uses the full color range
df.loc[:4].style.background_gradient(cmap='viridis')
# Compress the color range
(df.loc[:4]
.style
.background_gradient(cmap='viridis', low=.5, high=0)
.highlight_null('red'))
df.style.highlight_max(axis=0)
df.style.set_properties(**{'background-color': 'black',
'color': 'lawngreen',
'border-color': 'white'})
df.style.bar(subset=['A', 'B'], color='#d65f5f')
df.style.bar(subset=['A', 'B'], align='mid', color=['#d65f5f', '#5fba7d'])
import pandas as pd
from IPython.display import HTML
# Test series
test1 = pd.Series([-100,-60,-30,-20], name='All Negative')
test2 = pd.Series([10,20,50,100], name='All Positive')
test3 = pd.Series([-10,-5,0,90], name='Both Pos and Neg')
head = """
<table>
    <thead>
        <th>Align</th>
        <th>All Negative</th>
        <th>All Positive</th>
        <th>Both Neg and Pos</th>
    </thead>
    </tbody>
"""
aligns = ['left','zero','mid']
for align in aligns:
row = "<tr><th>{}</th>".format(align)
for series in [test1,test2,test3]:
s = series.copy()
s.name=''
row += "<td>{}</td>".format(s.to_frame().style.bar(align=align,
color=['#d65f5f', '#5fba7d'],
width=100).render()) #testn['width']
row += '</tr>'
head += row
head += """
</tbody>
</table>
"""
HTML(head)
df2 = -df
style1 = df.style.applymap(color_negative_red)
style1
style2 = df2.style
style2.use(style1.export())
style2
with pd.option_context('display.precision', 2):
html = (df.style
.applymap(color_negative_red)
.apply(highlight_max))
html
df.style\
.applymap(color_negative_red)\
.apply(highlight_max)\
.set_precision(2)
df.style.set_caption('Colormaps, with a caption.')\
.background_gradient(cmap=cm)
from IPython.display import HTML
def hover(hover_color="#ffff99"):
return dict(selector="tr:hover",
props=[("background-color", "%s" % hover_color)])
styles = [
hover(),
dict(selector="th", props=[("font-size", "150%"),
("text-align", "center")]),
dict(selector="caption", props=[("caption-side", "bottom")])
]
html = (df.style.set_table_styles(styles)
.set_caption("Hover to highlight."))
html
html = html.set_table_styles({
'B': [dict(selector='', props=[('color', 'green')])],
'C': [dict(selector='td', props=[('color', 'red')])],
}, overwrite=False)
html
(df.style
.set_na_rep("FAIL")
.format(None, na_rep="PASS", subset=["D"])
.highlight_null("yellow"))
df.style.hide_index()
df.style.hide_columns(['C','D'])
from IPython.html import widgets
@widgets.interact
def f(h_neg=(0, 359, 1), h_pos=(0, 359), s=(0., 99.9), l=(0., 99.9)):
return df.style.background_gradient(
cmap=sns.palettes.diverging_palette(h_neg=h_neg, h_pos=h_pos, s=s, l=l,
as_cmap=True)
)
def magnify():
return [dict(selector="th",
props=[("font-size", "4pt")]),
dict(selector="td",
props=[('padding', "0em 0em")]),
dict(selector="th:hover",
props=[("font-size", "12pt")]),
dict(selector="tr:hover td:hover",
props=[('max-width', '200px'),
('font-size', '12pt')])
]
np.random.seed(25)
cmap = cmap=sns.diverging_palette(5, 250, as_cmap=True)
bigdf = pd.DataFrame(np.random.randn(20, 25)).cumsum()
bigdf.style.background_gradient(cmap, axis=1)\
.set_properties(**{'max-width': '80px', 'font-size': '1pt'})\
.set_caption("Hover to magnify")\
.set_precision(2)\
.set_table_styles(magnify())
df.style.\
applymap(color_negative_red).\
apply(highlight_max).\
to_excel('styled.xlsx', engine='openpyxl')
from jinja2 import Environment, ChoiceLoader, FileSystemLoader
from IPython.display import HTML
from pandas.io.formats.style import Styler
with open("templates/myhtml.tpl") as f:
print(f.read())
class MyStyler(Styler):
env = Environment(
loader=ChoiceLoader([
FileSystemLoader("templates"), # contains ours
Styler.loader, # the default
])
)
template = env.get_template("myhtml.tpl")
MyStyler(df)
HTML(MyStyler(df).render(table_title="Extending Example"))
EasyStyler = Styler.from_custom_template("templates", "myhtml.tpl")
EasyStyler(df)
with open("templates/template_structure.html") as f:
structure = f.read()
HTML(structure)
# Hack to get the same style in the notebook as the
# main site. This is hidden in the docs.
from IPython.display import HTML
with open("themes/nature_with_gtoc/static/nature.css_t") as f:
css = f.read()
HTML('<style>{}</style>'.format(css))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Show event related fields images
| <ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.4
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=True, eog=True,
exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
# and order with spectral reordering
# If you don't have scikit-learn installed set order_func to None
from sklearn.cluster.spectral import spectral_embedding # noqa
from sklearn.metrics.pairwise import rbf_kernel # noqa
def order_func(times, data):
this_data = data[:, (times > 0.0) & (times < 0.350)]
this_data /= np.sqrt(np.sum(this_data ** 2, axis=1))[:, np.newaxis]
return np.argsort(spectral_embedding(rbf_kernel(this_data, gamma=1.),
n_components=1, random_state=0).ravel())
good_pick = 97 # channel with a clear evoked response
bad_pick = 98 # channel with no evoked response
# We'll also plot a sample time onset for each trial
plt_times = np.linspace(0, .2, len(epochs))
plt.close('all')
mne.viz.plot_epochs_image(epochs, [good_pick, bad_pick], sigma=.5,
order=order_func, vmin=-250, vmax=250,
overlay_times=plt_times, show=True)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Foundations version
Step2: Tinker practice
Step3: From pytorch docs
Step4: Loss function
Step5: We need squeeze() to get rid of that trailing (,1), in order to use mse. (Of course, mse is not a suitable loss function for multi-class classification; we'll use a better loss function soon. We'll use mse for now to keep things simple.)
Step6: Gradients and backward pass
Step7: We cheat a little bit and use PyTorch autograd to check our results.
Step8: Refactor model
Step9: Module.forward()
Step10: Without einsum
Step11: nn.Linear and nn.Module
Step12: Export
| <ASSISTANT_TASK:>
Python Code:
#export
from exp.nb_01 import *
def get_data():
path = datasets.download_data(MNIST_URL, ext='.gz')
with gzip.open(path, 'rb') as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
return map(tensor, (x_train,y_train,x_valid,y_valid))
def normalize(x, m, s): return (x-m)/s
x_train,y_train,x_valid,y_valid = get_data()
train_mean,train_std = x_train.mean(),x_train.std()
train_mean,train_std
x_train = normalize(x_train, train_mean, train_std)
# NB: Use training, not validation mean for validation set
x_valid = normalize(x_valid, train_mean, train_std)
train_mean,train_std = x_train.mean(),x_train.std()
train_mean,train_std
#export
def test_near_zero(a,tol=1e-3): assert a.abs()<tol, f"Near zero: {a}"
test_near_zero(x_train.mean())
test_near_zero(1-x_train.std())
n,m = x_train.shape
c = y_train.max()+1
n,m,c
# num hidden
nh = 50
# standard xavier init
w1 = torch.randn(m,nh)/math.sqrt(m)
b1 = torch.zeros(nh)
w2 = torch.randn(nh,1)/math.sqrt(nh)
b2 = torch.zeros(1)
test_near_zero(w1.mean())
test_near_zero(w1.std()-1/math.sqrt(m))
# This should be ~ (0,1) (mean,std)...
x_valid.mean(),x_valid.std()
def lin(x, w, b): return x@w + b
t = lin(x_valid, w1, b1)
#...so should this, because we used xavier init, which is designed to do this
t.mean(),t.std()
def relu(x): return x.clamp_min(0.)
t = relu(lin(x_valid, w1, b1))
#...actually it really should be this!
t.mean(),t.std()
# kaiming init / he init for relu
w1 = torch.randn(m,nh)*math.sqrt(2/m)
w1.mean(),w1.std()
t = relu(lin(x_valid, w1, b1))
t.mean(),t.std()
#export
from torch.nn import init
w1 = torch.zeros(m,nh)
init.kaiming_normal_(w1, mode='fan_out')
t = relu(lin(x_valid, w1, b1))
init.kaiming_normal_??
w1.mean(),w1.std()
t.mean(),t.std()
w1.shape
import torch.nn
torch.nn.Linear(m,nh).weight.shape
torch.nn.Linear.forward??
torch.nn.functional.linear??
torch.nn.Conv2d??
torch.nn.modules.conv._ConvNd.reset_parameters??
# what if...?
def relu(x): return x.clamp_min(0.) - 0.5
# kaiming init / he init for relu
w1 = torch.randn(m,nh)*math.sqrt(2./m )
t1 = relu(lin(x_valid, w1, b1))
t1.mean(),t1.std()
def model(xb):
l1 = lin(xb, w1, b1)
l2 = relu(l1)
l3 = lin(l2, w2, b2)
return l3
%timeit -n 10 _=model(x_valid)
assert model(x_valid).shape==torch.Size([x_valid.shape[0],1])
model(x_valid).shape
#export
def mse(output, targ): return (output.squeeze(-1) - targ).pow(2).mean()
y_train,y_valid = y_train.float(),y_valid.float()
preds = model(x_train)
preds.shape
mse(preds, y_train)
def mse_grad(inp, targ):
# grad of loss with respect to output of previous layer
inp.g = 2. * (inp.squeeze() - targ).unsqueeze(-1) / inp.shape[0]
def relu_grad(inp, out):
# grad of relu with respect to input activations
inp.g = (inp>0).float() * out.g
def lin_grad(inp, out, w, b):
# grad of matmul with respect to input
inp.g = out.g @ w.t()
w.g = (inp.unsqueeze(-1) * out.g.unsqueeze(1)).sum(0)
b.g = out.g.sum(0)
def forward_and_backward(inp, targ):
# forward pass:
l1 = inp @ w1 + b1
l2 = relu(l1)
out = l2 @ w2 + b2
# we don't actually need the loss in backward!
loss = mse(out, targ)
# backward pass:
mse_grad(out, targ)
lin_grad(l2, out, w2, b2)
relu_grad(l1, l2)
lin_grad(inp, l1, w1, b1)
forward_and_backward(x_train, y_train)
# Save for testing against later
w1g = w1.g.clone()
w2g = w2.g.clone()
b1g = b1.g.clone()
b2g = b2.g.clone()
ig = x_train.g.clone()
xt2 = x_train.clone().requires_grad_(True)
w12 = w1.clone().requires_grad_(True)
w22 = w2.clone().requires_grad_(True)
b12 = b1.clone().requires_grad_(True)
b22 = b2.clone().requires_grad_(True)
def forward(inp, targ):
# forward pass:
l1 = inp @ w12 + b12
l2 = relu(l1)
out = l2 @ w22 + b22
# we don't actually need the loss in backward!
return mse(out, targ)
loss = forward(xt2, y_train)
loss.backward()
test_near(w22.grad, w2g)
test_near(b22.grad, b2g)
test_near(w12.grad, w1g)
test_near(b12.grad, b1g)
test_near(xt2.grad, ig )
class Relu():
def __call__(self, inp):
self.inp = inp
self.out = inp.clamp_min(0.)-0.5
return self.out
def backward(self): self.inp.g = (self.inp>0).float() * self.out.g
class Lin():
def __init__(self, w, b): self.w,self.b = w,b
def __call__(self, inp):
self.inp = inp
self.out = [email protected] + self.b
return self.out
def backward(self):
self.inp.g = self.out.g @ self.w.t()
# Creating a giant outer product, just to sum it, is inefficient!
self.w.g = (self.inp.unsqueeze(-1) * self.out.g.unsqueeze(1)).sum(0)
self.b.g = self.out.g.sum(0)
class Mse():
def __call__(self, inp, targ):
self.inp = inp
self.targ = targ
self.out = (inp.squeeze() - targ).pow(2).mean()
return self.out
def backward(self):
self.inp.g = 2. * (self.inp.squeeze() - self.targ).unsqueeze(-1) / self.targ.shape[0]
class Model():
def __init__(self, w1, b1, w2, b2):
self.layers = [Lin(w1,b1), Relu(), Lin(w2,b2)]
self.loss = Mse()
def __call__(self, x, targ):
for l in self.layers: x = l(x)
return self.loss(x, targ)
def backward(self):
self.loss.backward()
for l in reversed(self.layers): l.backward()
w1.g,b1.g,w2.g,b2.g = [None]*4
model = Model(w1, b1, w2, b2)
%time loss = model(x_train, y_train)
%time model.backward()
test_near(w2g, w2.g)
test_near(b2g, b2.g)
test_near(w1g, w1.g)
test_near(b1g, b1.g)
test_near(ig, x_train.g)
class Module():
def __call__(self, *args):
self.args = args
self.out = self.forward(*args)
return self.out
def forward(self): raise Exception('not implemented')
def backward(self): self.bwd(self.out, *self.args)
class Relu(Module):
def forward(self, inp): return inp.clamp_min(0.)-0.5
def bwd(self, out, inp): inp.g = (inp>0).float() * out.g
class Lin(Module):
def __init__(self, w, b): self.w,self.b = w,b
def forward(self, inp): return [email protected] + self.b
def bwd(self, out, inp):
inp.g = out.g @ self.w.t()
self.w.g = torch.einsum("bi,bj->ij", inp, out.g)
self.b.g = out.g.sum(0)
class Mse(Module):
def forward (self, inp, targ): return (inp.squeeze() - targ).pow(2).mean()
def bwd(self, out, inp, targ): inp.g = 2*(inp.squeeze()-targ).unsqueeze(-1) / targ.shape[0]
class Model():
def __init__(self):
self.layers = [Lin(w1,b1), Relu(), Lin(w2,b2)]
self.loss = Mse()
def __call__(self, x, targ):
for l in self.layers: x = l(x)
return self.loss(x, targ)
def backward(self):
self.loss.backward()
for l in reversed(self.layers): l.backward()
w1.g,b1.g,w2.g,b2.g = [None]*4
model = Model()
%time loss = model(x_train, y_train)
%time model.backward()
test_near(w2g, w2.g)
test_near(b2g, b2.g)
test_near(w1g, w1.g)
test_near(b1g, b1.g)
test_near(ig, x_train.g)
class Lin(Module):
def __init__(self, w, b): self.w,self.b = w,b
def forward(self, inp): return [email protected] + self.b
def bwd(self, out, inp):
inp.g = out.g @ self.w.t()
self.w.g = inp.t() @ out.g
self.b.g = out.g.sum(0)
w1.g,b1.g,w2.g,b2.g = [None]*4
model = Model()
%time loss = model(x_train, y_train)
%time model.backward()
test_near(w2g, w2.g)
test_near(b2g, b2.g)
test_near(w1g, w1.g)
test_near(b1g, b1.g)
test_near(ig, x_train.g)
#export
from torch import nn
class Model(nn.Module):
def __init__(self, n_in, nh, n_out):
super().__init__()
self.layers = [nn.Linear(n_in,nh), nn.ReLU(), nn.Linear(nh,n_out)]
self.loss = mse
def __call__(self, x, targ):
for l in self.layers: x = l(x)
return self.loss(x.squeeze(), targ)
model = Model(m, nh, 1)
%time loss = model(x_train, y_train)
%time loss.backward()
!./notebook2script.py 02_fully_connected.ipynb
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we read the evoked object from a file. Check out
Step2: Notice that evoked is a list of
Step3: Let's start with a simple one. We plot event related potentials / fields
Step4: All plotting functions of MNE-python return a handle to the figure instance.
Step5: Now we will make it a bit fancier and only use MEG channels. Many of the
Step6: Notice the legend on the left. The colors would suggest that there may be two
Step7: By default the topomaps are drawn from evenly spread out points of time over
Step8: Or we can automatically select the peaks.
Step9: You can take a look at the documentation of
Step10: Notice that we created five axes, but had only four categories. The fifth
Step11: Sometimes, you may want to compare two or more conditions at a selection of
Step12: We can also plot the activations as images. The time runs along the x-axis
Step13: Finally we plot the sensor data as a topographical view. In the simple case
Step14: Visualizing field lines in 3D
| <ASSISTANT_TASK:>
Python Code:
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
# sphinx_gallery_thumbnail_number = 9
data_path = mne.datasets.sample.data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
evoked = mne.read_evokeds(fname, baseline=(None, 0), proj=True)
print(evoked)
evoked_l_aud = evoked[0]
evoked_r_aud = evoked[1]
evoked_l_vis = evoked[2]
evoked_r_vis = evoked[3]
fig = evoked_l_aud.plot(exclude=(), time_unit='s')
fig.tight_layout()
picks = mne.pick_types(evoked_l_aud.info, meg=True, eeg=False, eog=False)
evoked_l_aud.plot(spatial_colors=True, gfp=True, picks=picks, time_unit='s')
evoked_l_aud.plot_topomap(time_unit='s')
times = np.arange(0.05, 0.151, 0.05)
evoked_r_aud.plot_topomap(times=times, ch_type='mag', time_unit='s')
evoked_r_aud.plot_topomap(times='peaks', ch_type='mag', time_unit='s')
fig, ax = plt.subplots(1, 5, figsize=(8, 2))
kwargs = dict(times=0.1, show=False, vmin=-300, vmax=300, time_unit='s')
evoked_l_aud.plot_topomap(axes=ax[0], colorbar=True, **kwargs)
evoked_r_aud.plot_topomap(axes=ax[1], colorbar=False, **kwargs)
evoked_l_vis.plot_topomap(axes=ax[2], colorbar=False, **kwargs)
evoked_r_vis.plot_topomap(axes=ax[3], colorbar=False, **kwargs)
for ax, title in zip(ax[:4], ['Aud/L', 'Aud/R', 'Vis/L', 'Vis/R']):
ax.set_title(title)
plt.show()
ts_args = dict(gfp=True, time_unit='s')
topomap_args = dict(sensors=False, time_unit='s')
evoked_r_aud.plot_joint(title='right auditory', times=[.09, .20],
ts_args=ts_args, topomap_args=topomap_args)
conditions = ["Left Auditory", "Right Auditory", "Left visual", "Right visual"]
evoked_dict = dict()
for condition in conditions:
evoked_dict[condition.replace(" ", "/")] = mne.read_evokeds(
fname, baseline=(None, 0), proj=True, condition=condition)
print(evoked_dict)
colors = dict(Left="Crimson", Right="CornFlowerBlue")
linestyles = dict(Auditory='-', visual='--')
pick = evoked_dict["Left/Auditory"].ch_names.index('MEG 1811')
mne.viz.plot_compare_evokeds(evoked_dict, picks=pick, colors=colors,
linestyles=linestyles, split_legend=True)
evoked_r_aud.plot_image(picks=picks, time_unit='s')
title = 'MNE sample data\n(condition : %s)'
evoked_l_aud.plot_topo(title=title % evoked_l_aud.comment,
background_color='k', color=['white'])
mne.viz.plot_evoked_topo(evoked, title=title % 'Left/Right Auditory/Visual',
background_color='w')
subjects_dir = data_path + '/subjects'
trans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
maps = mne.make_field_map(evoked_l_aud, trans=trans_fname, subject='sample',
subjects_dir=subjects_dir, n_jobs=1)
# Finally, explore several points in time
field_map = evoked_l_aud.plot_field(maps, time=.1)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: There's no concept of "first element" in a dictionary, since it's unordered. (Of course, if you happened to have a key in your dictionary that was literally 0, this code would work.)
Step2: Since fruits["apple"] contains a string, we can concatenate to that string just like we would normally. And if instead the value was a number, we could add to it. Basically, you can treat the "dict[key]" as a variable that holds whatever the value is.
Step3: (B) Using a for loop, go through the dictionary you created above and print each name and food combination in the format
Step4: (C) (1) Change the dictionary so that Wilfred's favorite food is Pizza. (2) Add a new entry for Mitsworth, whose favorite food is Tuna.
Step5: (D) Prompt the user to input a name. Check if the name they entered is a valid key in the dictionary using an if statement. If the name is in the dictionary, print out the corresponding favorite food. If not, print a message saying "That name is not in our database".
Step6: (E) Print just the names in the dictionary in alphabetical order. Use the sorting example from the slides.
Step7: (F) Print just the names in sorted order based on their favorite food. Use the value-sorting example from the slides.
Step8: 3. File writing (3pts)
Step9: (B) Write code that prints the following text to a file called meow.txt. It must be formatted exactly as it here (you will need to use \n and \t)
Step10: (C) Write code that reads in the gene IDs from genes.txt and prints the unique gene IDs to a new file called genes_unique.txt. (You can re-use your code or the answer sheet from lab4 for getting the unique IDs.)
Step11: 4. The "many counters" problem (4pts)
Step12: (B) Using the tally dictionary you created above, figure out which sequence length was the most common, and print it to the screen.
Step13: 5. Codon table (6pts)
Step14: (C) Now we will adapt the code in (B) to translate a longer sequence. Instead of prompting the user for a single codon, allow them to enter a longer sequence. First, check that the sequence they entered has a length that is a multiple of 3 (Hint
Step15: (D) Now, instead of taking user input, you will apply your translator to a set of sequences stored in a file. Read in the sequences from sequences3.txt (assume each line is a separate sequence), translate it to amino acids, and print it to a new file called proteins.txt.
Step16: Bonus question
Step17: After you've written your code above and you think it works, run it and then run the following code to to spot-check whether you did everything correctly. If you didn't name your dictionary seqDict, you'll need to change it below to whatever you named your dictionary.
| <ASSISTANT_TASK:>
Python Code:
# run this cell first!
fruits = {"apple":"red", "banana":"yellow", "grape":"purple"}
print fruits["banana"]
query = "apple"
print fruits[query]
print fruits[0]
print fruits.keys()
print fruits.values()
for key in fruits:
print fruits[key]
del fruits["banana"]
print fruits
print fruits["pear"]
fruits["pear"] = "green"
print fruits["pear"]
fruits["apple"] = fruits["apple"] + " or green"
print fruits["apple"]
favFoods = {"Wilfred":"Steak", "Manfred":"French fries", "Wadsworth":"Spaghetti", "Jeeves":"Ice cream"}
print favFoods
for name in favFoods:
print name + "'s favorite food is " + favFoods[name]
favFoods["Wilfred"] = "Pizza"
favFoods["Mitsworth"] = "Tuna"
print favFoods
name = raw_input("Enter a name: ")
if name in favFoods:
print favFoods[name]
else:
print "That name is not in our database"
for name in sorted(favFoods):
print name
for name in sorted(favFoods, key=favFoods.get):
print name
outs = open("hello.txt", 'w')
outs.write("Hello, world")
outs.close()
outs = open("meow.txt", 'w')
outs.write("Dear Mitsworth,\n\n\tMeow, meow meow meow.\n\nSincerely,\nA friend")
outs.close()
inFile = "genes.txt"
outFile = "genes_unique.txt"
unique = []
ins = open(inFile, 'r')
outs = open(outFile, 'w')
for line in ins:
line = line.rstrip('\n')
if line not in unique:
outs.write(line + "\n")
unique.append(line)
outs.close()
ins.close()
# hint code:
tallyDict = {}
seq = "ATGCTGATCGATATA"
length = len(seq)
if length not in tallyDict:
tallyDict[length] = 1 #initialize to 1 if this is the first occurrence of the length...
else:
tallyDict[length] = tallyDict[length] + 1 #...otherwise just increment the count.
inFile = "sequences3.txt"
lenDict = {}
ins = open(inFile, 'r')
for line in ins:
line = line.rstrip('\n') #important to do here, since '\n' counts as a character and thus increases the length of the sequence by 1
seqLen = len(line)
if seqLen not in lenDict:
lenDict[seqLen] = 1
else:
lenDict[seqLen] += 1
ins.close()
# print all tallies for informational purposes
for name in sorted(lenDict.keys()):
print name, ":", lenDict[name]
# now to get the max length, we can sort the keys by value
sortedLens = sorted(lenDict, key=lenDict.get) #this returns a list of the keys sorted by value
# this sorts from smallest to largest, so we take the last element
mostCommon = sortedLens[-1]
print "Most common length is", mostCommon, "nts"
inFile = "codon_table.txt"
codon2aa = {}
ins = open(inFile, 'r')
ins.readline() #skip header
for line in ins:
line = line.rstrip('\n')
lineParts = line.split()
codon = lineParts[0]
aa = lineParts[1]
if codon not in codon2aa:
codon2aa[codon] = aa
else:
print "Warning! Multiple entries found for the same codon (" + codon + "). Skipping."
ins.close()
request = raw_input("Codon to translate: ").upper() #covert to uppercase, just in case
if request in codon2aa:
print request, "-->", codon2aa[request]
else:
print "Did not recognize that codon."
inFile = "codon_table.txt"
codon2aa = {}
ins = open(inFile, 'r')
ins.readline() #skip header
for line in ins:
line = line.rstrip('\n')
lineParts = line.split()
codon = lineParts[0]
aa = lineParts[1]
if codon not in codon2aa:
codon2aa[codon] = aa
else:
print "Warning! Multiple entries found for the same codon (" + codon + "). Skipping."
ins.close()
# get user input
request = raw_input("Sequence to translate (multiple of 3): ").upper()
protSeq = ""
if (len(request) % 3) == 0:
# this indexing/slicing is tricky! definitely test this sort of thing to make sure you get it right.
for i in range(0,len(request),3):
codon = request[i:i+3]
if codon in codon2aa:
protSeq = protSeq + codon2aa[codon]
else:
print "Warning! Unrecognized codon ("+codon+"). Using X as a place holder."
protSeq = protSeq + "X"
print "Your protein sequence is:", protSeq
else:
print "Please enter a sequence length that is a multiple of 3."
inFile = "codon_table.txt"
codon2aa = {}
ins = open(inFile, 'r')
ins.readline() #skip header
for line in ins:
line = line.rstrip('\n')
lineParts = line.split()
codon = lineParts[0]
aa = lineParts[1]
if codon not in codon2aa:
codon2aa[codon] = aa
else:
print "Warning! Multiple entries found for the same codon (" + codon + "). Skipping."
ins.close()
# read file of sequences
inFile = "sequences3.txt"
outFile = "proteins.txt"
ins = open(inFile, 'r')
outs = open(outFile, 'w')
for line in ins:
line = line.rstrip('\n')
protSeq = "" #best to define this with the loop so it's re-created for each separate sequence.
if (len(line) % 3) == 0:
for i in range(0,len(line),3):
codon = line[i:i+3]
if codon in codon2aa:
protSeq += codon2aa[codon]
else:
print "Warning! Unrecognized codon ("+codon+"). Using X as a place holder."
protSeq += "X"
outs.write(protSeq + '\n') # write to output file
else:
print "Encountered sequence length that is not a multiple of 3. Skipping it. Sequence:", line
outs.close()
ins.close()
ins = open("horrible.fasta", 'r')
seqDict = {}
activeID = ""
for line in ins:
line = line.rstrip('\n')
# if the first character is >, this line is a header.
if line[0] == ">":
activeID = line[1:]
if activeID in seqDict:
print ">>> Warning: repeat id:", activeID, "-- overwriting previous ID."
seqDict[activeID] = ""
# otherwise, this is a sequence line--add it to the activeID entry
else:
seqDict[activeID] = seqDict[activeID] + line
ins.close()
error = False
if ">varlen2_uc001pmn.3_3476" in seqDict:
print "Remove > chars from headers!"
error = True
elif "varlen2_uc001pmn.3_3476" not in seqDict:
print "Something's wrong with your dictionary: missing keys"
error = True
if "varlen2_uc021qfk.1>2_1472" not in seqDict:
print "Only remove the > chars from the beginning of the header!"
error = True
if len(seqDict["varlen2_uc009wph.3_423"]) > 85:
if "\n" in seqDict["varlen2_uc009wph.3_423"]:
print "Remove newline chars from sequences"
error = True
else:
print "Length of sequences longer than expected for some reason"
error = True
elif len(seqDict["varlen2_uc009wph.3_423"]) < 85:
print "Length of sequences shorter than expected for some reason"
error = True
if error == False:
print "Congrats, you passed all my tests!"
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Open MODIS/Aqua files with chlorophyll in the North Sea and fetch data
Step2: Plot chlorophyll-a maps in swath projection
Step3: Colocate data. Reproject both images onto the same Domain.
| <ASSISTANT_TASK:>
Python Code:
# download sample files
!wget -P data -nc ftp://ftp.nersc.no/nansat/test_data/obpg_l2/A2015121113500.L2_LAC.NorthNorwegianSeas.hdf
!wget -P data -nc ftp://ftp.nersc.no/nansat/test_data/obpg_l2/A2015122122000.L2_LAC.NorthNorwegianSeas.hdf
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Image
%matplotlib inline
from nansat import *
n1 = Nansat('data/A2015121113500.L2_LAC.NorthNorwegianSeas.hdf')
chlor_a1 = n1['chlor_a']
n2 = Nansat('data/A2015122122000.L2_LAC.NorthNorwegianSeas.hdf')
chlor_a2 = n2['chlor_a']
plt.figure(figsize=(5,5))
plt.subplot(121)
plt.imshow(chlor_a1, vmin=0, vmax=3)
plt.subplot(122)
plt.imshow(chlor_a2, vmin=0, vmax=3)
plt.show()
# define domain in longlat projection
d = Domain('+proj=stere +lat_0=58 +lon_0=5 +no_defs',
'-te -300000 -300000 300000 300000 -tr 3000 3000')
# reproject first image and get matrix with reprojected chlorophyll
n1.reproject(d)
chlor_a1 = n1['chlor_a']
# reproject second image and get matrix with reprojected chlorophyll
n2.reproject(d)
chlor_a2 = n2['chlor_a']
# get mask of land and set values of land pixels to NAN (not-a-number)
mask1 = n1.watermask()[1]
chlor_a1[mask1 == 2] = np.nan
chlor_a2[mask1 == 2] = np.nan
# prepare landmask for plotting: land pixels=1, water pixels=NaN
landmask = 1 - mask1.astype(float)
landmask[landmask == 0] = np.nan
plt.figure(figsize=(10,10))
plt.subplot(121)
plt.imshow(chlor_a1, vmin=0, vmax=5)
plt.imshow(landmask, cmap='gray')
plt.subplot(122)
plt.imshow(chlor_a2, vmin=0, vmax=5)
plt.imshow(landmask, cmap='gray')
plt.show()
# replace negative values (clouds) by NAN
chlor_a1[chlor_a1 < 0] = np.nan
chlor_a2[chlor_a2 < 0] = np.nan
# find difference
chlor_diff = chlor_a2 - chlor_a1
# plot
plt.figure(figsize=(5,5))
plt.imshow(chlor_diff, vmin=-0.1, vmax=2);plt.colorbar()
plt.imshow(landmask, cmap='gray')
plt.show()
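# A quick numeric summary of the day-to-day change, ignoring land and cloud
# pixels (NaNs); uses only chlor_diff computed above:
print('Mean chlorophyll-a difference: %.3f mg m-3' % np.nanmean(chlor_diff))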
# get transect - vector of data from 2D matrix from known locations
points = [[200, 75], [150, 150]]
t1 = n1.get_transect(points, ['chlor_a'], lonlat=False)
chl1 = t1['chlor_a']
lon1 = t1['lon']
lat1 = t1['lat']
t2 = n2.get_transect(points, ['chlor_a'], lonlat=False)
chl2 = t2['chlor_a']
# replace negative values with NAN
chl1 = np.array(chl1)
chl2 = np.array(chl2)
chl1[(chl1 < 0) + (chl1 > 5)] = np.nan
chl2[(chl2 < 0) + (chl2 > 5)] = np.nan
print (n1.time_coverage_start)
# plot
plt.plot(lon1, chl1, '-', label=n1.time_coverage_start)
plt.plot(lon1, chl2, '-', label=n2.time_coverage_start)
plt.legend()
plt.xlabel('longitude')
plt.ylabel('chlorphyll-a')
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Classic Riddler
Step2: If I would not have seen this particular tweet (https
| <ASSISTANT_TASK:>
Python Code:
import itertools
# heads = True
# tails = False
# Initialize coins to all heads
coins = [True]*100
for factor in range(100):
# This will generate N zeros, then a 1. This repeats forever
flip_generator = itertools.cycle([0]*factor+[1])
# This will take the first 100 items from the generator
flips = itertools.islice(flip_generator,100)
for index, flip in enumerate(flips):
if flip:
coins[index] = not coins[index]
# 1 has to be added to account for python 0-indexing
coins_tails = [index+1 for index,state in enumerate(coins) if state == False]
print(coins_tails)
import numpy as np
import itertools
# Alternative approach which counts the amount of flips. If even, the coin remains heads up.
# If odd, the coin would end up tails up.
total_flips = [0]*100
for factor in range(100):
# This will generate N zeros, then a 1. This repeats forever
flip_generator = itertools.cycle([0]*factor+[1])
# This will take the first 100 items from the generator
flips = list(itertools.islice(flip_generator,100))
total_flips = np.sum((total_flips,flips),axis=0)
# 1 has to be added to account for python 0-indexing
odd_flips = [index+1 for index,num_flips in enumerate(total_flips) if num_flips % 2 == 1]
print(odd_flips)
%matplotlib inline
import numpy as np
NUM_SPACES = 1000
probs = np.zeros((1000,1000))
# Seed first column of probabilities
# The first 6 values should be 1/6
probs[0:6,0] = np.array([1/6]*6)
for col in np.arange(1,NUM_SPACES):
for row in np.arange(NUM_SPACES):
target_col = col-1
start_row = max(0,row-6)
end_row = max(0,row)
new_val = sum(probs[start_row:end_row,target_col])/6
probs[row,col] = new_val
from matplotlib import pyplot as plt
sum_probs = np.sum(probs,axis=1)
x1 = np.arange(1,31)
y1 = sum_probs[:30]
plt.plot(x1,y1,marker='.',color='b')
plt.ylim(0)
plt.draw()
print(np.argmax(sum_probs)+1)
second_probs = np.zeros((1000,1000))
# Seed first column of probabilities
# The first 5 values should be 1/6
second_probs[0:5,0] = np.array([1/6]*5)
for col in np.arange(1,NUM_SPACES):
for row in np.arange(NUM_SPACES):
target_col = col-1
start_row = max(0,row-6)
end_row = max(0,row)
new_val = sum(second_probs[start_row:end_row,target_col])/6
if row == 5:
second_probs[row,col] = 0
else:
second_probs[row,col] = new_val
from matplotlib import pyplot as plt
sum_second_probs = np.sum(second_probs,axis=1)
x2 = np.arange(1,31)
y2 = sum_second_probs[:30]
plt.plot(x2[:5],y2[:5],marker='.',color='b')
plt.plot(x2[6:31],y2[6:31],marker='.',color='b')
plt.ylim(0)
plt.draw()
print(np.argmax(sum_second_probs)+1)
third_probs = np.zeros((1000,1000))
# Seed first column of probabilities
# The first 4 values should be 1/6
third_probs[0:4,0] = np.array([1/6]*4)
for col in np.arange(1,NUM_SPACES):
for row in np.arange(NUM_SPACES):
target_col = col-1
start_row = max(0,row-6)
end_row = max(0,row)
new_val = sum(third_probs[start_row:end_row,target_col])/6
if row == 5 or row == 4:
third_probs[row,col] = 0
else:
third_probs[row,col] = new_val
from matplotlib import pyplot as plt
sum_third_probs = np.sum(third_probs,axis=1)
x3 = np.arange(1,31)
y3 = sum_third_probs[:30]
plt.plot(x3[:4],y3[:4],marker='.',color='b')
plt.plot(x3[6:31],y3[6:31],marker='.',color='b')
plt.ylim(0)
plt.draw()
print(np.argmax(sum_third_probs)+1)
plt.plot(x1,y1,marker='.',color='k')
plt.plot(x2[:5],y2[:5],marker='.',color='b')
plt.plot(x2[6:31],y2[6:31],marker='.',color='b')
plt.plot(x3[:4],y3[:4],marker='.',color='r')
plt.plot(x3[6:31],y3[6:31],marker='.',color='r')
plt.ylim(0)
plt.draw()
print([np.argmax(sum_probs)+1,
np.argmax(sum_second_probs)+1,
np.argmax(sum_third_probs)+1])
# Implementing the recursive solution from
# http://www.laurentlessard.com/bookproofs/the-deadly-board-game/
p_cache = dict()
def p(k):
try:
return p_cache[k]
except KeyError:
if k == 0:
answer = float(1)
elif k < 0:
answer = float(0)
else:
answer = float((p(k-1)+p(k-2)+p(k-3)+p(k-4)+p(k-5)+p(k-6))/6)
p_cache[k] = answer
return answer
def q(k,m):
return p(k)+p(m)-p(k)*p(m-k)
def r(k,m,n):
return p(k)+p(m)+p(n)-p(k)*p(m-k)-p(k)*p(n-k)-p(m)*p(n-m)+p(k)*p(m-k)*p(n-m)
v = range(1,20)
#single = [p(k) for k in v]
#double = [[q(k,m) for k in v] for m in v]
p_vec = np.vectorize(p)
q_vec = np.vectorize(q)
r_vec = np.vectorize(r)
single = np.fromfunction(p_vec,(20,))
double = np.fromfunction(q_vec,(20,20))
triple = np.fromfunction(r_vec,(20,20,20))
np.argmax(triple[1:20,1:20,1:20])
plt.plot(v,single[1:],marker='.')
plt.show()
plt.imshow(double[1:,1:], cmap='viridis',interpolation ='nearest',extent = (0.5,19.5,19.5,0.5))
plt.show()
import itertools
from matplotlib import animation
from IPython.display import HTML
fig = plt.figure()
im = plt.imshow(triple[1:20,1:20,1],
cmap='viridis',
interpolation='nearest',
extent = (0.5,19.5,19.5,0.5))
cycler = itertools.cycle(v)
def updatefig(i):
z = next(cycler)
im.set_array(triple[1:20,1:20,z])
return [im]
ani = animation.FuncAnimation(fig, updatefig, interval=200, blit=True)
HTML(ani.to_html5_video())
m = np.max(triple[1:,1:,1:])
i = np.argmax(triple[1:,1:,1:])
# unravel over the sliced shape; add 1 to each index to recover board positions
np.unravel_index(i,(19,19,19))
m = np.max(double[1:,1:])
i = np.argmax(double[1:,1:])
np.unravel_index(i,(19,19))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here's all of our data
Step2: Here are each of the columns in the data set
Step3: Let's count how many players are from each country
Step4: And let's count how many players hold each nationality
Step5: And now let's look at the ten most common birth cities
Step6: Let's look at how many of those Toronto-born players were drafted before 2006
Step7: Let's look at how many of those Edmonton-born players were drafted before 2006
Step8: Let's look at how many of those Minneapolis-born players were drafted before 2006
Step9: Concussions...that's always a fun topic. Let's look at the players from each country that reported a concussion. We'll start with the United States
Step10: Hmmm... only two reported concussions in professional hockey?! highly doubtful...let's look at the injuries that were reported as 'Undisclosed' and call them mystery injuries
Step11: Let's look at Canada's reported concussions
Step12: Hmmm...not a lot either. Let's look at the "undisclosed" injuries that were reported
Step13: Switzerland Concussions
Step14: Switzerland "Undisclosed Injuries"
Step15: Sweden Concussions
Step16: Sweden "Undisclosed" Injuries
Step17: Germany Concussions
Step18: Germany "Undisclosed" Injuries
Step19: Czech Republic Concussions
Step20: Czech Republic "Undisclosed Injuries"
Step21: Russia Concussions
Step22: Russia "Undisclosed Injuries"
Step23: Lithuania Concussions
Step24: Lithuania "Undisclosed Injuries"
Step25: Norway Concussions
Step26: Norway "Undisclosed" Injuries
Step27: Let's look at how old the players are
Step28: Young Players (24 years old or younger) for the United States
Step29: Young Players (24 years old or younger) for Canada
Step30: Old Players (36 years old or older) for the United States
Step31: Old Players (36 years old or younger) for Canada
Step32: Let's examine the correlation between height and weight
Step33: And a visual of the correlation...nice
Step34: Let's examine how many lefties versus righties (in shooting) each country has
Step35: Interesting...Canada has significantly more left-handed shooters (280) than right-handed shooters. Meanwhile, the USA is pretty even with 110 lefties and 107 righties.
Step36: Correlation between Country and Draft Year
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df= pd.read_excel("NHL 2014-15.xls")
!pip install xlrd
df.columns.value_counts()
df.head()
df.columns
df['Ctry'].value_counts().head(10)
df['Nat'].value_counts().head(10)
df['Birth City'].value_counts().head(10)
df[(df['Birth City'] == 'Toronto') & (df['Draft'] < 2006.0)].head()
df[(df['Birth City'] == 'Edmonton') & (df['Draft'] < 2006.0)].head()
df[(df['Birth City'] == 'Minneapolis') & (df['Draft'] < 2006.0)].head()
usa_concussion = df[(df['Ctry'] == 'USA') & (df['Injury'] == 'Concussion')]
usa_concussion[["First Name", "Last Name"]]
usa_mystery_injury = df[(df['Ctry'] == 'USA') & (df['Injury'] == 'Undisclosed')]
usa_mystery_injury[["First Name", "Last Name"]]
usa_concussion
can_concussion = df[(df['Ctry'] == 'CAN') & (df['Injury'] == 'Concussion')]
can_concussion[["First Name", "Last Name"]]
can_mystery_injury = df[(df['Ctry'] == 'CAN') & (df['Injury'] == 'Undisclosed')]
can_mystery_injury[["First Name", "Last Name"]]
che_concussion = df[(df['Ctry'] == 'CHE') & (df['Injury'] == 'Concussion')]
che_concussion[["First Name", "Last Name"]]
che_mystery_injury = df[(df['Ctry'] == 'CHE') & (df['Injury'] == 'Undisclosed')]
che_mystery_injury[["First Name", "Last Name"]]
swe_concussion = df[(df['Ctry'] == 'SWE') & (df['Injury'] == 'Concussion')]
swe_concussion[["First Name", "Last Name"]]
swe_mystery_injury = df[(df['Ctry'] == 'SWE') & (df['Injury'] == 'Undisclosed')]
swe_mystery_injury[["First Name", "Last Name"]]
deu_concussion = df[(df['Ctry'] == 'DEU') & (df['Injury'] == 'Concussion')]
deu_concussion[["First Name", "Last Name"]]
deu_mystery_injury = df[(df['Ctry'] == 'DEU') & (df['Injury'] == 'Undisclosed')]
deu_mystery_injury[["First Name", "Last Name"]]
cze_concussion= df[(df['Ctry'] == 'CZE') & (df['Injury'] == 'Concussion')]
cze_concussion[["First Name", "Last Name"]]
cze_mystery_injury = df[(df['Ctry'] == 'CZE') & (df['Injury'] == 'Undisclosed')]
cze_mystery_injury[["First Name", "Last Name"]]
rus_concussion = df[(df['Ctry'] == 'RUS') & (df['Injury'] == 'Concussion')]
rus_concussion[["First Name", "Last Name"]]
rus_mystery_injury = df[(df['Ctry'] == 'RUS') & (df['Injury'] == 'Undisclosed')]
rus_mystery_injury[["First Name", "Last Name"]]
ltu_concussion = df[(df['Ctry'] == 'LTU') & (df['Injury'] == 'Concussion')]
ltu_concussion[["First Name", "Last Name"]]
ltu_mystery_injury = df[(df['Ctry'] == 'LTU') & (df['Injury'] == 'Undisclosed')]
ltu_mystery_injury[["First Name", "Last Name"]]
nor_concussion = df[(df['Ctry'] == 'NOR') & (df['Injury'] == 'Concussion')]
nor_concussion[["First Name", "Last Name"]]
nor_mystery_injury = df[(df['Ctry'] == 'NOR') & (df['Injury'] == 'Undisclosed')]
nor_mystery_injury[["First Name", "Last Name"]]
df
birthdate = df[df['DOB']].replace("DOB", "")
birthdate
df['birthyear'] = df['DOB'].astype(str).str.split("'").str.get(1).astype(int)
df
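# A rough age distribution for the 2014-15 season, assuming every two-digit
# birth year is 19xx (an illustrative sketch; thresholds below are the notebook's own):
(2015 - (1900 + df['birthyear'])).describe()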
young_usa_players = df[(df['Ctry'] == 'USA') & (df['birthyear'] >= 94 )]
young_usa_players[["First Name", "Last Name"]]
young_can_players = df[(df['Ctry'] == 'CAN') & (df['birthyear'] >= 94 )]
young_can_players[["First Name", "Last Name"]]
old_usa_players = df[(df['Ctry'] == 'USA') & (df['birthyear'] <= 80 )]
old_usa_players[["First Name", "Last Name"]]
old_can_players = df[(df['Ctry'] == 'CAN') & (df['birthyear'] <= 80 )]
old_can_players[["First Name", "Last Name"]]
df['HT'].describe()
df['Wt'].describe()
plt.style.use('ggplot')
df.plot(kind='scatter', x='Wt', y='HT')
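# A quick numeric check of the height/weight relationship suggested by the
# scatter plot (assumes both columns are numeric, as the plot above requires):
df[['HT', 'Wt']].corr()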
df['S'].value_counts()
df.groupby(['Ctry', 'S']).agg(['count'])
usa_left_shot = df[(df['Ctry'] == 'USA') & (df['S'] == 'L')]
usa_left_shot[["First Name", "Last Name"]]
can_left_shot = df[(df['Ctry'] == 'CAN') & (df['S'] == 'L')]
can_left_shot[["First Name", "Last Name"]]
usa_right_shot = df[(df['Ctry'] == 'USA') & (df['S'] == 'R')]
usa_right_shot[["First Name", "Last Name"]]
can_right_shot = df[(df['Ctry'] == 'CAN') & (df['S'] == 'R')]
can_right_shot[["First Name", "Last Name"]]
plt.style.use('seaborn-deep')
df.head(5).plot(kind='bar', x='Ctry', y='Draft')
df
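# Draft year is numeric while country is categorical, so a per-country summary is
# more informative than a bar chart of the first five rows (an illustrative sketch):
df.groupby('Ctry')['Draft'].describe()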
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here fmt is a format code controlling the appearance of lines and points, and it has the same syntax as the shorthand used in plt.plot, as outlined in Simple Line Plots and Simple Scatter Plots.
Step2: In addition to these options, you can also specify horizontal error bars (xerr), one-sided error bars, and many other variants. For more information on the available options, refer to the docstring of plt.errorbar.
Step3: Now we have xfit, yfit, and dyfit, which sample a continuous fit to our data. We could pass these to the plt.errorbar function as above, but we really don't want to plot 1,000 points with 1,000 error bars. Instead, we can use the plt.fill_between function with a light color to visualize this continuous error:
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
x = np.linspace(0, 10, 50)
dy = 0.8
y = np.sin(x) + dy * np.random.randn(50)
# yerr is the error in y
plt.errorbar(x, y, yerr=dy, fmt='.k');
plt.errorbar(x, y, yerr=dy, fmt='o', color='black',
ecolor='lightgray', elinewidth=3, capsize=0);
# GaussianProcessRegressor implements Gaussian process regression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
# define the model and draw some data
model = lambda x: x * np.sin(x)
xdata = np.array([1, 3, 5, 6, 8])
ydata = model(xdata)
#
kernel = 1.0 * RBF(length_scale=10.0, length_scale_bounds=(1e-2, 1e3)) \
+ WhiteKernel(noise_level=1e-5, noise_level_bounds=(1e-10, 1e+1))
gp = GaussianProcessRegressor(kernel=kernel,
alpha=0.0)
# # Compute the Gaussian process fit
# gp = GaussianProcessRegressor(corr='cubic', theta0=1e-2, thetaL=1e-4, thetaU=1E-1,
# random_start=100)
gp.fit(xdata[:, np.newaxis], ydata)
xfit = np.linspace(0, 10, 1000)
# MSE,yfit = gp.predict(xfit[:, np.newaxis], return_cov=True)
y_mean, y_cov= gp.predict(xfit[:, np.newaxis], return_cov=True)
dyfit = 2 * np.sqrt(np.diag(y_cov)) # 2*sigma ~ 95% confidence region
# Visualize the result
plt.plot(xdata, ydata, 'or')
plt.plot(xfit, model(xfit), '-', color='gray')
plt.fill_between(xfit, y_mean - dyfit,
y_mean + dyfit,
alpha=0.5, color='gray')
plt.xlim(0, 10);
<END_TASK> |