| text_prompt | code_prompt |
| --- | --- |
| stringlengths 168 to 30.3k | stringlengths 67 to 124k |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Load data and construct predictions
Step5: Creating Figures
| <ASSISTANT_TASK:>
Python Code:
# Load the ND data, which we want to analyze
path = "../data/ND_data_filtered"
W = gather_data(path, "width")
W = W[0,:,:,:]
log_mean = np.mean(np.log(W), axis=2)
log_std = np.std(np.log(W), axis=2)
# Load the 1D data, for predictions
path = "../data/1D_data_for_prediction"
X = gather_data(path)
X = X[:, :, 0, :]
X = np.swapaxes(X, 0,1)
logX = np.log(X)
prediction = construct_predictions(logX[:,:], range(1,10), range(1,7), range(5), raw=False)
ms = np.arange(1,10)
ks = np.array([3,4,5,6])
# Figure 5a
plt.rc("font",**{"family": "sans-serif","sans-serif": ["Arial"],"size": 8})
fig, axes = plt.subplots(figsize=(6., 1.55), ncols=len(ks[:4]), sharey=False, tight_layout = {"pad": 0})
axes[0].set_ylabel("Capacity")
for i, ax in enumerate(axes):
    ax.set_title("N = {}".format(ks[i]))
    ax.set_xticks(ms)
    ax.spines['right'].set_visible(False)
    ax.spines['top'].set_visible(False)
    ax.set_xlabel("M")
    ax.set_yscale("log")
    ax.set_xlim(0.5, ms[-1] + .5)
    k = ks[i]
    k_ = i
    valid = np.where(ms >= k)[0]
    ax.plot(ms[valid], prediction[valid, k - 1, 1],
            color="black",
            linewidth=3,
            linestyle="-",
            marker="None", markersize=2, markerfacecolor="black")
    valid = np.where(2 * ms >= k)[0]
    a = np.exp(log_mean - 1 * log_std)[valid, k_]
    b = np.exp(log_mean)[valid, k_]
    c = np.exp(log_mean + 1 * log_std)[valid, k_]
    ax.plot([ms[valid], ms[valid]], [a, c], linewidth=1., color="black", linestyle="-", alpha=1.)
    ax.plot(ms[valid], b, marker="s", color="black", markerfacecolor="white", markersize=4, linewidth=1., linestyle="none")
plt.savefig("../Figures/Capacity_Figure_5a.pdf", dpi=300, transparent=False)
# Figure 5b
plt.rc("font",**{"family": "sans-serif","sans-serif": ["Arial"],"size": 8})
fig, axes = plt.subplots(figsize=(6., 1.55), ncols=len(ks[:4]), sharey=False, tight_layout = {"pad": 0})
axes[0].set_ylabel("Capacity")
for i, ax in enumerate(axes):
    ax.set_title("N = {}".format(ks[i]))
    ax.set_xticks(ms)
    ax.spines['right'].set_visible(False)
    ax.spines['top'].set_visible(False)
    ax.set_xlabel("M")
    ax.set_yscale("log")
    ax.set_xlim(0.5, ms[-1] + .5)
    k = ks[i]
    valid = np.where(ms >= k)[0]
    for d in range(1, 5):
        l = 3. if d == 1 else 1
        c = "black"
        ax.plot(ms[valid], prediction[valid, k - 1, d],
                color=c,
                linewidth=l,
                linestyle="-",
                marker="None", markersize=2, markerfacecolor="black")
plt.savefig("../Figures/Capacity_Figure_5b.pdf", dpi=300, transparent=False)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we will load the dataSciencePilot action set and the decisionTree action set.
Step2: Next, we must connect to our data source. We are using a data set for predicting home equity loan defaults.
Step3: Our target is “BAD” meaning that it was a bad loan. I am setting up a variable to hold our target information as well as our policy information. Each policy is applicable to specific actions and I will provide more information about each policy later in the notebook.
Step4: Explore Data
Step5: Explore Correlations
Step6: Analyze Missing Patterns
Step7: Detect Interactions
Step8: Screen Variables
Step9: Feature Machine
Step10: Generate Shadow Features
Step11: Select Features
Step12: Data Science Automated Machine Learning Pipeline
Step13: Conclusion
| <ASSISTANT_TASK:>
Python Code:
import swat
import numpy as np
import pandas as pd
conn = swat.CAS('localhost', 5570, authinfo='~/.authinfo', caslib="CASUSER")
conn.builtins.loadactionset('dataSciencePilot')
conn.builtins.loadactionset('decisionTree')
tbl = 'hmeq'
hmeq = conn.read_csv("./data/hmeq.csv", casout=dict(name=tbl, replace=True))
hmeq.head()
# Target Name
trt='BAD'
# Exploration Policy
expo = {'cardinality': {'lowMediumCutoff':40}}
# Screen Policy
scpo = {'missingPercentThreshold':35}
# Selection Policy
sepo = {'criterion': 'SU', 'topk':4}
# Transformation Policy
trpo = {'entropy': True, 'iqv': True, 'kurtosis': True, 'outlier': True}
conn.dataSciencePilot.exploreData(
table = tbl,
target = trt,
casOut = {'name': 'EXPLORE_DATA_OUT_PY', 'replace' : True},
explorationPolicy = expo
)
conn.fetch(table = {'name': 'EXPLORE_DATA_OUT_PY'})
conn.dataSciencePilot.exploreCorrelation(
table = tbl,
casOut = {'name':'CORR_PY', 'replace':True},
target = trt
)
conn.fetch(table = {"name" : "CORR_PY"})
conn.dataSciencePilot.analyzeMissingPatterns(
table = tbl,
target = trt,
casOut = {'name':'MISS_PATTERN_PY', 'replace':True}
)
conn.fetch(table = {'name': 'MISS_PATTERN_PY'})
# Transform data for binary format
conn.dataPreprocess.transform(
table = hmeq,
copyVars = ["BAD"],
casOut = {"name": "hmeq_transform", "replace": True},
requestPackages = [{"inputs":["JOB", "REASON"],
"catTrans":{"method": "label", "arguments":{"overrides":{"binMissing": True}}}},
{"inputs":["MORTDUE", "DEBTINC", "LOAN"],
"discretize": {"method": "quantile", "arguments":{"overrides":{"binMissing": True}}} }])
conn.fetch(table = {'name': 'hmeq_transform'})
conn.dataSciencePilot.detectInteractions(
table ='hmeq_transform',
target = trt,
event = '1',
sparse = True,
inputs = ["_TR1_JOB", "_TR1_REASON", "_TR2_MORTDUE", "_TR2_DEBTINC", "_TR2_LOAN"],
inputLevels = [7, 3, 6, 6, 6],
casOut = {'name': 'DETECT_INT_OUT_PY', 'replace': True})
conn.fetch(table={'name':'DETECT_INT_OUT_PY'})
conn.dataSciencePilot.screenVariables(
table = tbl,
target = trt,
casOut = {'name': 'SCREEN_VARIABLES_OUT_PY', 'replace': True},
screenPolicy = {}
)
conn.fetch(table = {'name': 'SCREEN_VARIABLES_OUT_PY'})
conn.dataSciencePilot.featureMachine(
table = tbl,
target = trt,
copyVars = trt,
explorationPolicy = expo,
screenPolicy = scpo,
transformationPolicy = trpo,
transformationOut = {"name" : "TRANSFORMATION_OUT", "replace" : True},
featureOut = {"name" : "FEATURE_OUT", "replace" : True},
casOut = {"name" : "CAS_OUT", "replace" : True},
saveState = {"name" : "ASTORE_OUT", "replace" : True}
)
conn.fetch(table = {'name': 'TRANSFORMATION_OUT'})
conn.fetch(table = {'name': 'FEATURE_OUT'})
conn.fetch(table = {'name': 'CAS_OUT'})
# Getting variable names and metadata from feature machine output
fm = conn.CASTable('FEATURE_OUT').to_frame()
inputs = fm['Name'].to_list()
nom = fm.loc[fm['IsNominal'] == 1]
nom = nom['Name'].to_list()
# Generating Shadow Features
conn.dataSciencePilot.generateShadowFeatures(
table = 'CAS_OUT',
nProbes = 2,
inputs = inputs,
nominals = nom,
casout={"name" : "SHADOW_FEATURES_OUT", "replace" : True},
copyVars = trt
)
conn.fetch(table = {"name" : "SHADOW_FEATURES_OUT"})
# Getting Feature Importance for Original Features
feats = conn.decisionTree.forestTrain(
table = 'CAS_OUT',
inputs = inputs,
target = trt,
varImp = True)
real_features = feats.DTreeVarImpInfo
# Getting Feature Importance for Shadow Features
inp = conn.CASTable('SHADOW_FEATURES_OUT').axes[1].to_list()
shadow_feats = conn.decisionTree.forestTrain(
table = 'SHADOW_FEATURES_OUT',
inputs = inp,
target = trt,
varImp = True)
sf = shadow_feats.DTreeVarImpInfo
# Building dataframe for easy comparison
feat_comp = pd.DataFrame(columns=['Variable', 'Real_Imp', 'SF_Imp1', 'SF_Imp2'])
# Filling Variable Column of Data Frame from Feature
feat_comp['Variable'] = real_features['Variable']
# Filling Importance Column of Data Frame from Feature
feat_comp['Real_Imp'] = real_features['Importance']
# Finding each Feature's Shadow Feature
for index, row in sf.iterrows():
    temp_name = row['Variable']
    temp_num = int(temp_name[-1:])
    temp_name = temp_name[5:-2]
    temp_imp = row['Importance']
    for ind, ro in feat_comp.iterrows():
        if temp_name == ro['Variable']:
            if temp_num == 1:
                # Filling First Shadow Feature's Importance
                feat_comp.at[ind, 'SF_Imp1'] = temp_imp
            else:
                # Filling Second Shadow Feature's Importance
                feat_comp.at[ind, 'SF_Imp2'] = temp_imp
feat_comp.head()
# Determining which features have an importance smaller than their shadow feature's importance
to_drop = list()
for ind, ro in feat_comp.iterrows():
    if ro['Real_Imp'] <= ro['SF_Imp1'] or ro['Real_Imp'] <= ro['SF_Imp2']:
        to_drop.append(ro['Variable'])
to_drop
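# Added check (not in the original notebook): how many of the original features
# survive the comparison against their shadow features.
print('%d of %d features kept after the shadow-feature comparison'
      % (feat_comp.shape[0] - len(to_drop), feat_comp.shape[0]))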
# Dropping Columns from CAS_OUT
CAS_OUT=conn.CASTable('CAS_OUT')
CAS_OUT = CAS_OUT.drop(to_drop, axis=1)
conn.dataSciencePilot.screenVariables(
table='CAS_OUT',
target=trt,
screenPolicy=scpo,
casout={"name" : "SCREEN_VARIABLES_OUT", "replace" : True}
)
conn.fetch(table = {"name" : "SCREEN_VARIABLES_OUT"})
conn.dataSciencePilot.dsAutoMl(
table = tbl,
target = trt,
explorationPolicy = expo,
screenPolicy = scpo,
selectionPolicy = sepo,
transformationPolicy = trpo,
modelTypes = ["decisionTree", "gradboost"],
objective = "ASE",
sampleSize = 10,
topKPipelines = 10,
kFolds = 5,
transformationOut = {"name" : "TRANSFORMATION_OUT_PY", "replace" : True},
featureOut = {"name" : "FEATURE_OUT_PY", "replace" : True},
pipelineOut = {"name" : "PIPELINE_OUT_PY", "replace" : True},
saveState = {"modelNamePrefix" : "ASTORE_OUT_PY", "replace" : True, "topK":1}
)
conn.fetch(table = {"name" : "TRANSFORMATION_OUT_PY"})
conn.fetch(table = {"name" : "FEATURE_OUT_PY"})
conn.fetch(table = {"name" : "PIPELINE_OUT_PY"})
conn.close()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get Forecast.io API key from config file
Step2: Import API wrapper module
Step3: Get weather data in daily and hourly resolution
Step4: You can use the methods days() and hours() to get a dataframe in daily or hourly resolution
Step5: Degree Days
Step6: Daytime Averages
Step7: Hourly resolution example
Step8: Deal with different timezones
| <ASSISTANT_TASK:>
Python Code:
import os
import sys
import inspect
import pandas as pd
import charts
from opengrid import config
config = config.Config()
#get Forecast.io API Key
api_key = config.get('Forecast.io', 'apikey')
from opengrid.library import forecastwrapper
start = pd.Timestamp('20150813')
end = pd.Timestamp('20150816')
Weather_Ukkel = forecastwrapper.Weather(api_key=api_key, location='Ukkel', start=start, end=end)
Weather_Ukkel.days().info()
Weather_Ukkel.hours().info()
Weather_Ukkel.days(include_heating_degree_days = True,
heating_base_temperatures = [15,18],
include_cooling_degree_days = True,
cooling_base_temperatures = [18,24]).filter(like='DegreeDays')
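# Added illustration (independent of the forecastwrapper API): what a heating degree day
# column represents, computed by hand on a small synthetic series of daily mean temperatures.
# Heating degree days for a given base = max(0, base - daily mean temperature).
# The temperatures and the 16.5 degC base below are made-up values for demonstration only.
synthetic_temps = pd.Series([12.0, 14.5, 17.0, 19.5],
                            index=pd.date_range('20150813', periods=4))
base_temperature = 16.5
heating_degree_days = (base_temperature - synthetic_temps).clip(lower=0)
heating_degree_days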
Weather_Ukkel.days().filter(like='daytime')
start = pd.Timestamp('20150916')
end = pd.Timestamp('20150918')
Weather_Brussel = forecastwrapper.Weather(api_key=api_key, location=[50.8503396, 4.3517103], start=start, end=end)
Weather_Boutersem = forecastwrapper.Weather(api_key=api_key, location='Kapelstraat 1, 3370 Boutersem', start=start, end=end)
df_combined = pd.merge(Weather_Brussel.hours(), Weather_Boutersem.hours(), suffixes=('_Brussel', '_Boutersem'),
left_index=True, right_index=True)
charts.plot(df_combined.filter(like='cloud'), stock=True, show='inline')
start = pd.Timestamp('20150815')
end = pd.Timestamp('20150816')
Weather_London = forecastwrapper.Weather(api_key=api_key, location='London', start=start, end=end, tz='Asia/Singapore')
Weather_Brussels = forecastwrapper.Weather(api_key=api_key, location='Brussels', start=start, end=end, tz='Asia/Singapore')
Weather_London.days()
Weather_Brussels.days()
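# Added illustration: the same timezone handling on a plain, timezone-aware pandas
# DatetimeIndex, which is presumably what the tz argument above controls for the
# returned frames. The index below is synthetic and only for demonstration.
idx = pd.date_range('20150815', periods=3, freq='H', tz='Europe/Brussels')
idx.tz_convert('Asia/Singapore')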
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reset TensorFlow Graph
Step2: Create TensorFlow Session
Step3: Load Model Training and Test/Validation Data
Step4: Randomly Initialize Variables (Weights and Bias)
Step5: View Accuracy of Pre-Training, Initial Random Variables
Step6: Setup Loss Summary Operations for Tensorboard
Step7: Train Model
Step8: View loss in Tensorboard
| <ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow.python.client import timeline
import pylab
import numpy as np
import os
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
tf.logging.set_verbosity(tf.logging.INFO)
tf.reset_default_graph()
sess = tf.Session()
print(sess)
from datetime import datetime
version = int(datetime.now().strftime("%s"))
num_samples = 100000
x_train = np.random.rand(num_samples).astype(np.float32)
print(x_train)
noise = np.random.normal(scale=0.01, size=len(x_train))
y_train = x_train * 0.1 + 0.3 + noise
print(y_train)
pylab.plot(x_train, y_train, '.')
x_test = np.random.rand(len(x_train)).astype(np.float32)
print(x_test)
noise = np.random.normal(scale=.01, size=len(x_train))
y_test = x_test * 0.1 + 0.3 + noise
print(y_test)
pylab.plot(x_test, y_test, '.')
with tf.device("/cpu:0"):
W = tf.get_variable(shape=[], name='weights')
print(W)
b = tf.get_variable(shape=[], name='bias')
print(b)
x_observed = tf.placeholder(shape=[None],
dtype=tf.float32,
name='x_observed')
print(x_observed)
y_pred = W * x_observed + b
print(y_pred)
learning_rate = 0.025
with tf.device("/cpu:0"):
y_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='y_observed')
print(y_observed)
loss_op = tf.reduce_mean(tf.square(y_pred - y_observed))
optimizer_op = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer_op.minimize(loss_op)
print("Loss Scalar: ", loss_op)
print("Optimizer Op: ", optimizer_op)
print("Train Op: ", train_op)
with tf.device("/cpu:0"):
init_op = tf.global_variables_initializer()
print(init_op)
sess.run(init_op)
print("Initial random W: %f" % sess.run(W))
print("Initial random b: %f" % sess.run(b))
def test(x, y):
    return sess.run(loss_op, feed_dict={x_observed: x, y_observed: y})
test(x_train, y_train)
loss_summary_scalar_op = tf.summary.scalar('loss', loss_op)
loss_summary_merge_all_op = tf.summary.merge_all()
train_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/cpu/%s/train' % version,
graph=tf.get_default_graph())
test_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/cpu/%s/test' % version,
graph=tf.get_default_graph())
%%time
with tf.device("/cpu:0"):
run_metadata = tf.RunMetadata()
max_steps = 401
for step in range(max_steps):
if (step < max_steps - 1):
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train})
else:
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train},
options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE),
run_metadata=run_metadata)
trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline-cpu.json', 'w') as trace_file:
trace_file.write(trace.generate_chrome_trace_format(show_memory=True))
if step % 10 == 0:
print(step, sess.run([W, b]))
train_summary_writer.add_summary(train_summary_log, step)
train_summary_writer.flush()
test_summary_writer.add_summary(test_summary_log, step)
test_summary_writer.flush()
pylab.plot(x_train, y_train, '.', label="target")
pylab.plot(x_train, sess.run(y_pred,
feed_dict={x_observed: x_train,
y_observed: y_train}),
".",
label="predicted")
pylab.legend()
pylab.ylim(0, 1.0)
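# Quick check (added): compare the learned parameters with the values used to
# generate the training data (W ~ 0.1, b ~ 0.3).
print("Learned W: %f, b: %f" % (sess.run(W), sess.run(b)))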
import os
optimize_me_parent_path = '/root/models/optimize_me/linear/cpu'
saver = tf.train.Saver()
os.system('rm -rf %s' % optimize_me_parent_path)
os.makedirs(optimize_me_parent_path)
unoptimized_model_graph_path = '%s/unoptimized_cpu.pb' % optimize_me_parent_path
print(unoptimized_model_graph_path)
tf.train.write_graph(sess.graph_def,
'.',
unoptimized_model_graph_path,
as_text=False)
model_checkpoint_path = '%s/model.ckpt' % optimize_me_parent_path
saver.save(sess,
save_path=model_checkpoint_path)
# saver.export_meta_graph(
# filename=model_checkpoint_path,
# clear_devices=True
# )
print(model_checkpoint_path)
print(optimize_me_parent_path)
os.listdir(optimize_me_parent_path)
sess.close()
%%bash
summarize_graph --in_graph=/root/models/optimize_me/linear/cpu/unoptimized_cpu.pb
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import re
from google.protobuf import text_format
from tensorflow.core.framework import graph_pb2
def convert_graph_to_dot(input_graph, output_dot, is_input_graph_binary):
    graph = graph_pb2.GraphDef()
    with open(input_graph, "rb") as fh:
        if is_input_graph_binary:
            graph.ParseFromString(fh.read())
        else:
            text_format.Merge(fh.read(), graph)
    with open(output_dot, "wt") as fh:
        print("digraph graphname {", file=fh)
        for node in graph.node:
            output_name = node.name
            print(" \"" + output_name + "\" [label=\"" + node.op + "\"];", file=fh)
            for input_full_name in node.input:
                parts = input_full_name.split(":")
                input_name = re.sub(r"^\^", "", parts[0])
                print(" \"" + input_name + "\" -> \"" + output_name + "\";", file=fh)
        print("}", file=fh)
        print("Created dot file '%s' for graph '%s'." % (output_dot, input_graph))
input_graph='/root/models/optimize_me/linear/cpu/unoptimized_cpu.pb'
output_dot='/root/notebooks/unoptimized_cpu.dot'
convert_graph_to_dot(input_graph=input_graph, output_dot=output_dot, is_input_graph_binary=True)
%%bash
dot -T png /root/notebooks/unoptimized_cpu.dot \
-o /root/notebooks/unoptimized_cpu.png > /tmp/a.out
from IPython.display import Image
Image('/root/notebooks/unoptimized_cpu.png', width=1024, height=768)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example solution
Step2: Inspect each article for mentions of groups and locations
Step3: Transform defaultdict to a Pandas DataFrame
| <ASSISTANT_TASK:>
Python Code:
# To get you started we can import Pandas and Seaborn which might help you
# build a graph or visualisation of the data
% matplotlib inline
from collections import defaultdict, Counter
import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
import seaborn as sns
import spacy
nlp = spacy.load('en')
def read_file_to_list(file_name):
    with open(file_name, 'r', encoding='utf8') as file:
        return file.readlines()
# The file has been re-encoded in UTF-8, the source encoding is Latin-1
terrorism_articles = read_file_to_list('data/rand-terrorism-dataset.txt')
# Create a list of spaCy Doc objects representing articles
terrorism_articles_nlp = [nlp(art) for art in terrorism_articles]
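# Added sanity check: look at the entities spaCy finds in the first article.
# The entity labels (PERSON, ORG, GPE, ...) are what the filtering below relies on.
for ent in terrorism_articles_nlp[0].ents[:10]:
    print(ent.text, ent.label_)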
common_terrorist_groups = [
'taliban',
'al - qaeda',
'hamas',
'fatah',
'plo',
'bilad al - rafidayn'
]
common_locations = [
'iraq',
'baghdad',
'kirkuk',
'mosul',
'afghanistan',
'kabul',
'basra',
'palestine',
'gaza',
'israel',
'istanbul',
'beirut',
'pakistan'
]
location_entity_dict = defaultdict(Counter)
for article in terrorism_articles_nlp:
    # Get all the group and location entities in the article
    article_terrorist_cands = [ent.lemma_ for ent in article.ents if ent.label_ == 'PERSON' or ent.label_ == 'ORG']
    article_location_cands = [ent.lemma_ for ent in article.ents if ent.label_ == 'GPE']
    # Filter groups and locations for only those which we are interested in
    terrorist_candidates = [ent for ent in article_terrorist_cands if ent in common_terrorist_groups]
    location_candidates = [loc for loc in article_location_cands if loc in common_locations]
    for found_entity in terrorist_candidates:
        for found_location in location_candidates:
            location_entity_dict[found_entity][found_location] += 1
# Let's inspect a specific combination as a cursory check on the for loop operating correctly
location_entity_dict['plo']['beirut']
# Transform the dictionary into a pandas DataFrame and fill NaN values with zeroes
location_entity_df = pd.DataFrame.from_dict(dict(location_entity_dict), dtype=int)
location_entity_full_df = location_entity_df.fillna(value=0).astype(int)
# Show DF to console
location_entity_full_df
# Seaborn can transform a DataFrame directly into a figure
plt.figure()
hmap = sns.heatmap(location_entity_full_df, annot=True, fmt='d', cmap='YlGnBu', cbar=False)
# Add features using the under the hood plt interface
plt.title('Global Incidents by Terrorist group')
plt.xticks(rotation=30)
plt.show()
# You can also mask all the zero figures using features of the DataFrame
heat_mask = location_entity_df.isnull()
hmap = sns.heatmap(location_entity_full_df, annot=True, fmt='d', cmap='YlGnBu', cbar=False, mask=heat_mask)
# Add features using the under the hood plt interface
sns.axes_style('white')
plt.title('Global Incidents by Terrorist group')
plt.xticks(rotation=30)
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Working With Data Files
Step2: Pandas is magic but it doesn't automatically know where to find your data of interest. You likely will have to look at it first using a GUI.
Step3: We see the data of interest is on the second sheet, and contained in columns "TA ID", "N #1 (%)", and "N #2 (%)".
Step4: Note that the actual row headers are stored in row 1 and not 0 above.
Step5: This formatting is closer to what we need.
Step6: For many of these, we can retreive the smiles string via the canonical_smiles attribute of the get_compounds object (using pubchempy)
Step7: However, some of these drug names have variables spaces and symbols (·, (±), etc.), and names that may not be readable by pubchempy.
Step8: Now let's actually convert all these compounds to smiles. This conversion will take a few minutes so might not be a bad spot to go grab a coffee or tea and take a break while this is running! Note that this conversion will sometimes fail so we've added some error handling to catch these cases below.
Step9: Hooray, we have mapped each drug name to its corresponding smiles code.
Step10: Our goal is to build a small molecule model, so let's make sure our molecules are all small. This can be approximated by the length of each smiles string.
Step11: Some of these look rather large, len(smiles) > 150. Let's see what they look like.
Step12: As suspected, these are not small molecules, so we will remove them from the dataset. The argument here is that these molecules could register as inhibitors simply because they are large. They are more likely to sterically block the channel, rather than diffuse inside and bind (which is what we are interested in).
Step13: Now, let's look at the numerical structure of the dataset.
Step14: I don't trust n=1, so I will throw these out.
Step15: We see that most of the data is contained in the gaussian-ish blob centered a bit below zero. We see that there are a few clearly active datapoints located in the bottom left, and one on the top right. These are all distinguished from the majority of the data. How do we handle the data in the blob?
Step16: This looks pretty gaussian, let's get the 95% confidence interval by fitting a gaussian via scipy, and taking 2*the standard deviation
Step17: Now, I don't trust the data outside of the confidence interval, and will therefore drop these datapoints from df.
Step18: Now that data looks much better!
Step19: Now, let's look at the sorted data with error bars.
Step20: Now, let's identify our active compounds.
Step21: In summary, we have
Step22: Now, save this to file.
Step23: Now, we will convert this dataframe to a DeepChem dataset.
Step24: Lastly, it is often advantageous to numerically transform the data in some way. For example, sometimes it is useful to normalize the data, or to zero the mean. This depends on the task at hand.
Step25: Now let's save the balanced dataset object to disk, and then reload it as a sanity check.
| <ASSISTANT_TASK:>
Python Code:
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
!pip install --pre deepchem
import deepchem
deepchem.__version__
!conda install pubchempy
import os
import pandas as pd
from pubchempy import get_cids, get_compounds
import os
from IPython.display import Image, display
current_dir = os.path.dirname(os.path.realpath('__file__'))
data_screenshot = os.path.join(current_dir, 'assets/dataset_preparation_gui.png')
display(Image(filename=data_screenshot))
import deepchem as dc
dc.utils.download_url(
'https://github.com/deepchem/deepchem/raw/master/datasets/Positive%20Modulators%20Summary_%20918.TUC%20_%20v1.xlsx',
current_dir,
'Positive Modulators Summary_ 918.TUC _ v1.xlsx'
)
raw_data_file = os.path.join(current_dir, 'Positive Modulators Summary_ 918.TUC _ v1.xlsx')
raw_data_excel = pd.ExcelFile(raw_data_file)
# second sheet only
raw_data = raw_data_excel.parse(raw_data_excel.sheet_names[1])
# preview 5 rows of raw dataframe
raw_data.loc[raw_data.index[:5]]
# remove column labels (rows 0 and 1), as we will replace them
# only take data given in columns "TA ID" "N #1 (%)" (3) and "N #2 (%)" (4)
raw_data = raw_data.iloc[2:, [2, 6, 7]]
# reset the index so we keep the label but number from 0 again
raw_data.reset_index(inplace=True)
## rename columns
raw_data.columns = ['label', 'drug', 'n1', 'n2']
# preview cleaner dataframe
raw_data.loc[raw_data.index[:5]]
drugs = raw_data['drug'].values
get_compounds(drugs[1], 'name')
get_compounds(drugs[1], 'name')[0].canonical_smiles
import re
ion_replacements = {
'HBr': ' hydrobromide',
'2Br': ' dibromide',
'Br': ' bromide',
'HCl': ' hydrochloride',
'2H2O': ' dihydrate',
'H20': ' hydrate',
'Na': ' sodium'
}
ion_keys = ['H20', 'HBr', 'HCl', '2Br', '2H2O', 'Br', 'Na']
def compound_to_smiles(cmpd):
    # remove spaces and irregular characters
    compound = re.sub(r'([^\s\w]|_)+', '', cmpd)

    # replace ion names if needed
    for ion in ion_keys:
        if ion in compound:
            compound = compound.replace(ion, ion_replacements[ion])

    # query for cid first in order to avoid timeouterror
    cid = get_cids(compound, 'name')[0]
    smiles = get_compounds(cid)[0].canonical_smiles
    return smiles
smiles_map = {}
for i, compound in enumerate(drugs):
    try:
        smiles_map[compound] = compound_to_smiles(compound)
    except:
        print("Errored on %s" % i)
        continue
smiles_data = raw_data
# map drug name to smiles string
smiles_data['drug'] = smiles_data['drug'].apply(lambda x: smiles_map[x] if x in smiles_map else None)
# preview smiles data
smiles_data.loc[smiles_data.index[:5]]
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style('white')
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem import Draw, PyMol, rdFMCS
from rdkit.Chem.Draw import IPythonConsole
from rdkit import rdBase
import numpy as np
smiles_data['len'] = [len(i) if i is not None else 0 for i in smiles_data['drug']]
smiles_lens = [len(i) if i is not None else 0 for i in smiles_data['drug']]
sns.histplot(smiles_lens)
plt.xlabel('len(smiles)')
plt.ylabel('probability')
# indices of large looking molecules
suspiciously_large = np.where(np.array(smiles_lens) > 150)[0]
# corresponding smiles string
long_smiles = smiles_data.loc[smiles_data.index[suspiciously_large]]['drug'].values
# look
Draw._MolsToGridImage([Chem.MolFromSmiles(i) for i in long_smiles], molsPerRow=6)
# drop large molecules
smiles_data = smiles_data[~smiles_data['drug'].isin(long_smiles)]
nan_rows = smiles_data[smiles_data.isnull().T.any().T]
nan_rows[['n1', 'n2']]
df = smiles_data.dropna(axis=0, how='any')
# seaborn jointplot will allow us to compare n1 and n2, and plot each marginal
sns.jointplot(x='n1', y='n2', data=smiles_data)
diff_df = df['n1'] - df['n2']
sns.histplot(diff_df)
plt.xlabel('difference in n')
plt.ylabel('probability')
from scipy import stats
mean, std = stats.norm.fit(np.asarray(diff_df, dtype=np.float32))
ci_95 = std*2
ci_95
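# Added check (not in the original): the fraction of n1 - n2 differences that fall
# inside the fitted 95% band, i.e. within +/- 2 standard deviations, matching the
# cutoff used to flag noisy points below.
print('fraction within +/- ci_95:', np.mean(np.abs(diff_df) <= ci_95))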
noisy = diff_df[abs(diff_df) > ci_95]
df = df.drop(noisy.index)
sns.jointplot(x='n1', y='n2', data=df)
avg_df = df[['label', 'drug']].copy()
n_avg = df[['n1', 'n2']].mean(axis=1)
avg_df['n'] = n_avg
avg_df.sort_values('n', inplace=True)
plt.errorbar(np.arange(avg_df.shape[0]), avg_df['n'], yerr=ci_95, fmt='o')
plt.xlabel('drug, sorted')
plt.ylabel('activity')
actives = avg_df[abs(avg_df['n'])-ci_95 > 25]['n']
plt.errorbar(np.arange(actives.shape[0]), actives, yerr=ci_95, fmt='o')
# summary
print (raw_data.shape, avg_df.shape, len(actives.index))
# 1 if condition for active is met, 0 otherwise
avg_df.loc[:, 'active'] = (abs(avg_df['n'])-ci_95 > 25).astype(int)
avg_df.to_csv('modulators.csv', index=False)
dataset_file = 'modulators.csv'
task = ['active']
featurizer_func = dc.feat.ConvMolFeaturizer()
loader = dc.data.CSVLoader(tasks=task, feature_field='drug', featurizer=featurizer_func)
dataset = loader.create_dataset(dataset_file)
transformer = dc.trans.BalancingTransformer(dataset=dataset)
dataset = transformer.transform(dataset)
dc.utils.save_to_disk(dataset, 'balanced_dataset.joblib')
balanced_dataset = dc.utils.load_from_disk('balanced_dataset.joblib')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Further Cleaning
Step2: Sorting by date and by score
Step3: Plotting my score data
Step4: Using the linear regression models I can now get the coefficient, intercept and root mean square error for the score per turn line and the score per zone line. Also, as the number of turns is displayed when a game is saved I can calculate how many points my current save game is worth. However, I would not expect this figure to be very accurate due to the small number of points available and I'm not willing to have my character die to check how right the figure is!
Step5: A look at some of the remaining data
Step6: I have done slightly more work with the Enemy column. First I print it off as is but then make some changes.
Step7: There are a number of highscores that have an empty line where the death description should be and these are marked unknown. Also, others will just read "died from bleeding". Here bleeding is added as both the enemy name and the weapon.
Step8: Below is the complete dataframe sorted by date
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%pylab inline
col_names = ["Name", "End Time", "Game End Time", "Enemy", "x hit", "Damage", "Weapon", "PV", "Pos Dam", "Score", "Turns", "Zones", "Storied Items", "Artifact"]
#read in the data from the text file, setting the seperator between each column as "\t".
qud = pd.read_csv("Cleaned_Qud_HighScores_1.txt", sep=r"\t+", names = col_names, engine='python')
qud.head()
#Dropping these two values
print qud.iloc[53].values #Forgot to name my character. Deceided to quit by attacking either Mehmet or Warden Ualraig
print qud.iloc[54].values #Took one step and Ualraig wasted Mehmet. Walked around for a while but quit as I could not longer start the "What's Eating The Watervine? mission
#As these are my two lowest scores I can the set the dataframe to be rows 0 to 53 (does not include 53)
qud = qud[0:53]
qud.tail(3)
import re
from datetime import datetime
from time import strptime
def convert_to_date(date_in):
    date_search = re.search("(\w{6,9}),\s*(\w{3,9})\s*(\d{2}),\s*(\d{4})\s*(\d{1,2}):(\d{2}):(\d{2})\s*(\w{2})", date_in)
    # date_search.group(1) = Day as word (ie Sunday), 2 = Month as word (ie August), 3 = day of month, 4 = year, 5 = hour, 6 = minute, 7 = second, 8 = AM or PM
    # In End Time the hour is expressed from 1 to 12, ie 1 AM or 1 PM. The code below converts that to 0 to 23, ie 1 or 13
    hour = int(date_search.group(5))
    if date_search.group(8) == "PM":
        if hour == 12:
            pass
        else:
            hour += 12
    if date_search.group(8) == "AM":
        if hour == 12:
            hour = 0
    # Create a datetime. strptime is used to take the first 3 letters of the Month as word and get the int value for that month, ie August = Aug, is month 8 of 12
    new_date = datetime(int(date_search.group(4)), strptime(date_search.group(2)[:3], "%b").tm_mon, int(date_search.group(3)), hour, int(date_search.group(6)), int(date_search.group(7)))
    return new_date
qud["End Time"] = qud["End Time"].apply(convert_to_date)
#Pull the month out of Game Time
def convert_game_month(date_in):
    date_search = re.search("of\s*((\w*\s*)*)", date_in)
    return date_search.group(1)
qud["Game End Time"] = qud["Game End Time"].apply(convert_game_month)
def clean_artifacts(artifact):
    x_search = re.search("(x\d+)", artifact)  # remove multipliers like "x2"
    if x_search != None:
        artifact = artifact.replace(x_search.group(0), "").strip()
    mul_search = re.search("((-?\d+\s*\d+d\d+)+)", artifact)  # removes pv and possible weapon damage like "2 1d3"
    if mul_search != None:
        artifact = artifact.replace(mul_search.group(0), "").strip()
    artifact = artifact.replace("->", "").replace("<>", "").strip()  # removes -> and <> which should be empty from previous cleaning
    cell_search = re.search("(\[(\w*\s*)*\])", artifact)  # removes [no cell], [shotgun shell] etc
    if cell_search != None:
        artifact = artifact.replace(cell_search.group(0), "").strip()
    digit_search = re.search("((\d+\s*)+)", artifact)  # removes any remaining digits such as av dv ie 2 4
    if digit_search != None:
        artifact = artifact.replace(digit_search.group(0), "").strip()
    return artifact
qud["Artifact"] = qud["Artifact"].apply(clean_artifacts)
qud.head() #print new, clean dataframe
sorted_qud = qud.sort(["End Time"]).reset_index(drop = True) #Sort by End Time, reset the index and drop the old index
highscore = -10000
print "Highscore Progression" #Game Number, Name, Date, Score
for score in sorted_qud["Score"]:
if int(score) > highscore:
highscore = int(score)
print "%d %s %s %d" % (
int(sorted_qud.index[sorted_qud["Score"] == score][0])+ 1, #the index value of the game + 1. My first game is at index 0 so add 1 and this becomes game 1
sorted_qud["Name"][sorted_qud["Score"] == score].tolist()[0], #Character's name
sorted_qud["End Time"][sorted_qud["Score"] == score].tolist()[0], #End Time of game
int(score) #Score
)
print "\n"
print "Highest Scores"
sorted_scores = qud.sort(["Score"], ascending = False).reset_index(drop = True) #sort by score
for i in range(5):
    print sorted_scores["Name"].iloc[i], sorted_scores["End Time"].iloc[i], sorted_scores["Score"].iloc[i]  # print Name, End Time and Score for first 5 rows
from sklearn import linear_model
fig = plt.figure(figsize = (20,10))
p1 = fig.add_subplot(221) # 2x2, plot 1 (top left)
p2 = fig.add_subplot(222) # 2x2, plot 2 (top right)
p3 = fig.add_subplot(223) # 2x2, plot 3 (bottom left)
p4 = fig.add_subplot(224) # 2x2, plot 4 (bottom right)
#Turns to Score
p1.scatter(qud["Turns"], qud["Score"], color="green") #Turns on x axis, score on y axis, color green (this is Qud after all)
X = np.array(qud["Turns"]).reshape(len(qud),1) #variable X is an np.array of the turns, len(qud) rows, 1 column
y= np.array(qud["Score"]).reshape(len(qud),1) #variable y is an np.array of the scores, len(qud) rows, 1 column
turns_score = linear_model.LinearRegression()
turns_score.fit(X, y) #fit turns and score using linear regression
#plot a line with turns on the x axis and predicted score for that many turns from the linear regression model on the y axis
p1.plot(qud["Turns"], turns_score.predict(X), color="red")
p1.set_title("Score per Turn")
p1.set_xlabel("Turns")
p1.set_ylabel("Score")
p1.axis('tight')
#Zones to Score
p2.scatter(qud["Zones"], qud["Score"], color="green")
X= np.array(qud["Zones"]).reshape(len(qud),1) #Update X to be an np.array of zones, y stays as score above
zones_score = linear_model.LinearRegression()
zones_score.fit(X, y) #fit zones to score
#plot a line with zones on the x axis and predicted score for that many zones from the linear regression model on the y axis
p2.plot(qud["Zones"], zones_score.predict(X), color="red")
p2.set_title("Score per Zone")
p2.set_xlabel("Zones")
p2.set_ylabel("Score")
p2.axis('tight')
#using the sorted by date dataframe plot a bar chart of the scores. sorted_qud.index.values starts at 0, not 1
p3.bar(sorted_qud.index.values, sorted_qud["Score"], color="green")
p3.plot(pd.rolling_mean(sorted_qud["Score"].values, window=5, min_periods=1), color="red", linewidth=2) #plot a 5 game simple moving average
p3.set_title("5 Game Moving Average")
p3.set_xlabel("Game (Vertical lines represent patches: Aug 4, Aug 8, Aug 15/21)")
p3.set_ylabel("Score")
p3.axis('tight')
#These numbers are plotted manually from looking at the dataframe and seeing when was the first game I played on/after each patch release
p3.axvline(24, color = "red", linewidth = 2) #first game on/after Aug 4th
p3.axvline(27, color = "red", linewidth = 2) #first game on/after Aug 8th
p3.axvline(29, color = "red", linewidth = 2) #first game on/after Aug 15th and 21st
#Histogram. Depressing
p4.hist(qud["Score"], bins = 50);
p4.axis('tight')
p4.set_title("Score Frequency")
p4.set_xlabel("Score (50 bins)")
p4.set_ylabel("Frequency")
plt.tight_layout()
from sklearn.metrics import mean_squared_error
from math import sqrt
print "For Score Per Turn"
print "Total turns multiplied by the coefficient plus the intercept = my score"
print "Coefficient: ", turns_score.coef_[0][0]
print "Intercept: ", turns_score.intercept_[0]
print "RMSE: ", sqrt(mean_squared_error(y, turns_score.predict(np.array(qud["Turns"]).reshape(len(qud),1))))
print "Predicted score from my current game (59924 turns): ", int(turns_score.predict(59924)[0][0])
print "Turns needed for 100,000 points: ", int(math.ceil(((100000 + abs(turns_score.intercept_))/turns_score.coef_)[0][0]))
print "Turns needed for 1,000,000 points: ", int(math.ceil(((1000000 + abs(turns_score.intercept_))/turns_score.coef_)[0][0]))
print "For Score Per Zone"
print "Total zones visited multiplied by the coefficient plus the intercept = my score"
print "Coefficient: ", zones_score.coef_[0][0]
print "Intercept ", zones_score.intercept_[0]
print "RMSE: ", sqrt(mean_squared_error(y, zones_score.predict(np.array(qud["Zones"]).reshape(len(qud),1))))
#Each month mentioned in the Game End Time
game_months = qud["Game End Time"]
print np.unique(game_months)
print len(np.unique(game_months))
#Use groupby to find most mentioned month, ie the month I have died most in. Nivvun Ut is the very first month...
qud['Game End Time'].groupby(qud['Game End Time']).count().order(ascending = False)
#Use group by to find the most advanced artifact I held when I died. Lots of no artifacts and lots of artifacts awarded for finishing the first 2 missions in Joppa
qud['Artifact'].groupby(qud['Artifact']).count().order(ascending = False)
qud['Enemy'].groupby(qud['Enemy']).count().order(ascending = False)
#create a list called enemies, add new values to it, convert to a dataframe and groupby name
enemies = qud["Enemy"].tolist()
for i in range(len(enemies)):
    name = enemies[i].strip()
    if name in ["Wahmahcalcalit", "Umchuum", "Duhmahcaluhcal"]:
        enemies[i] = "wizard"
    if name in ["snapjaw scavenger", "napjaw scavenger", "snapjaw hunter", "Groubuubu-wof-wofuz, the stalwart Snapjaw Tot-eater", "Ruf-ohoubub, the stalwart Snapjaw Bear-baiter"]:
        enemies[i] = "snapjaw"
    if name in ["young ivory", "bleeding"]:
        enemies[i] = "young ivory/bleeding"
    if name in ["scalding steam", "fire ant"]:
        enemies[i] = "fire ant/scalding steam"
enemy_df = pd.DataFrame(enemies, columns=["Name"])
enemy_df['Name'].groupby(enemy_df['Name']).count().order(ascending = False)
qud['Weapon'].groupby(qud['Weapon']).count().order(ascending = False)
sorted_qud
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Multicollearity
Step4: As we can see in this example, the model indeed recovers the underlying structure of the data very well, despite quite significant noise in the data. Given that the the predictors are on the same scale, we can compare the coefficients directly to determine variable importance, we can see here that when using linear regression, X2 is the most important predictor for this given dataset. To be explicit, standardized coefficients represent the mean change in the response given one standard deviation change in the predictor.
Step5: Though widely used, this is actually a measurement that requires some context for it to be a valid evaluation metric. We'll give some examples of why
Step6: When checking the R-squared value for this model, it’s very high at about 0.90, but the model is completely wrong as this data follows a nonlinear distribution. Using R-squared to justify the "goodness" of our model in this instance would be a mistake. Hopefully one would plot the data first and recognize that a simple linear regression in this case would be inappropriate.
Step7: We repeat the above code, but this time with a different range of x. Leaving everything else the same
Step9: R-squared falls from around 0.9 to around 0.2, but the MSE remains fairly the same. In other words the predictive ability is the same for both data sets, but the R-squared would lead you to believe the first example somehow had a model with more predictive power.
Step12: The coefficients of our fitted linear model sums up to ~3, so we can expect it to perform well. On the other hand, if we were to interpret the coefficients at face value, then according to the model $X3$ has a strong positive impact on the output variable, while $X1$ has a negative one, when in fact all the features are correlated and should have equal effects to the output variable. This multicollearity issue also applies to other methods/algorithms and should be addressed before feeding our data to a machine learning method/algorithm.
Step14: Cramer's V
| <ASSISTANT_TASK:>
Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style()
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import chi2_contingency
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error
%watermark -a 'Ethen' -d -t -v -p scipy,numpy,pandas,sklearn,matplotlib
# sklearn's LinearRegression may give harmless errors
# https://github.com/scipy/scipy/issues/5998
warnings.filterwarnings(
action = 'ignore', module = 'scipy', message = '^internal gelsd')
def pretty_print_linear(estimator, names=None, sort=False):
    """A helper method for pretty-printing linear models' coefficients"""
    coef = estimator.coef_
    if names is None:
        names = ['X%s' % x for x in range(1, len(coef) + 1)]

    info = zip(coef, names)
    if sort:
        info = sorted(info, key=lambda x: -np.abs(x[0]))

    output = ['{} * {}'.format(round(coef, 3), name) for coef, name in info]
    output = ' + '.join(output)
    return output
# A dataset with 3 features
size = 5000
np.random.seed(0)
X = np.random.normal(0, 1, (size, 3))
# y = X0 + 2 * X1 + noise
y = X[:, 0] + 2 * X[:, 1] + np.random.normal(0, 2, size)
linear = LinearRegression()
linear.fit(X, y)
print('Linear model:', pretty_print_linear(linear))
def rsquared_score(y_true, y_pred):
    """rsquared evaluation metric"""
    rss = np.sum((y_true - y_pred) ** 2)
    tss = np.sum((y_true - np.mean(y_true)) ** 2)
    rsquared = 1 - rss / tss
    return rsquared
y_pred = linear.predict(X)
print('rsquared:', rsquared_score(y, y_pred))
# we can use scikit-learn's r2_score function
# by passing in the predicted value and the true label
print('rsquared:', r2_score(y, y_pred))
# or for regression models, the default evaluation
# metric is set to be rsquared and can be accessed
# through the .score method
print('rsquared:', linear.score(X, y))
# change default style figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
# generate some exponential data and fit a linear regression to it
rstate = np.random.RandomState(1)
x = rstate.exponential(scale = 1 / 0.005, size = 50)
y = (x - 1) ** 2 * rstate.uniform(low = 0.8, high = 1.2, size = 50)
# scikit-learn model expects a 2d ndarray
# even if the data only contains 1 feature
X = x.reshape(-1, 1)
linear = LinearRegression()
linear.fit(X, y)
print('rsquared:', linear.score(X, y))
y_pred = linear.predict(X)
plt.plot(x, y_pred, 'r')
plt.scatter(x, y)
plt.show()
x = np.linspace(1, 10, 100)
y = 2 + 1.2 * x + rstate.normal(loc = 0, scale = 0.9, size = 100)
X = x.reshape(-1, 1)
linear = LinearRegression()
linear.fit(X, y)
y_pred = linear.predict(X)
print('rsquared:', r2_score(y, y_pred))
print('mse:', mean_squared_error(y, y_pred))
plt.plot(x, y_pred, 'r')
plt.scatter(x, y)
plt.show()
# smaller range for x
x = np.linspace(1, 2, 100)
y = 2 + 1.2 * x + rstate.normal(loc = 0, scale = 0.9, size = 100)
X = x.reshape(-1, 1)
linear = LinearRegression()
linear.fit(X, y)
y_pred = linear.predict(X)
print('rsquared:', r2_score(y, y_pred))
print('mse:', mean_squared_error(y, y_pred))
plt.plot(x, y_pred, 'r')
plt.scatter(x, y)
plt.show()
def generate_random_data(size, seed):
    """Example of collinear features existing within the data"""
    rstate = np.random.RandomState(seed)
    X_seed = rstate.normal(0, 1, size)
    X1 = X_seed + rstate.normal(0, .1, size)
    X2 = X_seed + rstate.normal(0, .1, size)
    X3 = X_seed + rstate.normal(0, .1, size)
    y = X1 + X2 + X3 + rstate.normal(0, 1, size)
    X = np.array([X1, X2, X3]).T
    return X, y
seed = 5
size = 100
X, y = generate_random_data(size, seed)
linear = LinearRegression()
linear.fit(X, y)
print('Linear model:', pretty_print_linear(linear))
print('rsquared:', linear.score(X, y))
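# Added check: the individual coefficients are unstable under collinearity,
# but their sum stays close to the true total effect of ~3.
print('sum of coefficients:', linear.coef_.sum())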
def remove_collinearity(X, colnames=None, threshold=5.0):
    """
    Identify multi-collinearity between the numeric variables
    using variance inflation factor (vif)
    """
    if colnames is None:
        colnames = ['feature' + str(j) for j in range(1, X.shape[1] + 1)]

    while True:
        n_features = X.shape[1]
        if n_features == 1:
            break

        vif = [compute_vif(X, index) for index in range(n_features)]
        max_index = np.argmax(vif)
        if vif[max_index] >= threshold:
            removed = colnames[max_index]
            colnames.remove(removed)
            X = np.delete(X, max_index, axis=1)
        else:
            break

    return X, colnames
def compute_vif(X, target_index):
    """
    Similar implementation as statsmodel's variance_inflation_factor
    with some enhancements:

    1. includes the intercept by default
    2. prevents float division errors (dividing by 0)

    References
    ----------
    http://www.statsmodels.org/dev/generated/statsmodels.stats.outliers_influence.variance_inflation_factor.html
    """
    n_features = X.shape[1]
    X_target = X[:, target_index]
    mask = np.arange(n_features) != target_index
    X_not_target = X[:, mask]

    linear = LinearRegression()
    linear.fit(X_not_target, X_target)
    rsquared = linear.score(X_not_target, X_target)
    vif = 1. / (1. - rsquared + 1e-5)
    return vif
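# Added check (not in the original): VIF of each feature before any removal.
# Values far above the default threshold of 5 confirm the strong collinearity.
print('VIFs:', [round(compute_vif(X, index), 1) for index in range(X.shape[1])])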
# removing collinearity, thus redundant features
# while still retaining predictive power
X, colnames = remove_collinearity(X)
print('remaining feature:', colnames)
linear = LinearRegression()
linear.fit(X, y)
print('Linear model:', pretty_print_linear(linear))
print('rsquared:', linear.score(X, y))
# generate a correlated categorical variable
# and see if cramer's v method will detect it
df = pd.DataFrame(index = range(1, 8))
df['a'] = ['chicken', 'chicken', 'chicken', 'chicken', 'chat', 'chat', 'chat']
df['b'] = ['dog', 'dog', 'dog', 'dog', 'cat', 'cat', 'cat']
observed = pd.crosstab(df['a'], df['b'])
observed
def compute_cramersv(observed, correction=False):
    """
    Parameters
    ----------
    observed : 2d ndarray
        The contingency table. The table contains the observed frequencies
        (i.e. number of occurrences) for each category.

    correction : bool, default False
        If True, and the degrees of freedom is 1, apply Yates' correction for continuity.
        The effect of the correction is to adjust each observed value by 0.5 towards the
        corresponding expected value. This is set to False by default as the effect of
        Yates' correction is to prevent overestimation of statistical significance for small
        data, i.e. it is chiefly used when at least one cell of the table has an expected
        count smaller than 5, and most people probably aren't working with a data size that's
        that small.

    Returns
    -------
    cramersv : float
    """
    n_obs = observed.sum()
    n_row, n_col = observed.shape
    chi2 = chi2_contingency(observed, correction=correction)[0]
    cramersv = np.sqrt(chi2 / (n_obs * min(n_row - 1, n_col - 1)))
    return cramersv
correction = False
observed = observed.values
compute_cramersv(observed, correction)
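# For contrast (added, with a made-up extra column): a variable that is only weakly
# related to 'a' yields a much smaller Cramer's V than the perfect association above.
df['c'] = ['x', 'y', 'x', 'y', 'x', 'y', 'x']
compute_cramersv(pd.crosstab(df['a'], df['c']).values, correction)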
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Any one point inside the unit square would represent an image. For example the image associated with the point $(0.25,0.85)$ is shown below.
Step2: Now consider the case where there is some process correlating the two variables. This would be similar to the underlying biophysics governing the activity of an astrocyte. In that case, the pixels would be correlated in some manner due to the mechanism driving the cell and we would see structure in the microscopy recordings. In this simple case, let's consider a direct correlation of the form $x_1 = \frac{1}{2} \cos(2\pi x_2)+\frac{1}{2}+\epsilon$ where $\epsilon$ is a noise term coming from a low variability normal distribution $\epsilon \sim N(0,\frac{1}{10})$. We see below that in this case, the images plotted in two dimensions resulting from this relationship form a distinct pattern. In addition if we look at the images themselves one may be able to see a pattern...
Step3: We will refer to the structure suggested by the two dimensional points as the 'manifold'. This is a common practice when analyzing images. A 28 by 28 dimensional image will be a point in 784 dimensional space. If we are examining images with structure, various images of the number 2 for example, then it turns out that these images will form a manifold in 784 dimensional space. In most cases, as is the case in our contrived example, this manifold exists in a lower dimensional space than that of the images themselves. The goal is to 'learn' this manifold. In our simple case we can describe the manifold as a function of only 1 variable $$f(t) = <t,\frac{1}{2} \cos(2\pi t)+\frac{1}{2}>$$ This is what we would call the underlying data generating process. In practice we usually describe the manifold in terms of a probability distribution. We will refer to the data generating distribution in our example as $p_{test}(x_1, x_2)$. Why did we choose a probability to describe the manifold created by the data generating process? How might this probability be interpreted?
Step4: As our intuition might have suggested, the data generating distribution looks very similar to the structure suggested by the two dimensional images plotted above. There is high probability very near the actual curve $x_1 = \frac{1}{2} \cos(2\pi x_2)+\frac{1}{2}$ and low probability as we move away. We imposed the uncertainty via the Gaussian noise term $\epsilon$. However, in real data the uncertainty can be due to the myriad of sources outlined above. In these cases a complex probability distribution isn't an arbitrary choice for representing the data, it becomes necessary [cite Cristopher Bishop 2006].
| <ASSISTANT_TASK:>
Python Code:
x1 = np.random.uniform(size=500)
x2 = np.random.uniform(size=500)
fig = plt.figure();
ax = fig.add_subplot(1,1,1);
ax.scatter(x1,x2, edgecolor='black', s=80);
ax.grid();
ax.set_axisbelow(True);
ax.set_xlim(-0.25,1.25); ax.set_ylim(-0.25,1.25)
ax.set_xlabel('Pixel 2'); ax.set_ylabel('Pixel 1'); plt.savefig('images_in_2dspace.pdf')
im = [(0.25, 0.85)]
plt.imshow(im, cmap='gray',vmin=0,vmax=1)
plt.tick_params(
axis='both', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom='off', # ticks along the bottom edge are off
top='off', # ticks along the top edge are off
left='off',
right='off'
)
plt.xticks([])
plt.yticks([])
plt.xlabel('Pixel 1 = 0.25 Pixel 2 = 0.85')
plt.savefig('sample_2dspace_image.pdf')
x1 = lambda x2: 0.5*np.cos(2*np.pi*x2)+0.5
x2 = np.linspace(0,1,200)
eps = np.random.normal(scale=0.1, size=200)
fig = plt.figure();
ax = fig.add_subplot(1,1,1);
ax.scatter(x2,x1(x2)+eps, edgecolor='black', s=80);
ax.grid();
ax.set_axisbelow(True);
ax.set_xlim(-0.25,1.25); ax.set_ylim(-0.25,1.25); plt.axes().set_aspect('equal')
ax.set_xlabel('Pixel 2'); ax.set_ylabel('Pixel 1'); plt.savefig('structured_images_in_2dspace.pdf')
structured_images = zip(x1(np.linspace(0,1,10)), np.linspace(0,1,10))
for im in structured_images:
    plt.figure(); plt.imshow([im], cmap='gray', vmin=0, vmax=1)
from matplotlib.colors import LogNorm
x2 = np.random.uniform(size=100000)
eps = np.random.normal(scale=0.1, size=100000)
hist2d = plt.hist2d(x2,x1(x2)+eps, bins=50, norm=LogNorm())
plt.xlim(0.0,1.0); plt.ylim(-0.3,1.3); plt.axes().set_aspect('equal')
plt.xlabel('Pixel 2'); plt.ylabel('Pixel 1')
plt.colorbar();
plt.savefig('histogram_of_structured_images.pdf')
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')
X,Y = np.mgrid[0:50,0:50]
ax.plot_surface(X, Y, hist2d[0])#, linewidth=0, antialiased=False)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Quantities can be converted to other unit systems or factors by using to()
Step2: We can do arithmetic operations when the quantities have the compatible units
Step3: Quantities can also be combined, for example to measure speed
Step4: Exercises
Step5: Composed units
Step6: and others are already a composition
Step7: Sometimes we get quantities with no units
Step8: What happens if we add a number to this?
Step9: Equivalencies
Step10: Other built-in equivalencies are
Step11: Printing the quantities
Step12: Arrays
Step13: Plotting quantities
Step14: Creating functions with quantities as units
Step15: Exercises
Step16: Create your own units
Step17: Exercises
| <ASSISTANT_TASK:>
Python Code:
from astropy import units as u
# Define a quantity length
# print it
# Type of quantity
# Type of unit
# Quantity
# value
# unit
# information
# Convert it to: km, lyr
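# Minimal sketch (added) of the steps above, assuming an arbitrary length of 1500 m:
length = 1500.0 * u.m
print(length, type(length), length.value, length.unit)
print(length.to(u.km), length.to(u.lyr))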
# arithmetic with distances
# calculate a speed
# decompose it
#1
#2
#3
# create a composite unit
# and in the imperial system
# what can be converted from s-1?
# or Jules?
# Unity of R
# no units
# arithmetic with no units
# final value of a no unit quantity
# converting spectral quantities
# but doing it right
# finding the equivalencies
# but also using other systems
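# Added sketch: conversions that change the physical dimension (e.g. wavelength to
# frequency) need an equivalency, here u.spectral(); the 500 nm value is illustrative.
(500.0 * u.nm).to(u.GHz, equivalencies=u.spectral())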
# Printing values with different formats
# different ways of defining a quantity for a single value
# now with lists
# and arrays
# and its arithmetics
# angles are smart!
# allowing for plotting
from astropy.visualization import quantity_support
quantity_support()
# loading matplotlib
%matplotlib inline
from matplotlib import pyplot as plt
# Ploting the previous array
# Create a function for the Kinetic energy
# run with and without units
#4
# run it for some values
# on Mars:
# Create units for a laugh scale
#5
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-2', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
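# Hypothetical illustration of how these setters are filled in (the name and
# email below are placeholders, not real document authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")
# DOC.set_contributor("John Roe", "john.roe@example.org")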
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
| <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 2
sample_id = 3
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalized data
return x / np.max(x)
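# A more defensive variant (an assumption, not part of the original solution):
# min-max scaling guarantees the output spans 0..1 even when the minimum pixel
# value in the batch is not 0.
def normalize_minmax(x):
    x = x.astype(np.float32)
    return (x - x.min()) / (x.max() - x.min())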
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
return np.eye(10)[x]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a bach of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(tf.float32, shape=[None, image_shape[0], image_shape[1], image_shape[2]], name='x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.float32, shape=[None, n_classes], name='y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32, name='keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
F_W = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_tensor.shape.as_list()[3], conv_num_outputs], stddev=0.05))
F_b = tf.Variable(tf.zeros(conv_num_outputs))
output = tf.nn.conv2d(x_tensor, F_W, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
output = tf.nn.bias_add(output, F_b)
output = tf.nn.relu(output)
output = tf.nn.max_pool(output, ksize=[1, pool_ksize[0], pool_ksize[1], 1], strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME')
return output
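# Optional shape sanity check (illustrative sizes only, not part of the project
# template): with 'SAME' padding and a 2x2 pool stride, a 32x32 input should
# come out as (?, 16, 16, 10).
_check_in = tf.placeholder(tf.float32, [None, 32, 32, 3])
print(conv2d_maxpool(_check_in, 10, (3, 3), (1, 1), (2, 2), (2, 2)).shape)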
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
return tf.contrib.layers.flatten(x_tensor)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
return tf.contrib.layers.fully_connected(inputs=x_tensor, num_outputs=num_outputs, activation_fn=tf.nn.relu)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
return tf.contrib.layers.fully_connected(inputs=x_tensor, num_outputs=num_outputs, activation_fn=None)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
c_layer = conv2d_maxpool(x, 32, (8, 8), (1, 1), (4, 4), (2, 2))
c_layer = conv2d_maxpool(c_layer, 128, (4,4), (1,1), (4,4), (2,2))
c_layer = conv2d_maxpool(c_layer, 512, (2,2), (1,1), (4,4), (2,2))
c_layer = tf.nn.dropout(c_layer, keep_prob)
# Function Definition from Above:
# flatten(x_tensor)
flat = flatten(c_layer)
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fc_layer = fully_conn(flat, 512)
fc_layer = tf.nn.dropout(fc_layer, keep_prob)
fc_layer = fully_conn(fc_layer, 128)
fc_layer = tf.nn.dropout(fc_layer, keep_prob)
fc_layer = fully_conn(fc_layer, 32)
# Function Definition from Above:
# output(x_tensor, num_outputs)
o_layer = output(fc_layer, 10)
return o_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability})
pass
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
loss = session.run(cost, feed_dict={ x: feature_batch, y: label_batch, keep_prob: 1.0})
valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
pass
# TODO: Tune Parameters
epochs = 15
batch_size = 512
keep_probability = .7
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create combined data
Step2: Load Datafiles
Step3: Shuffle the data
Step4: Get parts of speech for text string
Step5: Get POS trigrams for a text string
Step6: Turn Trigrams into Dict keys
Step7: Take the trigrams and index them
Step8: Chunking the data for TF
Step9: Setting up TF
Step10: Initialize
Step11: Training
Step12: Playground
Step13: Save the vocab
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
import spacy
nlp = spacy.load('en')
import re
from nltk.util import ngrams, trigrams
import csv
import subprocess
subprocess.Popen("python combine.py childrens_fragments".split(), cwd='../data/fragments/participle-phrases')
texts = []
labels = []
with open("../data/fragments/participle-phrases/childrens_fragments.combined.txt","r") as f:
for i, sentence_or_fragment in enumerate(f):
if i % 2 == 0:
labels.append(0)
else:
labels.append(1)
texts.append(sentence_or_fragment.strip())
print(texts[-10:])
import random
combined = list(zip(texts,labels))
random.shuffle(combined)
texts[:], labels[:] = zip(*combined)
print(texts[-10:])
print(labels[-10:])
def textStringToPOSArray(text):
doc = nlp(text)
tags = []
for word in doc:
tags.append(word.tag_)
return tags
textStringToPOSArray(texts[3])
def find_ngrams(input_list, n):
return zip(*[input_list[i:] for i in range(n)])
def getPOSTrigramsForTextString(text):
tags = textStringToPOSArray(text)
tgrams = list(trigrams(tags))
return tgrams
print("Text: ", texts[3], labels[3])
getPOSTrigramsForTextString(texts[3])
def trigramsToDictKeys(trigrams):
keys = []
for trigram in trigrams:
keys.append('>'.join(trigram))
return keys
print(texts[2])
print(trigramsToDictKeys(getPOSTrigramsForTextString(texts[2])))
from collections import Counter
c = Counter()
for textString in texts:
c.update(trigramsToDictKeys(getPOSTrigramsForTextString(textString)))
total_counts = c
print("Total words in data set: ", len(total_counts))
vocab = sorted(total_counts, key=total_counts.get, reverse=True)
print(vocab[:60])
print(vocab[-1], ': ', total_counts[vocab[-1]])
word2idx = {n: i for i, n in enumerate(vocab)}## create the word-to-index dictionary here
print(word2idx)
def textToTrigrams(text):
return trigramsToDictKeys(getPOSTrigramsForTextString(text))
def text_to_vector(text):
wordVector = np.zeros(len(vocab))
for word in textToTrigrams(text):
index = word2idx.get(word, None)
if index != None:
wordVector[index] += 1
return wordVector
text_to_vector('Donald, standing on the precipice, began to dance.')[:65]
word_vectors = np.zeros((len(texts), len(vocab)), dtype=np.int_)
for ii, text in enumerate(texts):
word_vectors[ii] = text_to_vector(text)
# Printing out the first 5 word vectors
word_vectors[:5, :23]
records = len(labels)
train_fraction = 0.9
train_split = int(records*train_fraction)
print(train_split, records - train_split)
trainX, trainY = word_vectors[:train_split], to_categorical(labels[:train_split], 2)
testX, testY = word_vectors[train_split:], to_categorical(labels[train_split:], 2)
trainX[-1], trainY[-1]
len(trainY), len(testY), len(trainY) + len(testY)
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, len(vocab)]) # Input
net = tflearn.fully_connected(net, 200, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 25, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
len(vocab)
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=50)
# Testing
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
w = csv.writer(open("../models/participlevocabindex.csv", "w"))
for key, val in word2idx.items():
w.writerow([key, val])
model.save("../models/participle_model.tfl")
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence)])[0][1]
print('Is this a participle phrase fragment?\n {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Yes' if positive_prob > 0.5 else 'No')
test_sentence("Neglecting to recognize the horrors those people endure allow people to go to war more easily.")
test_sentence("Katherine, gesticulating wildly and dripping in sweat, kissed him on the cheek.")
test_sentence("Working far into the night in an effort to salvage her little boat.")
test_sentence("Working far into the night in an effort to salvage her little boat, she slowly grew tired.")
test_sentence("Rushing to the rescue with his party.")
test_sentence("Isobel was about thirteen now, and as pretty a girl, according to Buzzby, as you could meet with in any part of Britain.")
test_sentence("Being of a modest and retiring disposition, Mr. Hawthorne avoided publicity.")
test_sentence("Clambering to the top of a bridge, he observed a great rainbow")
test_sentence("Clambering to the top of a bridge.")
test_sentence("He observed a great rainbow.")
test_sentence("Sitting on the iron throne, Joffry looked rather fat.")
test_sentence("Worrying that a meteor or chunk of space debris will conk her on the head.")
test_sentence("Aunt Olivia always wears a motorcycle helmet, worrying that a meteor or chunk of space debris will conk her on the head")
test_sentence("Affecting the lives of many students in New York City.")
test_sentence("Quill was a miracle, affecting the lives of many students in New York City.")
test_sentence("Standing on the edge of the cliff looking down.")
test_sentence("Emilia, standing on the edge of the cliff and looking down, began to weep.")
test_sentence("Standing on the edge of the cliff and looking down, Emilia began to weep.")
vocab
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pandas is the software package that you will use to generate "data frames" which are just Python representations of data that you have collected.
Step2: Add the rest of the antibiotics to the graph
| <ASSISTANT_TASK:>
Python Code:
# this would be a comment
# cells like this are like an advanced calculator
# for example:
2+2
# Load the packages into memory by running this cell
import pandas as pd
import numpy as np
import pygal
# Example of how to use pandas to read and load a "comma-separated-value" or csv file.
# You can create csv files in any text editor (like notepad)
# or in programs that use spreadsheets (Excel/Numbers/Google Sheets)
ecoli = pd.read_csv("kb_ecoli.csv")
# You can display the data you just loaded in a table
ecoli
# Start by replacing "ab#" in the csv file by the real antibiotic name
# that we used in the microbiology laboratory and then reload the data
# if you did this correctly, the table should have the correct names
ecoli = pd.read_csv("kb_ecoli.csv")
ecoli
# We can extract the data from a single column using its name
antibiotic1=ecoli.ab1
# or by its location in the data frame
antibiotic12=ecoli.iloc[0:,11]
antibiotic12
# you can also check the name of the column (remember python indexing starts at 0!)
ecoli.columns[0]
# Or we can directly calculate average values using numpy
antibiotic1=np.mean(ecoli.ab1)
antibiotic1
antibiotic12=np.mean(ecoli.ab12)
antibiotic12
# and we can already create a bar graph that displays the data with pygal
bar_chart = pygal.Bar()
bar_chart.title = "Kirby Bauer results for E.coli"
bar_chart.x_labels = 'ab1','ab12';
bar_chart.add('name of ab1', antibiotic1)
bar_chart.add(ecoli.columns[11], antibiotic12)
bar_chart.render_to_file('kirbybauer_ecoli.svg')
# the graph was saved as an svg file in your working directory
# you can open that svg file in a new browser tab
# we can use some optional arguments to put labels
bar_chart = pygal.Bar()
bar_chart.title = "Kirby Bauer results for E.coli"
bar_chart.x_title = 'Antibiotics';
bar_chart.y_title = 'Zone of inhibition (mm)';
bar_chart.add('name of ab1', antibiotic1)
bar_chart.add(ecoli.columns[11], antibiotic12)
# bar_chart.x_labels = [{'label': 'AB1','value': 1},{'label': 'AB12','value': 12}]
bar_chart.render_to_file('kirbybauer_ecoli.svg')
# reload the tab that contains the graph
# you could even use advanced options to put error bars
# and using numpy's standard deviation function: np.std()
bar_chart = pygal.Bar()
bar_chart.title = "Kirby Bauer results for E.coli"
bar_chart.x_title = 'Antibiotics';
bar_chart.y_title = 'Zone of inhibition (mm)';
bar_chart.add('name of ab1', antibiotic1)
bar_chart.add(ecoli.columns[11], [{'value': antibiotic12, 'ci':
{'low': np.mean(ecoli.ab12)-np.std(ecoli.ab12), 'high': np.mean(ecoli.ab12)+np.std(ecoli.ab12)}}])
# bar_chart.add('Second', [{'value': np.mean(ecoli.ab2), 'ci': {'high': 5}}])
bar_chart.render_to_file('kirbybauer_ecoli.svg')
# reload the tab that contains the graph
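# A hedged sketch of the exercise above: loop over every antibiotic column in the
# corrected csv and add a bar with mean +/- one standard deviation (this assumes
# all columns in ecoli hold numeric zone measurements).
bar_chart = pygal.Bar()
bar_chart.title = "Kirby Bauer results for E.coli"
bar_chart.x_title = 'Antibiotics'
bar_chart.y_title = 'Zone of inhibition (mm)'
for column in ecoli.columns:
    mean = np.mean(ecoli[column])
    std = np.std(ecoli[column])
    bar_chart.add(column, [{'value': mean, 'ci': {'low': mean - std, 'high': mean + std}}])
bar_chart.render_to_file('kirbybauer_ecoli_all.svg')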
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 3.8.1 Sorting out the metadata
Step2: Our Project has to be updated with the recent changes to the spreadsheet
Step3: Such fixes can also be done programmatically (see section 3.4)
Step4: If we print this Concatenation object we get this message
Step5: 3.8.3 Building the supermatrix
Step6: Now that this supermatrix is stored as a trimmed alignment in the pj.trimmed_alignments dictionary, we can write it to a file or fetch the MultipleSeqAlignment object, as shown in section 3.7.
| <ASSISTANT_TASK:>
Python Code:
from reprophylo import *
pj = unpickle_pj('outputs/my_project.pkpj', git=False)
from IPython.display import Image
Image('images/fix_otus.png', width = 400)
pj.correct_metadata_from_file('data/Tetillida_otus_corrected.csv')
concat = Concatenation('large_concat', # Any unique string
pj.loci, # This is a list of Locus objects
'source_otu', # The values of this qualifier
# flag sequences the belong to the same
# sample
otu_must_have_all_of=['MT-CO1'], # All the OTUS must have a cox1 sequence
otu_must_have_one_of=[['18s','28s']], # All the OTUs must have either 18s or 28s or both
define_trimmed_alns=[] # We only have one alignment per gene
# so the list is empty (default value)
)
print concat
pj.add_concatenation(concat)
pj.make_concatenation_alignments()
pickle_pj(pj, 'outputs/my_project.pkpj')
# Design a supermatrix
concat = Concatenation('concat_name', loci_list, 'otu_qualifier', **kwargs)
# Add it to a project
pj.add_concatenation(concat)
# Build supermatrices based on the Concatenation
# objects in pj.concatenations
pj.make_concatenation_alignments()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Target configuration
Step2: Workload execution
Step3: Energy estimation
Step4: Data analysis
Step5: We can see on the above plot that the system-level power consumption decreases over time (on average). This is consistent with the expected behaviour, given the decreasing ramp workload under consideration.
| <ASSISTANT_TASK:>
Python Code:
from conf import LisaLogging
LisaLogging.setup()
# One initial cell for imports
import json
import logging
import os
from env import TestEnv
# Suport for FTrace events parsing and visualization
import trappy
from trappy.ftrace import FTrace
from trace import Trace
# Support for plotting
# Generate plots inline
%matplotlib inline
import numpy
import pandas as pd
import matplotlib.pyplot as plt
from env import TestEnv
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Ramp
# Root path of the gem5 workspace
base = "/home/vagrant/gem5"
conf = {
# Only 'linux' is supported by gem5 for now, 'android' is a WIP
"platform" : 'linux',
# Preload settings for a specific target
"board" : 'gem5',
# Devlib modules to load - "gem5stats" is required to use the power instrument
"modules" : ["cpufreq", "bl", "gem5stats"],
# Host that will run the gem5 instance
"host" : "workstation-lin",
"gem5" : {
# System to simulate
"system" : {
# Platform description
"platform" : {
# Gem5 platform description
# LISA will also look for an optional gem5<platform> board file
# located in the same directory as the description file.
"description" : os.path.join(base, "juno.py"),
"args" : [
"--power-model", # Platform-specific parameter enabling power modelling
"--juno-revision 2",
# Resume simulation from a previous checkpoint
# Checkpoint must be taken before Virtio folders are mounted
#"--checkpoint-indir /data/tmp/Results_LISA/gem5",
#"--checkpoint-resume 1",
]
},
# Kernel compiled for gem5 with Virtio flags
"kernel" : os.path.join(base, "product/", "vmlinux"),
# DTB of the system to simulate
"dtb" : os.path.join(base, "product/", "armv8_juno_r2.dtb"),
# Disk of the distrib to run
"disk" : os.path.join(base, "aarch64-ubuntu-trusty-headless.img")
},
# gem5 settings
"simulator" : {
# Path to gem5 binary
"bin" : os.path.join(base, "gem5/build/ARM/gem5.fast"),
# Args to be given to the binary
"args" : [
# Zilch
],
}
},
# Tools required by the experiments
"tools" : ['trace-cmd', 'sysbench', 'rt-app'],
# Output directory on host
"results_dir" : "gem5_res",
# Energy Meters configuration based on Gem5 stats
"emeter" : {
"instrument" : "gem5",
"conf" : {
# Zilch
},
# Each channel here must refer to a specific **power** field in the stats file.
'channel_map' : {
'Core0S' : 'system.cluster0.cores0.power_model.static_power',
'Core0D' : 'system.cluster0.cores0.power_model.dynamic_power',
'Core1S' : 'system.cluster0.cores1.power_model.static_power',
'Core1D' : 'system.cluster0.cores1.power_model.dynamic_power',
'Core2S' : 'system.cluster0.cores2.power_model.static_power',
'Core2D' : 'system.cluster0.cores2.power_model.dynamic_power',
'Core3S' : 'system.cluster0.cores3.power_model.static_power',
'Core3D' : 'system.cluster0.cores3.power_model.dynamic_power',
'Core4S' : 'system.cluster1.cores0.power_model.static_power',
'Core4D' : 'system.cluster1.cores0.power_model.dynamic_power',
'Core5S' : 'system.cluster1.cores1.power_model.static_power',
'Core5D' : 'system.cluster1.cores1.power_model.dynamic_power',
},
},
}
# This can take a lot of time ...
te = TestEnv(conf, wipe=True)
target = te.target
# Create and RTApp RAMP tasks
rtapp = RTA(target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp1' : Ramp(
start_pct = 95,
end_pct = 5,
delta_pct = 10,
time_s = 0.1).get(),
'ramp2' : Ramp(
start_pct = 90,
end_pct = 30,
delta_pct = 20,
time_s = 0.2).get(),
})
# Start emeters & run workload
te.emeter.reset()
rtapp.run(out_dir=te.res_dir)
nrg_rep = te.emeter.report(te.res_dir)
logging.info("Measured channels energy:")
print json.dumps(nrg_rep.channels, indent=4)
logging.info("DataFrame of collected samples (only first 5)")
nrg_rep.data_frame.head()
# Obtain system level energy by ...
df = nrg_rep.data_frame
# ... summing the dynamic power of all cores to obtain total dynamic power, ...
df["total_dynamic"] = df[('system.cluster0.cores0.power_model.dynamic_power', 'power')] + \
df[('system.cluster0.cores1.power_model.dynamic_power', 'power')] + \
df[('system.cluster0.cores2.power_model.dynamic_power', 'power')] + \
df[('system.cluster0.cores3.power_model.dynamic_power', 'power')] + \
df[('system.cluster1.cores0.power_model.dynamic_power', 'power')] + \
df[('system.cluster1.cores1.power_model.dynamic_power', 'power')]
# ... summing the static power of all cores to obtain total static power and ...
df["total_static"] = df[('system.cluster0.cores0.power_model.static_power', 'power')] + \
df[('system.cluster0.cores1.power_model.static_power', 'power')] + \
df[('system.cluster0.cores2.power_model.static_power', 'power')] + \
df[('system.cluster0.cores3.power_model.static_power', 'power')] + \
df[('system.cluster1.cores0.power_model.static_power', 'power')] + \
df[('system.cluster1.cores1.power_model.static_power', 'power')]
# ... summing the dynamic and static powers
df["total"] = df["total_dynamic"] + df["total_static"]
logging.info("Plot of collected power samples")
axes =df[('total')].plot(figsize=(16,8),
drawstyle='steps-post');
axes.set_title('Power samples');
axes.set_xlabel('Time [s]');
axes.set_ylabel('Output power [W]');
logging.info("Power distribution")
axes = df[('total')].plot(kind='hist', bins=32,
figsize=(16,8));
axes.set_title('Power Histogram');
axes.set_xlabel('Output power [W] buckets');
axes.set_ylabel('Samples per bucket');
logging.info("Plot of collected power samples")
nrg_rep.data_frame.describe(percentiles=[0.90, 0.95, 0.99]).T
# Don't forget to stop Gem5
target.disconnect()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The choice of the distance function (divergence) can be important. In practice, a popular choice is the Euclidean distance, but this is by no means the only one.
Step2: Equal distance contours
Step3: http
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pylab as plt
df = pd.read_csv(u'data/iris.txt',sep=' ')
df
X = np.hstack([
np.matrix(df.sl).T,
np.matrix(df.sw).T,
np.matrix(df.pl).T,
np.matrix(df.pw).T])
print X[:5] # sample view
c = np.matrix(df.c).T
print c[:5]
def Divergence(x,y,p=2.):
e = np.array(x) - np.array(y)
if np.isscalar(p):
return np.sum(np.abs(e)**p)
else:
return np.sum(np.matrix(e)*p*np.matrix(e).T)
Divergence([0,0],[1,1],p=2)
W = np.matrix(np.diag([2,1]))
Divergence([0,0],[1,1],p=W)
W = np.matrix([[2,1],[1,2]])
Divergence([0,0],[1,1],p=W)
%run plot_normballs.py
def nearest(A,x, p=2):
'''A: NxD data matrix, N - number of samples, D - the number of features
x: test vector
returns the index of the nearest neighbor
'''
N = A.shape[0]
d = np.zeros((N,1))
md = np.inf
for i in range(N):
d[i] = Divergence(A[i,:], x, p)
if d[i]<md:
md = d[i]
min_idx = i
return min_idx
def predict(A, c, X, p=2):
L = X.shape[0]
return [np.asscalar(c[nearest(A, X[i,:], p=p)]) for i in range(L)]
x_test = np.mat('[3.3, 2.5,5.5,1.7]')
#d, idx = distance(X, x_test, p=2)
cc = predict(X, c, x_test)
print(cc)
#float(c[idx])
def leave_one_out(A, c, p=2):
N = A.shape[0]
correct = 0
for j in range(N):
md = np.inf
for i in range(N):
if i != j:
d = Divergence(A[i,:], A[j,:], p=p)
if d<md:
md = d
min_idx = i
if c[min_idx] == c[j]:
correct += 1
accuracy = 1.*correct/N
return accuracy
leave_one_out(X, c, p=np.diag([1,1,1,1]))
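# Hedged comparison of a few divergence choices on the same 4-feature data: p=1,
# plain Euclidean p=2, and a Mahalanobis-like matrix built from the inverse
# feature covariance (the last choice is an illustration, not prescribed above).
W_mah = np.matrix(np.linalg.inv(np.cov(np.asarray(X), rowvar=False)))
for name, p in [('p=1', 1.), ('p=2', 2.), ('inverse covariance', W_mah)]:
    print name, leave_one_out(X, c, p=p)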
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
n_neighbors = 3
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] + 0.02*np.random.randn(150,2) # we only take the first two features. We could
# avoid this ugly slicing by using a two-dim dataset
y = iris.target
h = .02 # step size in the mesh
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
weights='uniform'
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X, y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(figsize=(8,8))
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (k = %i, weights = '%s')"
% (n_neighbors, weights))
plt.axis('equal')
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We define all the necessary variables at once
Step2: And we define the variables that depend on another variable; specifically, in this calculation everything above is constant and only $q_1$ is a time-dependent variable
Step3: This time we will do something slightly different: we will automate part of the process of obtaining the manipulator's generalized position. For this we rely on our knowledge of forward kinematics, so we first need to define a DH function that takes a list of parameters, in a specific order, and returns the homogeneous transformation matrix associated with that link
Step4: Once we have the DH function to compute the homogeneous transformation matrices, we proceed to calculate the position of each joint by means of these matrices
Step5: Recall that the position of each joint is obtained from the last column vector of this matrix, so we can
Step6: For the position of the second joint we need to multiply the first two matrices
Step7: Although in this case we can simplify these expressions further
Step8: Having these positions, to obtain the velocity we need to take their derivative
Step9: Once we have the velocity, obtaining its square is easy; for a vector we can say that
Step10: And computing the height and rotational velocity of the link
Step11: Exercise
Step12: If we now compute the kinetic energies, we have
Step13: Exercise
Step14: And computing the potential energies
Step15: Once we have the kinetic and potential energies of each link, we can compute the total kinetic energy and the total potential energy of the manipulator
Step16: With these energies we can compute the Lagrangian
Step17: With the Lagrangian, we can compute the Euler-Lagrange equation for each degree of freedom of the manipulator
| <ASSISTANT_TASK:>
Python Code:
from sympy import var, sin, cos, pi, Matrix, Function, Rational, simplify
from sympy.physics.mechanics import mechanics_printing
mechanics_printing()
var("l1:3")
var("m1:3")
var("J1:3")
var("g t")
q1 = Function("q1")(t)
q2 = Function("q2")(t)
def DH(params):
from sympy import Matrix, sin, cos
a, d, α, θ = params
# One possible implementation (filled in so the code below runs; the original
# notebook left this as an exercise): the standard DH homogeneous transform.
A = Matrix([[cos(θ), -sin(θ)*cos(α), sin(θ)*sin(α), a*cos(θ)],
[sin(θ), cos(θ)*cos(α), -cos(θ)*sin(α), a*sin(θ)],
[0, sin(α), cos(α), d],
[0, 0, 0, 1]])
return A
from nose.tools import assert_equal
from sympy import var, Matrix, sin, cos, eye
var("l1")
q1 = Function("q1")(t)
A = Matrix([[cos(q1), -sin(q1), 0, l1*cos(q1)],
[sin(q1), cos(q1), 0, l1*sin(q1)],
[0, 0, 1, 0],
[0, 0, 0, 1]])
assert_equal(DH([l1, 0, 0, q1]), A)
assert_equal(DH([0, 0, 0, 0]), eye(4))
print("Sin errores")
A1 = DH([l1, 0, 0, q1])
A2 = DH([l2, 0, 0, q2])
A1
p1 = A1[0:3, 3:4]
p1
p2 = (A1*A2)[0:3, 3:4]
p2
p2 = simplify(p2)
p2
v1 = p1.diff("t")
v1
v2 = p2.diff("t")
v2
v1c = (v1.T*v1)[0]
v1c = v1c.simplify()
v1c
v2c = (v2.T*v2)[0]
v2c = v2c.simplify()
v2c
h1, h2 = p1[1], p2[1]
ω1, ω2 = q1.diff(t), q1.diff(t) + q2.diff(t)
def ener_cin(params):
from sympy import Rational
m, v, J, ω = params
# One possible solution to the exercise: translational plus rotational kinetic
# energy, with v given as a column vector (v.T*v is the squared speed).
K = (Rational(1, 2)*m*(v.T*v)[0] + Rational(1, 2)*J*ω**2).simplify()
return K
from nose.tools import assert_equal
from sympy import var, Matrix, sin, cos, Rational
var("m1 J1 l1 ω1")
q1 = Function("q1")(t)
v1 = Matrix([[l1*cos(q1)], [l1*sin(q1)], [0]])
assert_equal(ener_cin([m1, v1, J1, ω1]), Rational(1,2)*m1*l1**2 + Rational(1,2)*J1*ω1**2)
assert_equal(ener_cin([0, Matrix([[0],[0],[0]]), 0, 0]), 0)
print("Sin errores")
h1, h2 = p1[1], p2[1]
ω1, ω2 = q1.diff(t), q1.diff(t) + q2.diff(t)
K1 = ener_cin([m1, v1, J1, ω1])
K1
K2 = ener_cin([m2, v2, J2, ω2])
K2
def ener_pot(params):
m, h = params
# One possible solution to the exercise: gravitational potential energy of a
# point mass at height h, using the global symbol g defined above.
U = m*g*h
return U
from nose.tools import assert_equal
from sympy import var
var("m1 m2 g h1 h2")
assert_equal(ener_pot([m1, h1]), m1*g*h1)
assert_equal(ener_pot([m2, h2]), m2*g*h2)
print("Sin errores")
h1, h2 = p1[1], p2[1]
ω1, ω2 = q1.diff(t), q1.diff(t) + q2.diff(t)
U1 = ener_pot([m1, h1])
U2 = ener_pot([m2, h2])
K = K1 + K2
U = U1 + U2
L = K - U
L
τ1 = (L.diff(q1.diff(t)).diff(t) - L.diff(q1)).expand().collect(q1.diff(t).diff(t)).collect(q2.diff(t).diff(t))
τ1
τ2 = (L.diff(q2.diff(t)).diff(t) - L.diff(q2)).expand().collect(q1.diff(t).diff(t)).collect(q2.diff(t).diff(t))
τ2
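# A possible next step (an addition, not part of the original notebook): collect
# the coefficients of the generalized accelerations to read off the manipulator
# inertia matrix from the two Euler-Lagrange equations above.
q1pp, q2pp = q1.diff(t, 2), q2.diff(t, 2)
M = Matrix([[τ1.coeff(q1pp), τ1.coeff(q2pp)],
            [τ2.coeff(q1pp), τ2.coeff(q2pp)]])
M.applyfunc(simplify)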
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <code>del</code> statement can be used to remove an item from a list given its index
Step2: <code>list()</code>
Step3: Sort a list
Step4: List comprehensive
Step5: Tuples
Step6: Sets
Step7: add item to a set
Step8: check item is in a set
Step9: delete item
Step10: Similarly to list comprehensions, set comprehensions are also supported
Step11: you can loop over the set
Step12: Dictionaries
Step13: delete key in dictionary
Step14: check if a key is in dictionary
Step15: <code>dict.keys()</code>
Step16: dictionary can be construct by calling
Step17: <code>zip(sequence...)</code>
Step18: Loop over a dictionary
Step19: Generator & iterator
Step20: comprehensive syntax like list comprehensive
Step21: Iterator
Step22: More on looping
Step23: <code>dict.iteritems()</code>
| <ASSISTANT_TASK:>
Python Code:
pets = ['dog', 'cat', 'pig']
print pets.index('cat')
pets.insert(0, 'rabbit')
print pets
pets.pop(1)
print pets
a = range(10)
print a
del a[2]
print a
print a[:3]
del a[:3]
print a
print list('i can eat glass')
print sorted([2, 3, 1], reverse=True)
a = [2, 3, 1]
print a.sort(reverse=True)
print a
print sorted([
['peter', 23],
['john', 30],
['tom', 18]
], key=lambda x: x[1])
squares = []
for x in range(10):
squares.append(x**2)
print squares
print [x**2 for x in range(10)]
array = []
for x in [1,2,3]:
for y in [1, 2, 3]:
if x != y:
array.append((x, y))
print array
print [(x, y) for x in [1,2,3] for y in [1,2,3] if x != y]
t = (1, 2, 3, 4, 5)
print t
tuple([1,2,3])
# changing the tuple raises an exception
t[0] = 5
letters = {'a', 'b', 'c', 'a'}
print letters
print set(['a', 'b', 'c', 'a'])
s = set(['a', 'b'])
s.add('c')
print s
pets = { 'dog', 'cat', 'pig' }
pets.add('dog')
print pets
pets.add('fish')
print pets
print 'fish' in pets
print 'lion' in pets
pets.remove('fish')
print pets
letters = {x for x in 'i can eat glass'}
print letters
for c in set('i can eat glass'):
print c,
{'a', 'b'}
tel = {'jack': 4098, 'sape': 4139}
tel['guido'] = 4127
print tel
tel['vu'] = 4910
print tel
print tel['jack']
del tel['guido']
print tel
print 'sape' in tel
print 'foo' in tel
tel = {'sape': 4139, 'jack': 4098, 'guido': 4127}
print tel.keys()
print tel.values()
print dict([('sape', 4139), ('jack', 4098), ('guido', 4127)])
zip([1, 2, 3], 'abc', 'ABC')
print dict(zip('abc', [1, 2, 3]))
for name in tel:
print name, ':', tel[name]
tel.values()
for telno in tel.values():
print telno
def firstn(n):
i = 0
while i < n:
yield i
i += 1
gen = firstn(10)
print range(50)
print firstn(50)
for i in range(5):
print i,
print '\n--------------------'
for i in firstn(5):
print i,
for i in (x ** 2 for x in range(10)):
print i,
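# (Added note) A generator expression can be consumed only once; materialize it with
# list() if you need to iterate over the values more than once.
gen = (x ** 2 for x in range(5))
print list(gen)    # [0, 1, 4, 9, 16]
print list(gen)    # [] - the generator is already exhausted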
for i in xrange(10):
print i,
list(enumerate(['dog', 'cat', 'pig']))
print list(enumerate(['dog', 'cat', 'pig']))
print list(enumerate(['dog', 'cat', 'pig'], start=2))
for value in enumerate(['dog', 'cat', 'pig']):
print value
for index, value in enumerate(['dog', 'cat', 'pig']):
print index, ':', value
print tel
print list(tel.iteritems())
for name, telno in tel.iteritems():
print name, ':', telno
for key in tel.iterkeys():
print key
import os
os.listdir('.')
[file_name for file_name in os.listdir('.') if file_name.endswith('.pyc')]
filter(lambda file_name: file_name.endswith('.pyc'), os.listdir('.'))
os.remove('./main.pyc')
[os.remove(file_name) for file_name in os.listdir('.') if file_name.endswith('.pyc')]
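# (Added note) A comprehension used only for its side effects builds a throwaway list;
# a plain loop expresses the same clean-up more clearly.
for file_name in os.listdir('.'):
    if file_name.endswith('.pyc'):
        os.remove(file_name)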
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import pandas and several display and plotting options
Step2: If you already know the series ID you want (say by searching on the FRED website), you can fetch data easily into a pandas Series
Step3: You can also easily fetch the meta data about any series
Step4: You can also get a set of series IDs programmatically by release or category IDs. Several sorting options are also available. On the FRED website I know that the release ID 175 contains some personal income data. Let's fetch 5 most popular series in that set.
Step5: You can also search by categories. On the FRED website I see that category 101 contains data about Consumer Credit.
Step6: As a example let's fetch the personal income data. Release 151 looks quite intersting
Step7: I noticed that the data is mostly organized by state, except for a few that are by BEA region. We can use pandas to easily select the seires we want
Step8: looks good, we got the data series for all 50 states here
| <ASSISTANT_TASK:>
Python Code:
from fredapi import Fred
fred = Fred()
import pandas as pd
pd.options.display.max_colwidth = 60
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.core.pylabtools import figsize
figsize(20, 5)
s = fred.get_series('SP500', observation_start='2014-09-02', observation_end='2014-09-05')
s.tail()
s = fred.get_series('SP500', observation_start='1/31/2014')
s.tail()
info = fred.get_series_info('PAYEMS')
info['title']
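# (Added aside) The metadata is returned as a pandas Series, so other FRED fields
# such as 'frequency' and 'units' can be read the same way.
info['frequency'], info['units']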
personal_income_series = fred.search_by_release(175, limit=5, order_by='popularity', sort_order='desc')
personal_income_series['title']
df = {}
df['SF'] = fred.get_series('PCPI06075')
df['NY'] = fred.get_series('PCPI36061')
df['DC'] = fred.get_series('PCPI11001')
df = pd.DataFrame(df)
df.plot()
df = fred.search_by_category(101, limit=10, order_by='popularity', sort_order='desc')
df['title']
df = fred.search_by_release(151)
df['title'].head(10)
state_df = df[~df['title'].str.startswith('Per Capita Personal Income in the')]
len(state_df)
state_df.id.str[:2]
income_by_state = {}
for series_id in state_df.index:
income_by_state[series_id[:2]] = fred.get_series(series_id)
income_by_state = pd.DataFrame(income_by_state)
income_by_state.iloc[-1].plot(kind='bar')
plt.title('Per Capita Personal Income by State')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: For each row in the file, you need to make sure all the constraints are matching the desired ones. If so, keep count of the BMI group using a dictionary.
Step3: Write a function that takes as input the constraints (as above), and a bacterial "genus". The function returns the average abundance (in logarithm base 10) of the genus for each group of BMI in the sub-population. For example, calling
Step4: Repeat this analysis for all genera, and for the records having Time = 0.
Step5: Testing
Step6: Now use this function to print the results for all genera at Time = 0
| <ASSISTANT_TASK:>
Python Code:
import csv # Import csv module for reading the file
def get_BMI_count(dict_constraints):
    """Take as input a dictionary of constraints,
    for example, {'Age': '28', 'Sex': 'female'},
    and return the count of the various groups of BMI."""
# We use a dictionary to store the results
BMI_count = {}
# Open the file, build a csv DictReader
with open('../data/Lahti2014/Metadata.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
# For each row
for row in csvr:
# check that all conditions are met
matching = True
for e in dict_constraints:
if row[e] != dict_constraints[e]:
# The constraint is not met. Move to the next record
matching = False
break
# matching is True only if all the constraints have been met
if matching == True:
# extract the BMI_group
my_BMI = row['BMI_group']
BMI_count[my_BMI] = BMI_count.get(my_BMI, 0) + 1
return BMI_count
get_BMI_count({'Nationality': 'US', 'Sex': 'female'})
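# (Added example) The raw counts are easy to turn into relative frequencies:
counts = get_BMI_count({'Nationality': 'US', 'Sex': 'female'})
total = sum(counts.values())
{group: round(n / total, 3) for group, n in counts.items()}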
import scipy # For log10
def get_abundance_by_BMI(dict_constraints, genus = 'Aerococcus'):
# We use a dictionary to store the results
BMI_IDs = {}
# Open the file, build a csv DictReader
with open('../data/Lahti2014/Metadata.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
# For each row
for row in csvr:
# check that all conditions are met
matching = True
for e in dict_constraints:
if row[e] != dict_constraints[e]:
# The constraint is not met. Move to the next record
matching = False
break
# matching is True only if all the constraints have been met
if matching == True:
# extract the BMI_group
my_BMI = row['BMI_group']
if my_BMI in BMI_IDs.keys():
# If we've seen it before, add the SampleID
BMI_IDs[my_BMI] = BMI_IDs[my_BMI] + [row['SampleID']]
else:
# If not, initialize
BMI_IDs[my_BMI] = [row['SampleID']]
# Now let's open the other file, and keep track of the abundance of the genus for each
# BMI group
abundance = {}
with open('../data/Lahti2014/HITChip.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
# For each row
for row in csvr:
# check whether we need this SampleID
matching = False
for g in BMI_IDs:
if row['SampleID'] in BMI_IDs[g]:
if g in abundance.keys():
abundance[g][0] = abundance[g][0] + float(row[genus])
abundance[g][1] = abundance[g][1] + 1
else:
abundance[g] = [float(row[genus]), 1]
# we have found it, so move on
break
# Finally, calculate means, and print results
print("____________________________________________________________________")
print("Abundance of " + genus + " In sub-population:")
print("____________________________________________________________________")
for key, value in dict_constraints.items():
print(key, "->", value)
print("____________________________________________________________________")
for ab in ['NA', 'underweight', 'lean', 'overweight',
'obese', 'severeobese', 'morbidobese']:
if ab in abundance.keys():
abundance[ab][0] = scipy.log10(abundance[ab][0] / abundance[ab][1])
print(round(abundance[ab][0], 2), '\t', ab)
print("____________________________________________________________________")
print("")
get_abundance_by_BMI({'Time': '0', 'Nationality': 'US'},
'Clostridium difficile et rel.')
def get_all_genera():
with open('../data/Lahti2014/HITChip.tab') as f:
header = f.readline().strip()
genera = header.split('\t')[1:]
return genera
get_all_genera()[:6]
for g in get_all_genera()[:5]:
get_abundance_by_BMI({'Time': '0'}, g)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A variant with squared features
Step2: A variant with scaled data
Step3: The previous notebook examined the price distribution. It is rather unpleasant: the density is very high at the start of the ordered series and quickly drops to very small values. With data like this it makes sense to take the logarithm of the column and work with the log values. Before computing the model error, remember to exponentiate the predictions back.
Step4: Combinations of the variants above.
Step5: For each variant we build a model, train it, and compare the errors.
Step6: Over several runs I noticed that although the model errors change, on average the models barely shift relative to one another.
Step7: On average the RMSE was slightly lower. The same models remain better than the rest (this was consistently visible).
Step8: If the runs are averaged, the regularized models perform better, Ridge in particular. Across all experiments, the model on the sq_log data looks best.
| <ASSISTANT_TASK:>
Python Code:
data.drop(['Bal_na', 'Distr_N', 'Brick_na'], axis = 1, inplace = True)
data_sq = data.copy()
squared_columns = ['Distance', 'Kitsp', 'Livsp', 'Totsp', 'Metrokm']
squared_columns_new = ['Distance_sq', 'Kitsp_sq', 'Livsp_sq', 'Totsp_sq', 'Metrokm_sq']
for i in range(len(squared_columns)):
data_sq[squared_columns_new[i]] = [x**2 for x in data_sq[squared_columns[i]]]
data_sq.head(3)
from sklearn.preprocessing import scale
to_scale = ['Distance', 'Floor', 'Kitsp', 'Livsp', 'Nfloors', 'Totsp', 'Metrokm']
data_sc = data.copy()
data_sc[to_scale] = scale(data_sc[to_scale], axis = 1)
data_sc.head(3)
data_log = data.copy()
data_log['Price'] = np.log(data_log['Price'])
data_log['Price'].head()
data_sc_log = data.drop('Price', axis = 1)
data_sc_log[to_scale] = scale(data_sc_log[to_scale], axis = 1)
data_sc_log['Price'] = np.log(data['Price'])
data_sc_log.head(3)
data_sq_log = data.drop('Price', axis = 1)
for i in range(len(squared_columns)):
data_sq_log[squared_columns_new[i]] = [x**2 for x in data_sq_log[squared_columns[i]]]
data_sq_log['Price'] = np.log(data['Price'])
data_sq_log.head(3)
data_sc_sq_log = data.drop('Price', axis = 1)
data_sc_sq_log[to_scale] = scale(data_sc_sq_log[to_scale], axis = 1)
for i in range(len(squared_columns)):
data_sc_sq_log[squared_columns_new[i]] = [x**2 for x in data_sc_sq_log[squared_columns[i]]]
data_sc_sq_log['Price'] = np.log(data['Price'])
data_sc_sq_log.head(3)
data_sq_sc_log = data.drop('Price', axis = 1)
for i in range(len(squared_columns)):
data_sq_sc_log[squared_columns_new[i]] = [x**2 for x in data_sq_sc_log[squared_columns[i]]]
data_sq_sc_log[to_scale] = scale(data_sq_sc_log[to_scale], axis = 1)
data_sq_sc_log['Price'] = np.log(data['Price'])
data.head(3)
datasets = [data, data_sq, data_sc, data_log, data_sc_log, data_sq_log, data_sc_sq_log, data_sq_sc_log]
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from math import sqrt
simple = LinearRegression()
squared = LinearRegression()
scaled = LinearRegression()
log = LinearRegression()
sc_log = LinearRegression()
sq_log = LinearRegression()
sc_sq_log = LinearRegression()
sq_sc_log = LinearRegression()
models = [
('simple', simple),
('squared', squared),
('scaled', scaled),
('log', log),
('sc_log', sc_log),
('sq_log', sq_log),
('sc_sq_log', sc_sq_log),
('sq_sc_log', sq_sc_log)
]
for i in range(len(models)):
tmp = datasets[i]
train = tmp.drop('Price', axis = 1)
target = tmp['Price']
target = pd.DataFrame(target)
Xtr, Xval, Ytr, Yval = train_test_split(train, target, test_size=0.2)
modelname, model = models[i]
model.fit(Xtr, Ytr);
predictions = model.predict(Xval)
if (modelname.find('log', 0, len(modelname)) < 0):
rmse = sqrt(mean_squared_error(Yval, predictions))
mae = mean_absolute_error(Yval, predictions)
print('Model:', modelname, ' RMSE: ', rmse, ' MAE: ', mae)
else:
rmse = sqrt(mean_squared_error(np.exp(Yval), np.exp(predictions)))
mae = mean_absolute_error(np.exp(Yval), np.exp(predictions))
print('Model:', modelname, ' RMSE: ', rmse, ' MAE: ', mae)
from sklearn.linear_model import Lasso
simple = Lasso(alpha = 0.01, max_iter = 100000, tol = 0.01)
squared = Lasso(alpha = 0.01, max_iter = 100000, tol = 0.01)
scaled = Lasso(alpha = 0.01, max_iter = 100000, tol = 0.01)
log = Lasso(alpha = 0.01, max_iter = 100000, tol = 0.01)
sc_log = Lasso(alpha = 0.01, max_iter = 100000, tol = 0.01)
sq_log = Lasso(alpha = 0.01, max_iter = 100000, tol = 0.01)
sc_sq_log = Lasso(alpha = 0.01, max_iter = 100000, tol = 0.01)
sq_sc_log = Lasso(alpha = 0.01, max_iter = 100000, tol = 0.01)
models = [
('simple', simple),
('squared', squared),
('scaled', scaled),
('log', log),
('sc_log', sc_log),
('sq_log', sq_log),
('sc_sq_log', sc_sq_log),
('sq_sc_log', sq_sc_log)
]
for i in range(len(models)):
tmp = datasets[i]
train = tmp.drop('Price', axis = 1)
target = tmp['Price']
target = pd.DataFrame(target)
Xtr, Xval, Ytr, Yval = train_test_split(train, target, test_size=0.2)
modelname, model = models[i]
model.fit(Xtr, Ytr);
predictions = model.predict(Xval)
coef = model.coef_
zeros = []
for j in range(len(Xtr.columns)):
if coef[j] == 0:
zeros.append(Xtr.columns[j])
if (modelname.find('log', 0, len(modelname)) < 0):
rmse = sqrt(mean_squared_error(Yval, predictions))
mae = mean_absolute_error(Yval, predictions)
print('Model:', modelname, ' RMSE: ', rmse, ' MAE: ', mae, ' Coef: ', zeros)
else:
rmse = sqrt(mean_squared_error(np.exp(Yval), np.exp(predictions)))
mae = mean_absolute_error(np.exp(Yval), np.exp(predictions))
print('Model:', modelname, ' RMSE: ', rmse, ' MAE: ', mae, ' Coef: ', zeros)
from sklearn.linear_model import Ridge
simple = Ridge(alpha = .01, max_iter = 10000)
squared = Ridge(alpha = .01, max_iter = 10000)
scaled = Ridge(alpha = .01, max_iter = 10000)
log = Ridge(alpha = .01, max_iter = 10000)
sc_log = Ridge(alpha = .01, max_iter = 10000)
sq_log = Ridge(alpha = .01, max_iter = 10000)
sc_sq_log = Ridge(alpha = .01, max_iter = 10000)
sq_sc_log = Ridge(alpha = .01, max_iter = 10000)
models = [
('simple', simple),
('squared', squared),
('scaled', scaled),
('log', log),
('sc_log', sc_log),
('sq_log', sq_log),
('sc_sq_log', sc_sq_log),
('sq_sc_log', sq_sc_log)
]
for i in range(len(models)):
tmp = datasets[i]
train = tmp.drop('Price', axis = 1)
target = tmp['Price']
target = pd.DataFrame(target)
Xtr, Xval, Ytr, Yval = train_test_split(train, target, test_size=0.2)
modelname, model = models[i]
model.fit(Xtr, Ytr);
predictions = model.predict(Xval)
if (modelname.find('log', 0, len(modelname)) < 0):
rmse = sqrt(mean_squared_error(Yval, predictions))
mae = mean_absolute_error(Yval, predictions)
print('Model:', modelname, ' RMSE: ', rmse, ' MAE: ', mae)
else:
rmse = sqrt(mean_squared_error(np.exp(Yval), np.exp(predictions)))
mae = mean_absolute_error(np.exp(Yval), np.exp(predictions))
print('Model:', modelname, ' RMSE: ', rmse, ' MAE: ', mae)
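# (Added sketch, not part of the original experiments.) A single random split is noisy, so a
# fairer comparison averages RMSE over several splits; the helper below and the choice of
# dataset are illustrative assumptions.
def mean_rmse(make_model, df, n_runs=5):
    errors = []
    for seed in range(n_runs):
        train = df.drop('Price', axis=1)
        target = df['Price']
        Xtr, Xval, Ytr, Yval = train_test_split(train, target, test_size=0.2, random_state=seed)
        model = make_model()
        model.fit(Xtr, Ytr)
        errors.append(sqrt(mean_squared_error(Yval, model.predict(Xval))))
    return np.mean(errors)

print(mean_rmse(lambda: Ridge(alpha=.01, max_iter=10000), data_sq))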
data_rf = pd.read_csv('/Users/tatanakuzenko/cian_data_clear_for_modeling.csv')
data_sq_rf = data_rf
for i in range(len(squared_columns_new)):
data_sq_rf[squared_columns_new[i]] = 0
for i in range(len(squared_columns)):
data_sq_rf[squared_columns_new[i]] = [x**2 for x in data_sq_rf[squared_columns[i]]]
data_sc_rf = data.copy()
data_sc_rf[to_scale] = scale(data_sc_rf[to_scale], axis = 1)
data_log_rf = data.copy()
data_log_rf['Price'] = np.log(data_log_rf['Price'])
data_sq_log_rf = data.drop('Price', axis = 1)
for i in range(len(squared_columns)):
data_sq_log_rf[squared_columns_new[i]] = [x**2 for x in data_sq_log_rf[squared_columns[i]]]
data_sq_log_rf['Price'] = np.log(data['Price'])
data_sc_log_rf = data.drop('Price', axis = 1)
data_sc_log_rf[to_scale] = scale(data_sc_log_rf[to_scale], axis = 1)
data_sc_log_rf['Price'] = np.log(data['Price'])
data_sc_sq_log_rf = data.drop('Price', axis = 1)
data_sc_sq_log_rf[to_scale] = scale(data_sc_sq_log_rf[to_scale], axis = 1)
for i in range(len(squared_columns)):
data_sc_sq_log_rf[squared_columns_new[i]] = [x**2 for x in data_sc_sq_log_rf[squared_columns[i]]]
data_sc_sq_log_rf['Price'] = np.log(data['Price'])
data_sq_sc_log_rf = data.drop('Price', axis = 1)
for i in range(len(squared_columns)):
data_sq_sc_log_rf[squared_columns_new[i]] = [x**2 for x in data_sq_sc_log_rf[squared_columns[i]]]
data_sq_sc_log_rf[to_scale] = scale(data_sq_sc_log_rf[to_scale], axis = 1)
data_sq_sc_log_rf['Price'] = np.log(data['Price'])
datasets_rf = [data_rf, data_sq_rf, data_sc_rf, data_log_rf, data_sc_log_rf, data_sq_log_rf, data_sc_sq_log_rf, data_sq_sc_log_rf]
from sklearn.ensemble import RandomForestRegressor
simple = RandomForestRegressor(n_estimators=60, criterion='mse', max_depth=15)
squared = RandomForestRegressor(n_estimators=60, criterion='mse', max_depth=15)
scaled = RandomForestRegressor(n_estimators=60, criterion='mse', max_depth=15)
log = RandomForestRegressor(n_estimators=60, criterion='mse', max_depth=15)
sc_log = RandomForestRegressor(n_estimators=60, criterion='mse', max_depth=15)
sq_log = RandomForestRegressor(n_estimators=60, criterion='mse', max_depth=15)
sc_sq_log = RandomForestRegressor(n_estimators=60, criterion='mse', max_depth=15)
sq_sc_log = RandomForestRegressor(n_estimators=60, criterion='mse', max_depth=15)
models = [
('simple', simple),
('squared', squared),
('scaled', scaled),
('log', log),
('sc_log', sc_log),
('sq_log', sq_log),
('sc_sq_log', sc_sq_log),
('sq_sc_log', sq_sc_log)
]
for i in range(len(models)):
    tmp = datasets_rf[i]
train = tmp.drop('Price', axis = 1)
target = tmp['Price']
Xtr, Xval, Ytr, Yval = train_test_split(train, target, test_size=0.2)
modelname, model = models[i]
model.fit(Xtr, Ytr);
predictions = model.predict(Xval)
if (modelname.find('log', 0, len(modelname)) < 0):
rmse = sqrt(mean_squared_error(Yval, predictions))
mae = mean_absolute_error(Yval, predictions)
print('Model:', modelname, ' RMSE: ', rmse, ' MAE: ', mae)
else:
rmse = sqrt(mean_squared_error(np.exp(Yval), np.exp(predictions)))
mae = mean_absolute_error(np.exp(Yval), np.exp(predictions))
print('Model:', modelname, ' RMSE: ', rmse, ' MAE: ', mae)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Group Travel Optimization
Step4: This will print a line containing each person’s name and origin, as well as the depar- ture time, arrival time, and price for the outgoing and return flights
Step6: Observation
Step8: Observation
| <ASSISTANT_TASK:>
Python Code:
import time
import random
import math
people = [('Seymour','BOS'),
('Franny','DAL'),
('Zooey','CAK'),
('Walt','MIA'),
('Buddy','ORD'),
('Les','OMA')]
# LaGuardia airport in New York
destination='LGA'
# Load this data into a dictionary with the origin and destination (dest) as the keys
# and a list of potential flight details as the values.
flights={}
#
for line in file('schedule.txt'):
origin,dest,depart,arrive,price=line.strip( ).split(',')
flights.setdefault((origin,dest),[])
# Add details to the list of possible flights
flights[(origin,dest)].append((depart,arrive,int(price)))
# Calculates how many minutes into the day a given time is. This makes it easy to calculate
# flight times and waiting times.
def getminutes(t):
x=time.strptime(t,'%H:%M')
return x[3]*60+x[4]
# Routine that prints all the flights that people decide to take in a nice table.
def printschedule(r):
for d in range(len(r)/2):
name=people[d][0]
origin=people[d][1]
out=flights[(origin,destination)][r[d]]
ret=flights[(destination,origin)][r[d+1]]
print '%10s%10s %5s-%5s $%3s %5s-%5s $%3s' % (name,origin,
out[0],out[1],out[2],
ret[0],ret[1],ret[2])
s=[1,4,3,2,7,3,6,3,2,4,5,3]
printschedule(s)
# This function takes into account the total cost of the trip and the total time spent waiting at
# airports for the various members of the family. It also adds a penalty of $50 if the car is
# returned at a later time of the day than when it was rented.
def schedulecost(sol):
totalprice=0
latestarrival=0
earliestdep=24*60
for d in range(len(sol)/2):
# Get the inbound and outbound flights
origin=people[d][1]
outbound=flights[(origin,destination)][int(sol[d])]
returnf=flights[(destination,origin)][int(sol[d+1])]
# Total price is the price of all outbound and return flights
totalprice+=outbound[2]
totalprice+=returnf[2]
# Track the latest arrival and earliest departure
if latestarrival<getminutes(outbound[1]): latestarrival=getminutes(outbound[1])
if earliestdep>getminutes(returnf[0]): earliestdep=getminutes(returnf[0])
# Every person must wait at the airport until the latest person arrives.
# They also must arrive at the same time and wait for their flights.
totalwait=0
for d in range(len(sol)/2):
origin=people[d][1]
outbound=flights[(origin,destination)][int(sol[d])]
returnf=flights[(destination,origin)][int(sol[d+1])]
totalwait+=latestarrival-getminutes(outbound[1])
totalwait+=getminutes(returnf[0])-earliestdep
# Does this solution require an extra day of car rental? That'll be $50!
if latestarrival>earliestdep: totalprice+=50
return totalprice+totalwait
#Print schedule cost
schedulecost(s)
# The function takes a couple of parameters. Domain is a list of 2-tuples that specify the minimum
# and maximum values for each variable. The length of the solution is the same as the length of this
# list. In the current example, there are nine outbound flights and nine inbound flights for every
# person, so the domain in the list is (0,8) repeated twice for each person.
# The second parameter, costf, is the cost function, which in this example will be schedulecost.
# This is passed as a parameter so that the function can be reused for other optimization problems.
# This function randomly generates 1,000 guesses and calls costf on them. It keeps track of the best
# guess (the one with the lowest cost) and returns it.
def randomoptimize(domain,costf):
best=999999999
bestr=None
for i in range(1000):
# Create a random solution
r=[random.randint(domain[i][0],domain[i][1])
for i in range(len(domain))]
# Get the cost
cost=costf(r)
# Compare it to the best one so far
if cost<best:
best=cost
bestr=r
    return bestr
#Let's try 1000 guesses. 1,000 guesses is a very small fraction of the total number of possibilities.
domain=[(0,8)]*(len(people)*2)
s=randomoptimize(domain,schedulecost)
schedulecost(s)
printschedule(s)
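# (Added illustration) Random search is noisy, so repeating it and keeping the cheapest
# schedule found gives a slightly better baseline.
bestcost=None
bestsol=None
for i in range(5):
    cand=randomoptimize(domain,schedulecost)
    candcost=schedulecost(cand)
    if bestcost is None or candcost<bestcost:
        bestcost,bestsol=candcost,cand
print 'Best cost over 5 runs:',bestcost
printschedule(bestsol)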
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: And some more specialized dependencies
Step2: Configuration for this figure.
Step3: Open a chest located on a remote globus endpoint and load a remote json configuration file.
Step4: We want to grab all the data for the selected frame.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10.0, 16.0)
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import interp1d, InterpolatedUnivariateSpline
from scipy.optimize import bisect
import json
from functools import partial
class Foo: pass
from chest import Chest
from slict import CachedSlict
from glopen import glopen, glopen_many
config = Foo()
config.name = "HighAspect/HA_visc/HA_visc"
#config.arch_end = "alcf#dtn_mira/projects/alpha-nek"
config.arch_end = "maxhutch#alpha-admin/pub/"
config.frame = 1
config.lower = .25
config.upper = .75
c = Chest(path = "{:s}-results".format(config.name),
open = partial(glopen, endpoint=config.arch_end),
open_many = partial(glopen_many, endpoint=config.arch_end))
sc = CachedSlict(c)
with glopen(
"{:s}.json".format(config.name), mode='r',
endpoint = config.arch_end,
) as f:
params = json.load(f)
T = sc[:,'H'].keys()[config.frame]
frame = sc[T,:]
c.prefetch(frame.full_keys())
import yt
#test = frame['t_yz'] + 1.
test = np.tile(frame['t_yz'].transpose(),(1,1,1)).transpose() + 1.
data = dict(
density = (test, "g/cm**3")
)
bbox = np.array([[params['root_mesh'][1], params['extent_mesh'][1]],
[params['root_mesh'][2], params['extent_mesh'][2]],
[0., 1.]])
#bbox = np.array([[params['root_mesh'][1], params['extent_mesh'][1]],
# [params['root_mesh'][2], params['extent_mesh'][2]]])
ds = yt.load_uniform_grid(data, test.shape, bbox=bbox, periodicity=(False,True,False), length_unit="m")
slc = yt.SlicePlot(ds, "z", "density",
width=(1,16))
slc.set_buff_size((14336,448))
#slc.pan((.25,7))
slc.show()
sl = ds.slice("z", 0).to_frb((1., 'm'), (128,128), height=(32.,'m'))
plt.imshow(sl['density'].d)
plt.show()
plt.figure()
plt.imshow(test[:,7000:7500,0].transpose())
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NumPy provides various functions for creating common arrays
Step2: Array operations
Step3: But could that be done with lists? Yes but the syntax is not as nice.
Step4: NumPy provides many mathematical functions which operate on arrays.
Step5: Multiple dimensions
Step6: Indexing and Slicing
Step7: Multidimentional arrays can also be sliced. A comma seperates the dimensions.
Step8: Logical indexing
Step9: Masked Arrays
| <ASSISTANT_TASK:>
Python Code:
import numpy as np # standard import abbreviation
a = np.array([1, 2, 3]) # a NumPy array of three integers
a
a.shape # tuple representing the size of each dimension
a.ndim # number of dimensions
a.dtype # Data type information
b = np.array([1., 2., 3., 4.]) # a NumPy array of four floats
b.shape
b.dtype
a = np.arange(10) # integer values from 0 up to (but not including) 10
print(a)
a = np.arange(1, 10, 2, dtype='float32')
print(a)
print(a.dtype)
a = np.linspace(0, 10, 5) # 5 linearly spaced entries from 0 to 10
print(a)
a = np.array([1, 2, 3])
b = np.array([6, 7, 8])
a + b
a * b
a = [1, 2, 3]
b = [6, 7, 8]
c = [i+j for i, j in zip(a, b)]
print(c)
a = np.linspace(-np.pi, np.pi, 10)
np.sin(a)
np.cos(a)
a = np.arange(12).reshape(3, 4) # create a 2 dimensional array with dimensions of 3 and 4
print(a)
a.ndim
a.shape
2 * a
a = np.arange(10)
a[3]
a[2:-2]
a[1::2]
b = np.arange(12).reshape(3, 4)
print(b)
b[1, 2]
b[2] # select an entire row (the row at index 2)
b[1:3, :3] # slices are also allowed
b[:, 2] # select an entire column (the column at index 2)
# ... (ellipsis) will replace one or more dimensions
b[..., 2]
a = np.arange(5)
selection = np.array([True, False, False, True, True])
a[selection]
a[a>2]
a = np.ma.arange(12).reshape(3, 4)
print(a)
a[2,2] = np.ma.masked
print(a)
b = a * 2
print(b)
# logical masking
a[a > 6] = np.ma.masked
print(a)
# unmasked an element
a[-1, -1] = 42
print(a)
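# filled() returns a plain ndarray with every masked entry replaced by the given fill value
print(a.filled(-1))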
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question
Step2: Passing values to functions
Step3: Conclusion
Step4: Initialization of variables within function definition
Step5: * operator
| <ASSISTANT_TASK:>
Python Code:
#Example_1: return keyword
def straight_line(slope,intercept,x):
"Computes straight line y value"
y = slope*x + intercept
return y
print("y =",straight_line(1,0,5)) #Actual Parameters
print("y =",straight_line(0,3,10))
#By default, arguments have a positional behaviour
#Each of the parameters here is called a formal parameter
#Example_2
def straight_line(slope,intercept,x):
y = slope*x + intercept
print(y)
straight_line(1,0,5)
straight_line(0,3,10)
#By default, arguments have a positional behaviour
#Functions can have no inputs or return.
straight_line(x=2,intercept=7,slope=3)
list_zeroes=[0 for x in range(0,5)]
print(list_zeroes)
def case1(list1):
list1[1]=1
print(list1)
case1(list_zeroes)
print(list_zeroes)
#Passing variables to a function
list_zeroes=[0 for x in range(0,5)]
print(list_zeroes)
def case2(list1):
list1=[2,3,4,5,6]
print(list1)
case2(list_zeroes)
print(list_zeroes)
def calculator(num1,num2,operator='+'):
if (operator == '+'):
result = num1 + num2
elif(operator == '-'):
result = num1 - num2
return result
n1=int(input("Enter value 1: "))
n2=int(input("Enter value 2: "))
v_1 = calculator(n1,n2)
print(v_1)
v_2 = calculator(n1,n2,'-')
print(v_2)
# Here, the function main is termed as the caller function, and the function
# calculator is termed as the called function
# The operator parameter here is called a keyword-argument
def f(a, L=[]):
L.append(a)
return L
print(f(1))
print(f(2))
print(f(3))
# Caution ! The list L[] was initialised only once.
#The paramter initialization to the default value happens at function definition and not at function call.
def sum(*values):
s = 0
for v in values:
s = s + v
return s
s = sum(1, 2, 3, 4, 5)
print(s)
def get_a(**values):
return values['a']
s = get_a(a=1, b=2) # returns 1
print(s)
def sum(*values, **options):
s = 0
for i in values:
s = s + i
if "neg" in options:
if options["neg"]:
s = -s
return s
s = sum(1, 2, 3, 4, 5) # returns 15
print(s)
s = sum(1, 2, 3, 4, 5, neg=True) # returns -15
print(s)
s = sum(1, 2, 3, 4, 5, neg=False) # returns 15
print(s)
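# (Added example) Existing sequences and dicts can be unpacked into such a function
# at the call site with * and **
values = (1, 2, 3)
options = {"neg": True}
s = sum(*values, **options) # returns -6
print(s)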
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: We then create a function to read in our dataset and clean it, pruning specifically the columns that we care about.
Step3: We then create our cleaned dataset.
Step4: To get a sense of what we're dealing with, we directly plotted all foods on a log-log scale using a scatter plot, with the Carbs and Proteins as the axes and Fat as the radius of the dots. At first glance, it appears that the foods aren't particularly differentiated by nutrient content.
Step6: We created a function that could take in a category and plot a 3D scatter plot of the data, thus giving a modular function to use going into the rest of the code.
Step7: We now begin creating labels for the different food groups. Here we add a new column to hold the category given to the food by the Food Pyramid. We then create a new dataframe with only these categories for use in the rest of the code. The specific labels given are based on later knowledge of which K-Means groups matched the best with the Food Pyramid groups based on nutrient content.
Step8: We now create a K-Means object and run it on the food data with only the macronutrients as criteria. We include sugar because we believe that sugar is also a very useful metric both in categorizing food and in helping people make decisions about diet.
Step9: Below is the 3D scatter plot showing all the clusters from the K-Means algorithm.
Step10: We now separate out different categories for analysis.
Step11: We the make another column that holds the correct guess for each food, in other words whether that food based on its K-Means group was placed in the same group as the one given to it from the Food Pyramid.
Step12: We took all the categories from K-Means and displayed each cluster's average nutrient content, which told us what group was most simiar to its corresponding one from the Food Pyramid.
Step13: Following are two examples, plotted on the 3d log scale. For the rest of the comparisons, see the bottom of the journal. This is a comparison of the two meat groups, one given by the Food Pyramid, and one proposed by the K-Means algorithm.
Step14: This one is the same as above, except it shows a comparison between the two for the fruit group.
Step15: We then generate an accuracy score using sklearn, which evaluates whether a particular food in K-Means was categorized in the same group as in the Food Pyramid across all foods. The result, 57%, is barely above half.
Step16: To gain a better understanding of what's happening, we decided to create a confusion matrix to view all the different individual accuracies between clusters. Below shows all of the clusters and where the foods ended up being grouped. Note that a perfect match would mean that the main diagonal of the matrix would be entirely black.
Step18: We then created another function that directly compares the most similar food groups between K-Means and the Food Pyramid. This modular function compares two labels against each other, and as stated above, the labels were created with later knowledge of the most similar groupings.
Step19: Below are the 3D scatter plots on log-log-log scale for the 6 different food groups from the Food Pyramid with their most similar cluster from the K-Means algorithm. Note that while some are pretty coincident, others are completely off.
Step20: We took the foods from the Food Pyramid and distributed them across the different clusters from K-Means, to really explicitly see which where the different foods lie. It would appear that the groupings from the Food Pyramid are completely scattered among the different clusters from K-Means, showing that their groupings are not particuarly in line with the foods' nutrient content.
Step22: Below are some examples of foods within the 6 groups made by the K-Means algorithm. We present to you, for fun, a small list of foods in each category, revealing some interesting wonders, as well as potentially some flaw in our design choices.
Step23: The proposed Meat group, followed by the proposed Vegetable group.
Step24: The proposed Cereal Group, followed by the proposed Fruits group.
Step25: The proposed Fat group, followed by the proposed Dairy group.
| <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
import pandas as pd
import sys
import numpy as np
import math
import matplotlib.pyplot as plt
from sklearn.feature_extraction import DictVectorizer
%matplotlib inline
import seaborn as sns
from collections import defaultdict, Counter
import statsmodels.formula.api as smf
from mpl_toolkits import mplot3d
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
def ReadProximates():
    """Reads the correct sheet from the Excel spreadsheet downloaded from the databank.
    Cleans the macronutrient data and replaces non-numerical entries with 0.
    Returns: cleaned DataFrame
    """
df = pd.read_excel('dietary.xls', sheetname='Proximates')
column_list = ['Water (g)', 'Protein (g)', 'Fat (g)', 'Carbohydrate (g)', 'Total sugars (g)']
df['Protein'] = pd.to_numeric(df['Protein (g)'], errors='coerce')
df['Fat'] = pd.to_numeric(df['Fat (g)'], errors='coerce')
df['Carbohydrate'] = pd.to_numeric(df['Carbohydrate (g)'], errors='coerce')
df['Sugars'] = pd.to_numeric(df['Total sugars (g)'], errors='coerce')
df['Protein'].replace([np.nan], 0, inplace=True)
df['Fat'].replace([np.nan], 0, inplace=True)
df['Carbohydrate'].replace([np.nan], 0, inplace=True)
df['Sugars'].replace([np.nan], 0, inplace=True)
return df
tester = ReadProximates()
x_vals = 'Protein'
y_vals = 'Carbohydrate'
z_vals = 'Fat'
food_group_dict = {'A':['Cereals','peru'], 'B':['Dairy','beige'], 'C':['Egg','paleturquoise'],
'D':['Vegetable','darkolivegreen'], 'F':['Fruit','firebrick'], 'G':['Nuts','saddlebrown'],
'J':['Fish','slategray'],'M':['Meat','indianred'], 'O':['Fat','khaki']}
ax = plt.subplot(111)
for key,val in food_group_dict.items():
df = tester[tester.Group.str.startswith(key, na=False)]
ax.scatter(df[x_vals],df[y_vals],df[z_vals],color=val[1],label = val[0])
plt.xscale('log')
plt.yscale('log')
ax.set_xlabel(x_vals+' (g)')
ax.set_ylabel(y_vals+' (g)')
ax.legend()
def ThreeDPlot(pred_cat, actual_cat, ax, actual_label, colors = ['firebrick', 'peru']):
    """Creates a 3D log-log-log scatter plot on the requested subplot.

    Arguments:
    pred_cat = predicted dataframe for a category
    actual_cat = dataframe of the real category
    ax = plt axis instance
    actual_label = string with label for the actual category
    colors = list with two entries of strings for color names
    """
ax.scatter3D(np.log(pred_cat.Protein),np.log(pred_cat.Carbs), np.log(pred_cat.Fat), c = colors[0], label = 'Predicted Group')
ax.scatter3D(np.log(actual_cat.Protein),np.log(actual_cat.Carbohydrate), np.log(actual_cat.Fat), c = colors[1], label = actual_label, alpha= .5)
ax.view_init(elev=10, azim=45)
ax.set_xlabel('Protein (log g)')
ax.set_ylabel('Carbohydrate (log g)')
ax.set_zlabel('Fat (log g)')
plt.legend()
cereals = tester[tester.Group.str.startswith('A', na=False)]
cereals['Label'] = cereals.Protein*0+2
fruits = tester[tester.Group.str.startswith('F', na=False)]
fruits['Label'] = fruits.Protein*0+3
veggies = tester[tester.Group.str.startswith('D', na=False)]
veggies['Label'] = veggies.Protein*0+1
dairy = tester[tester.Group.str.startswith('B', na=False)]
dairy['Label'] = dairy.Protein*0+5
oils = tester[tester.Group.str.startswith('O', na=False)]
oils['Label'] = oils.Protein*0+4
m1 = tester[tester.Group.str.startswith('J', na=False)]
m2 = tester[tester.Group.str.startswith('M', na=False)]
meats = pd.concat([m1,m2])
meats['Label'] = meats.Protein*0
all_these = pd.concat([cereals, fruits, veggies, dairy, oils, meats])
# Selects the appropriate macronutrient columns to feed to the kmeans algorithm
protein = pd.Series(all_these.Protein, name='Protein')
fat = pd.Series(all_these.Fat, name='Fat')
carbs = pd.Series(all_these.Carbohydrate, name='Carbs')
sugars = pd.Series(all_these['Sugars'], name='Sugars')
# Create a new DataFrame using only the macronutrient columns
X = pd.concat([protein,fat,carbs,sugars], axis=1)
X = X.fillna(0)  # assign back, otherwise the fill has no effect
kmeans = KMeans(n_clusters=6, random_state=0)
kmeans.fit(X.dropna())
y_kmeans = kmeans.predict(X)
ax = plt.subplot(projection='3d')
ax.scatter3D(np.log(X.Protein),np.log(X.Carbs), np.log(X.Fat), c = y_kmeans)
ax.view_init(elev=10, azim=45)
ax.set_xlabel('Protein (log g)')
ax.set_ylabel('Carbohydrate (log g)')
ax.set_zlabel('Fat (log g)')
# Create a way to select the categories
predicted_labels = pd.DataFrame(y_kmeans, index=X.index).astype(float)
X['predictions'] = predicted_labels
# Separate out the categories for individual analysis
labeled0 = X[X.predictions == 0]
labeled1 = X[X.predictions == 1]
labeled2 = X[X.predictions == 2]
labeled3 = X[X.predictions == 3]
labeled4 = X[X.predictions == 4]
labeled5 = X[X.predictions == 5]
all_these['guess'] = predicted_labels[0]
all_these['correct_guess'] = np.where((all_these.Label == all_these.guess), True, False)
all_these.groupby('guess').mean()
ax = plt.subplot(projection='3d')
ThreeDPlot(labeled0, meats, ax, 'Meats', ['firebrick','slategray'])
ax = plt.subplot(projection='3d')
ThreeDPlot(labeled3, fruits, ax, 'Fruits', ['firebrick','purple'])
accuracy_score(all_these.Label,predicted_labels)
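# (Added context) For reference, always predicting the most common Food Pyramid group
# would score the share of that group:
all_these.Label.value_counts(normalize=True).max()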
# Look at confusion matrix for some idea of accuracy. Meats has the highest rate of matching.
labels = ["meats", "vegetables", "cereal", "fruit", "oils", "dairy"]
predlabels = ["high protein", "all low", "high carb, low sugar", "high carb, high sugar", "high fat", "all medium"]
mat = confusion_matrix(all_these.Label, predicted_labels)
sns.heatmap(mat.T, square=True, xticklabels=labels, yticklabels=predlabels, annot=True, fmt="d", linewidth=.5)
plt.xlabel('Food Pyramid label')
plt.ylabel('K-Means label')
plt.title("Matrix Comparison of K-Means vs. Food Pyramid")
def HowMatched3D(df, label_int, actual_label):
    """Plots, on a 3D log-log-log scatter, how one predicted cluster overlaps a Food Pyramid group.

    Arguments:
    df = dataframe with Label, guess and correct_guess columns
    label_int = integer id of the group/cluster to compare
    actual_label = string with the name of the Food Pyramid group
    """
ax = plt.subplot(projection='3d')
TP = df[(df.Label == label_int)&(df.correct_guess==True)]
FP = df[(df.guess == label_int)&(df.correct_guess==False)]
FN = df[(df.Label == label_int)&(df.correct_guess==False)]
print('Matches:',len(TP), 'In Group, is not '+actual_label+':',len(FP), 'Not in Group, is '+actual_label+':',len(FN))
ax.scatter3D(np.log(TP.Protein),np.log(TP.Carbohydrate), np.log(TP.Fat), c = '#8F008F', label = 'In Group, is '+actual_label)
ax.scatter3D(np.log(FP.Protein),np.log(FP.Carbohydrate), np.log(FP.Fat), c = '#EB4C4C', label = 'In Group, is not '+actual_label)
ax.scatter3D(np.log(FN.Protein),np.log(FN.Carbohydrate), np.log(FN.Fat), c = '#4CA6FF', label = 'Not in Group, is '+actual_label)
ax.view_init(elev=10, azim=45)
ax.set_xlabel('Protein (log g)')
ax.set_ylabel('Carbohydrate (log g)')
ax.set_zlabel('Fat (log g)')
plt.legend()
HowMatched3D(all_these, 0, 'Meat')
HowMatched3D(all_these, 1, 'Vegetable')
HowMatched3D(all_these, 2, 'Cereal')
HowMatched3D(all_these, 3, 'Fruit')
HowMatched3D(all_these, 4, 'Oil')
HowMatched3D(all_these, 5, 'Dairy')
df = pd.DataFrame(mat.T/mat.T.sum(axis=0),
index=["high protein", "all low", "high carb, low sugar", "high carb, high sugar", "high fat", "all medium"],
columns=["meats", "vegetables", "cereals", "fruits", "fats", "dairy"])
df.columns.name = 'Group Breakdown (percentages)'
df = df.round(2)
df = df.multiply(100)
df = df.astype(int).astype(str)
for index, row in df.iterrows():
for i in range(6):
df.loc[index][i] = str(df.loc[index][i]) + "%"
df
def Examples(df, label_int, si = [0,5]):
    """Prints example food names for one group: matched foods first, then surprising members.

    Arguments:
    df = dataframe with Label, guess and correct_guess columns
    label_int = integer id of the group/cluster to inspect
    si = [start, stop] indices selecting which examples to print
    """
TP = df[(df.Label == label_int)&(df.correct_guess==True)]
FP = df[(df.guess == label_int)&(df.correct_guess==False)]
print("Guessed Similar:")
print(TP["Food Name"][si[0]:si[1]])
print("\nSurprising:")
print(FP["Food Name"][si[0]:si[1]])
print('High Protein Group')
Examples(all_these, 0)
print('\nLow Everything Group')
Examples(all_these, 1)
print('High Carb, Low Sugar Group')
Examples(all_these, 2)
print('\nHigh Carb, High Sugar Group')
Examples(all_these, 3)
print('High Fat Group')
Examples(all_these, 4)
print('\nMid Everything Group')
Examples(all_these, 5)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: bqplot scatter plot
Step2: Ipyvolume quiver plot
Step3: Linking ipyvolume and bqplot
Step4: Embedding
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import vaex
ds = vaex.example()
N = 2000 # for performance reasons we only do a subset
x, y, z, vx, vy, vz, Lz, E = [ds.columns[k][:N] for k in "x y z vx vy vz Lz E".split()]
import bqplot.pyplot as plt
plt.figure(1, title="E Lz space")
scatter = plt.scatter(Lz, E,
selected_style={'opacity': 0.2, 'size':1, 'stroke': 'red'},
unselected_style={'opacity': 0.2, 'size':1, 'stroke': 'blue'},
default_size=1,
)
plt.brush_selector()
plt.show()
import ipyvolume.pylab as ipv
ipv.clear()
quiver = ipv.quiver(x, y, z, vx, vy, vz, size=2, size_selected=5, color_selected="blue")
ipv.show()
from ipywidgets import jslink, VBox
jslink((scatter, 'selected'), (quiver, 'selected'))
hbox = VBox([ipv.current.container, plt.figure(1)])
# TODO: cannot display the figure twice currently
# hbox
import ipyvolume.embed
# if we don't do this, the bqplot will be really tiny in the standalone html
bqplot_layout = hbox.children[1].layout
bqplot_layout.min_width = "400px"
ipyvolume.embed.embed_html("bqplot.html", hbox, offline=True, devmode=True)
%debug
!open bqplot.html
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you never played with the low-level components of TensorFlow before, you probably would have expected the print operation to show the value of b at this point. Instead, we have to fetch the value of the variable by running the operation inside a Session object
Step2: Now, let's keep exploring and add some intermediate computations
Step3: Once again, you might have expected to obtain the value of b, since it was clearly computed to obtain c. While there are several methods to get it (such as calling its eval method), none of them is as satisfying as writing pure NumPy code, and none of them lends itself to an immediate debug using a visual editor.
Step4: Eager is enabled with a single line
Step5: The previous instruction should always be run at the beginning of a program. If it fails, simply reset the runtime of the notebook from Runtime >> Restart runtime. If you are working with v1.5 or v1.6, replace tf.enable_eager_execution() with tfe.enable_eager_execution()
Step6: Variables and gradients with eager execution
Step7: To further simplify development, in eager execution you can mix TF Tensors and NumPy arrays with automatic casting underneath
Step8: And you can extract the NumPy representation of a Tensor with a new numpy method
Step9: Finally, we can initialize an eager variable using the standard get_variable method, by specifying a flag
Step10: Computing gradients in eager
Step11: In programming terms, we are now dealing with an imperative definition, as opposed to a declarative interface.
Step12: And we can chain the use of tfe.gradients_function to get higher-order derivatives
Step13: Defining models using layers
Step14: We can stack multiple layers to create more complicated models. For example, a neural network with one hidden layer and dropout in the middle
Step15: Note the training flag we can use to differentiate between training and test (for the dropout layer). This is a standard practice for all layers that have a different behaviour in the two cases.
Step16: It is relatively similar to the functional interface of Keras. If you are working with v1.5 or v1.6, you need to use tfe.Network instead of tf.keras.Model. Note that variables are not yet initialized
Step17: However, it is enough to use the model a single time to automatically trigger the initialization. Networks objects can be called as if they were functions
Step18: Networks have several additional utilities to handle the models. For example, we can count the number of adaptable parameters of the model
Step19: For simple sequential models, eager execution also provides a short-hand with the Sequential object; the following is equivalent to the previous model definition
Step20: Handling the input pipeline
Step21: To cycle over the data, we can use the eager module tfe.Iterator instead of tf.data.Iterator, which acts as a standard Python iterator. For example, we can cycle over each batch in the dataset and print the proportion of the first class in each batch
Step22: from_tensor_slices creates a dataset having one element for each row of our original tensors. If you don't need batching, from_tensors will treat the entire tensor as a single element
Step23: We can apply further transformations to the dataset before processing it, such as repeating the entire dataset twice
Step24: Or shuffling the dataset each time we cycle over it
Step25: The parameter for the shuffle method is a buffer dimension
Step26: We are using softmax_cross_entropy_with_logits_v2 instead of the older softmax_cross_entropy_with_logits, which would throw a warning as it will be deprecated in future releases. The two are identical, except that the new version allows to back-propagate through the labels by default
Step27: Once again, another variant is the implicit_value_and_gradients, which returns both the value of the function and its gradients
Step28: Using the first syntax, the optimization cycle is now a trivial matter
Step29: Note how, when compared to the classical TensorFlow low-level interface, the previous code tends to be more readable, and closely resembles what the code would have looked like had we only used NumPy.
Step30: Evaluating the model and plotting the results
Step31: We can accumulate values inside the metric and print an average result as follows
Step32: Let us rewrite the optimization code, this time by accumulating the training accuracy at each epoch
Step33: We can use Matplotlib to plot the resulting accuracy
Step34: Saving summaries on disk and working with the TensorBoard
Step35: In order to save a summary, we need to select the writer as the default one, and tell TF that we want to save summaries everytime they are computed
Step36: Alternatively, we can instruct TF to save summaries only every given number of steps
Step37: The 'global step' is a variable inside TF that keeps track of the iterations of our optimization algorithm
Step38: Note that the variable is currently set to 0. If we want to update it correctly, we need to provide the global step during the optimization cycle
Step39: Alternatively, we can provide our own global step to the summary operation. This is particularly easy with eager execution enabled because we can work with standard int64 values
Step40: In the following example, we extend again our optimization routine to save the loss value on disk at every iteration
Step41: Now launch the tensorboard to visualize the loss
Step42: If you are running this notebook from a local machine, you can navigate to the address above to visualize the training details in the TensorBoard itself
Step43: In order to enable the GPU, we can specify a device explicitly for each operation that we want to run on that device
Step44: Alternatively, we can move the data to the GPU before running the computation
Step45: Given a variable x on the GPU, we can perform the inverse operation similarly
Step46: In order to save a comprehensive snapshot, we can save all variables of the model together with the optimizer's state, e.g.
Step47: We modify one again our training routine, this time by saving all variables at the end of every epoch
Step48: Eager provides a method to restore the latest checkpoint from the disk
| <ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
a = tf.constant(3.0)
b = a + 2.0
print(b)
sess = tf.Session()
with sess.as_default():
print(sess.run(b))
with tf.Session() as sess:
c = sess.run(1.5*b)
print(b)
!pip install tensorflow==v1.7rc0
import tensorflow as tf
import tensorflow.contrib.eager as tfe
import numpy as np
tf.enable_eager_execution()
from sklearn import datasets, preprocessing, model_selection
data = datasets.load_iris()
# Feature normalization on the input
X = preprocessing.MinMaxScaler(feature_range=(-1,+1)).fit_transform(data['data'])
# Encode the output using the one-hot encoding
y = preprocessing.OneHotEncoder(sparse=False).fit_transform(data['target'].reshape(-1, 1))
# Split in train/test sets
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.25, stratify=y)
W = tfe.Variable(0.5, name='w')
print(W)
print(W + np.asarray([1, 3]))
W.numpy()
W2 = tf.get_variable('W2', shape=[1], use_resource=True)
print(W2)
def f(X):
return 1.0 + 2.0 * X
f_grad = tfe.gradients_function(f)
print(f_grad(0.3))
a = tf.constant(0.3)
b = tf.constant(0.5)
def f(a, b):
return a*b
# Return the gradient for the first parameter only
print(tfe.gradients_function(f, params=[0])(1.0, 1.0))
# Alternative definition (by name)
# print(tfe.gradients_function(f, params=['a'])(1.0, 1.0))
def f(X):
return tf.square(X)
# Second-order derivative
f_gg = tfe.gradients_function(tfe.gradients_function(f))
f_gg(1.0)
lin = tf.layers.Dense(units=3, use_bias=True, activation=None)
hid = tf.layers.Dense(units=10, activation=tf.nn.relu)
drop = tf.layers.Dropout()
out = tf.layers.Dense(units=3, activation=None)
def nn_model(x, training=False):
return out(drop(hid(x), training=training))
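# (Added smoke test) Calling the stacked layers once builds their variables lazily
# and returns the three raw logits for a single training example.
nn_model(tf.constant(X_train[:1]), training=False)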
class SingleHiddenLayerNetwork(tf.keras.Model):
def __init__(self):
super(SingleHiddenLayerNetwork, self).__init__()
self.hidden_layer = tf.layers.Dense(10, activation=tf.nn.tanh, use_bias=True)
self.output_layer = tf.layers.Dense(3, use_bias=True, activation=None)
def call(self, x):
return self.output_layer(self.hidden_layer(x))
net = SingleHiddenLayerNetwork()
len(net.variables)
net(tf.constant(X_train[0:1]))
len(net.variables)
net.count_params()
net = tfe.Sequential(layers_funcs=[
tf.layers.Dense(10, activation=tf.nn.tanh, use_bias=True),
tf.layers.Dense(3, use_bias=True, activation=None)
])
train_dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train))
import numpy as np
for xb,yb in tfe.Iterator(train_dataset.batch(32)):
print('Percentage of class [0]: ', tf.reduce_mean(yb[:, 0]).numpy()*100, ' %')
train_dataset_alt = tf.data.Dataset.from_tensors((X_train, y_train))
for xb, yb in tfe.Iterator(train_dataset_alt):
# Check that the batch is equivalent to the entire training array
assert(np.all(X_train == xb.numpy()))
# Compute the percentage of labels for the first class
print('Percentage of class [0] (entire dataset): ', tf.reduce_mean(yb[:, 0]).numpy()*100, ' %')
for xb,yb in tfe.Iterator(train_dataset.repeat(2).batch(32)):
print('Percentage of class [0]: ', tf.reduce_mean(yb[:, 0]).numpy()*100, ' %')
for xb,yb in tfe.Iterator(train_dataset.shuffle(1000).batch(32)):
print('Percentage of class [0]: ', tf.reduce_mean(yb[:, 0]).numpy()*100, ' %')
def loss(net, inputs, labels):
return tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits_v2(
logits=net(inputs), labels=labels))
loss_grad = tfe.implicit_gradients(loss)
loss_and_grads = tfe.implicit_value_and_gradients(loss)
net = SingleHiddenLayerNetwork()
opt = tf.train.AdamOptimizer(learning_rate=0.01)
# Loop over the epochs
for epoch in range(50):
# For each epoch we shuffle the dataset
for (xb, yb) in tfe.Iterator(train_dataset.shuffle(1000).batch(32)):
opt.apply_gradients(loss_grad(net, xb, yb))
# Training accuracy at the end of each tenth epoch
if epoch % 10 == 0:
print("Epoch %d: Loss on training set : %f" %
(epoch, loss(net, X_train, y_train).numpy()))
opt.minimize(lambda: loss(net, xb, yb))
accuracy = tfe.metrics.Accuracy()
accuracy(tf.argmax(net(tf.constant(X_test)), axis=1), tf.argmax(tf.constant(y_test), axis=1))
print('Final test accuracy is: ', accuracy.result().numpy())
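# We can report the final test loss in the same way, reusing the loss function defined above
print('Final test loss is: ', loss(net, X_test, y_test).numpy())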
net = SingleHiddenLayerNetwork()
opt = tf.train.AdamOptimizer(learning_rate=0.01)
# Numpy array to keep track of the accuracy
acc_history = np.zeros(50)
# Loop over the epochs
for epoch in range(50):
# Initialize the metric
accuracy = tfe.metrics.Accuracy()
# For each epoch we shuffle the dataset
for (xb, yb) in tfe.Iterator(train_dataset.shuffle(1000).batch(32)):
opt.apply_gradients(loss_grad(net, xb, yb))
# Save the training accuracy on the batch
accuracy(tf.argmax(net(tf.constant(xb)), axis=1), tf.argmax(tf.constant(yb), axis=1))
# Save the overall accuracy in our vector
acc_history[epoch] = accuracy.result().numpy()
import matplotlib.pyplot as plt
plt.figure()
plt.plot(acc_history)
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.show()
writer = tf.contrib.summary.create_file_writer('tmp')
with writer.as_default():
with tf.contrib.summary.always_record_summaries():
tf.contrib.summary.scalar('scalar_value', 0.5)
with writer.as_default():
with tf.contrib.summary.record_summaries_every_n_global_steps(5):
tf.contrib.summary.scalar('scalar_value', 0.5) # Will only save every 5 steps
tf.train.get_global_step()
opt.apply_gradients(loss_grad(net, xb, yb), global_step=tf.train.get_or_create_global_step()) # Will apply gradients AND increase the step by one
tf.train.get_global_step()
with writer.as_default():
with tf.contrib.summary.record_summaries_every_n_global_steps(5):
tf.contrib.summary.scalar('scalar_value', 0.5, step=4) # This will save the value on disk
net = SingleHiddenLayerNetwork()
opt = tf.train.AdamOptimizer(learning_rate=0.01)
with writer.as_default():
with tf.contrib.summary.always_record_summaries():
# Loop over the epochs
for epoch in range(50):
# For each epoch we shuffle the dataset
for (xb, yb) in tfe.Iterator(train_dataset.shuffle(1000).batch(32)):
tf.contrib.summary.scalar('loss_value', loss(net, xb,yb))
opt.minimize(lambda: loss(net, xb,yb), global_step=tf.train.get_or_create_global_step())
!tensorboard --logdir=tmp
tfe.num_gpus()
with tf.device("/gpu:0"):
net(X_train[0:1, :])
net = net.gpu()
net(tf.constant(X_train[0:1, :]).gpu())
checkpointer = tfe.Checkpoint(W=W)
checkpointer.save('tmp/')
checkpointer.restore('tmp/')
checkpointer = tfe.Checkpoint(net=net,opt=opt, global_step=tf.train.get_or_create_global_step())
checkpointer.save('tmp/')
net = SingleHiddenLayerNetwork()
opt = tf.train.AdamOptimizer(learning_rate=0.01)
checkpointer = tfe.Checkpoint(net=net,opt=opt, global_step=tf.train.get_or_create_global_step())
for epoch in range(50):
for (xb, yb) in tfe.Iterator(train_dataset.shuffle(1000).batch(32)):
opt.minimize(lambda: loss(net, xb,yb))
checkpointer.save('tmp/')
checkpointer.restore(tf.train.latest_checkpoint('tmp/'))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 变量是动态的
Step2: 理解 Python 变量在内存中的表示
Step3: Python 有一些很优雅的设计,来提升性能,对于0-256这些常用的数字,Python 内部是有缓存的。
Step4: 下面的例子引入条件判断语句,if 语句。Python 中 if 语句很容易理解,注意条件后面的冒号不要忘记,还有缩进格式。
Step5: 下面的例子引入循环语句 for,后面还会详细解释,这里先有个印象即可。
Step6: print() 格式化输出说明
Step7: Python 从3.6版本开始引入 f-string,称为格式化字符串常量,主要目的是使格式化字符串的操作更加简便。f-string在形式上是以 f 或 F 修饰符引领的字符串(f'xxx' 或 F'xxx'),以大括号 {} 标明被替换的字段;f-string在本质上并不是字符串常量,而是一个在运行时运算求值的表达式。
| <ASSISTANT_TASK:>
Python Code:
counter = 100 # 整型变量
miles = 1000.0 # 浮点型(小数)
name = "John" # 字符串
name2 = 'Tom'
# 显示指定变量名的内容
print(name2)
flag = False # 布尔值
#显示变量的类型
print(type(flag))
# Assign one value to several variables at once; Python keeps this concise
a = b = c = 1
b = 2
print(a, b, c)
# Assigning string variables
s = s1 = 'Hello'
print(s, s1)
# Assign multiple values to multiple variables in one line
a, b, c = 1, 2, 3
print(a, b, c)
# Swap two variables
a, b = 1, 2
a, b = b, a
print(a, b)
# Define an integer variable
a = 1
print(a)
print(type(a))
# Rebind it to a string - variables are dynamic
a = 's'
print(a)
print(type(a))
# Change the contents of a variable
a = 'ABC'
b = a
a = 'XYZ'
print(b)
# id() returns the memory address of the object a variable points to
print(id(a))
print(id(b))
# Variables that point to the same cached value share the same address:
# note that id(100) is identical to the address of a
a = 100
b = 101
print(id(a))
print(id(b))
print(id(100))
# Once the value changes, the address changes as well
b = b + 1
print(id(a))
print(id(b))
# Outside the small-integer cache, equal values no longer share an address
a = 257
b = 257
print(id(a))
print(id(b))
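# Extra illustration (added, not in the original notebook): the small-integer cache
# is a CPython implementation detail, so identity (`is`) and equality (`==`) can
# disagree outside the cached range. Always compare values with ==.
x = 256
y = int('256')   # built at run time, so no constant folding is involved
print(x is y)    # True on CPython: 256 is inside the cached range
x = 10000
y = int('10000')
print(x is y)    # False: 10000 is outside the cache, so a new object is created
print(x == y)    # True: value equality is what code should rely on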
# Print a literal number and then a variable
print(100)
a = 100
print(a)
# Read input into a variable first, then print it
name = input('name:')
print('hello', name)
# Branch on the value of a
a = 10
if a > 10:
    print('big')
else:
    print('small')
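# A small extension (added for illustration): elif chains several conditions,
# and only the first branch whose condition is true runs.
score = 75
if score >= 90:
    print('excellent')
elif score >= 60:
    print('pass')
else:
    print('fail')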
for i in range(13):
print(i)
for i in 'hello':
print(i)
# printf-style formatted output
s = 'hello'
x = len(s)
print('The length of %s is %d' % (s, x))
# A nicer, more Pythonic way: str.format(), the enhanced string-formatting method
# For more details on format() see https://docs.python.org/zh-cn/3/library/string.html#format-string-syntax
# Passing arguments by keyword
print('{greet} from {language}'.format(greet='hello', language='English'))
# Passing arguments by position
print('{0} from {1}'.format('hello', 'English'))
# Another way to write formatted output
s = 'hello'
x = len(s)
print('The length of {word} is {length}'.format(word=s, length=x))
# More formatting examples
pi = 3.141592653
print('%10.3f' % pi)          # field width 10, precision 3
print("pi = %.*f" % (3, pi))  # * reads the width/precision from the argument tuple
print('%010.3f' % pi)         # pad the field with zeros
print('%-10.3f' % pi)         # left-aligned
print('%+f' % pi)             # always show the sign
a = 3.141592653
f'a is {a:10.3f}'
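# A few more f-string examples (added for illustration; plain Python with no
# project-specific assumptions; the "=" form needs Python 3.8+).
name, score = 'Tom', 92.5
print(f'{name} scored {score:.1f} points')  # format spec after the colon
print(f'{name:>10}|')                       # right-align in a field of width 10
print(f'{2 * 3 = }')                        # echo the expression and its value (3.8+)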
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='joint'></a>
Step2: The marginal distributions of a bivariate normal distribution are (univariate) normal distributions.
Step3: <a id='mvn'></a>
| <ASSISTANT_TASK:>
Python Code:
from symbulate import *
%matplotlib inline
RV(BivariateNormal(mean1 = 0, mean2 = 1, sd1 = 1, sd2 = 2, corr = 0.5)).sim(5)
x = RV(BivariateNormal(mean1 = 0, mean2 = 1, sd1 = 1, sd2 = 2, corr = 0.5)).sim(1000)
x.plot(alpha = 0.2)
x.mean(), x.sd(), x.corr()
RV(BivariateNormal(mean1 = 0, mean2 = 1, sd1 = 1, sd2 = 2, corr = 0.5)).sim(1000).plot(alpha = 0.2)
RV(BivariateNormal(mean1 = 0, mean2 = -2, var1 = 1, var2 = 4, cov = -1.8)).sim(1000).plot(alpha = 0.2)
X, Y = RV(BivariateNormal(mean1 = 0, mean2 = 1, sd1 = 1, sd2 = 2, corr = 0.5))
X.sim(10000).plot()
mu = [1, 2, 3]
Sigma = [[1, 1, -2],
[1, 4, 0],
[-2, 0, 9]]
X = RV(MultivariateNormal(mean = mu, cov = Sigma))
X.sim(5)
x = X.sim(10000)
x.mean(), x.cov()
X, Y, Z = RV(MultivariateNormal(mean = mu, cov = Sigma))
(X & Y).sim(10000).plot(alpha = 0.2)
X.sim(10000).plot()
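# Quick sanity check (illustrative addition): the Y marginal of the trivariate
# normal above should itself look normal with mean 2 and sd 2 (square root of the
# 4 on the diagonal of Sigma). Only simulation methods already used here are relied on.
y_sims = Y.sim(10000)
y_sims.plot()
y_sims.mean(), y_sims.sd()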
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: MNIST
Step2: Binary classifier
Step3: Note
Step4: ROC curves
Step6: Multiclass classification
Step7: Multilabel classification
Step8: Warning
Step9: Multioutput classification
Step10: Extra material
Step11: KNN classifier
| <ASSISTANT_TASK:>
Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "classification"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
mnist
X, y = mnist["data"], mnist["target"]
X.shape
y.shape
28*28
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
some_digit = X[36000]
some_digit_image = some_digit.reshape(28, 28)
plt.imshow(some_digit_image, cmap = matplotlib.cm.binary,
interpolation="nearest")
plt.axis("off")
save_fig("some_digit_plot")
plt.show()
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = matplotlib.cm.binary,
interpolation="nearest")
plt.axis("off")
# EXTRA
def plot_digits(instances, images_per_row=10, **options):
size = 28
images_per_row = min(len(instances), images_per_row)
images = [instance.reshape(size,size) for instance in instances]
n_rows = (len(instances) - 1) // images_per_row + 1
row_images = []
n_empty = n_rows * images_per_row - len(instances)
images.append(np.zeros((size, size * n_empty)))
for row in range(n_rows):
rimages = images[row * images_per_row : (row + 1) * images_per_row]
row_images.append(np.concatenate(rimages, axis=1))
image = np.concatenate(row_images, axis=0)
plt.imshow(image, cmap = matplotlib.cm.binary, **options)
plt.axis("off")
plt.figure(figsize=(9,9))
example_images = np.r_[X[:12000:600], X[13000:30600:600], X[30600:60000:590]]
plot_digits(example_images, images_per_row=10)
save_fig("more_digits_plot")
plt.show()
y[36000]
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
import numpy as np
shuffle_index = np.random.permutation(60000)
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
y_train_5 = (y_train == 5)
y_test_5 = (y_test == 5)
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(random_state=42)
sgd_clf.fit(X_train, y_train_5)
sgd_clf.predict([some_digit])
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy")
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone
skfolds = StratifiedKFold(n_splits=3, random_state=42)
for train_index, test_index in skfolds.split(X_train, y_train_5):
clone_clf = clone(sgd_clf)
X_train_folds = X_train[train_index]
y_train_folds = (y_train_5[train_index])
X_test_fold = X_train[test_index]
y_test_fold = (y_train_5[test_index])
clone_clf.fit(X_train_folds, y_train_folds)
y_pred = clone_clf.predict(X_test_fold)
n_correct = sum(y_pred == y_test_fold)
print(n_correct / len(y_pred))
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
def fit(self, X, y=None):
pass
def predict(self, X):
return np.zeros((len(X), 1), dtype=bool)
never_5_clf = Never5Classifier()
cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring="accuracy")
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train_5, y_train_pred)
y_train_perfect_predictions = y_train_5
confusion_matrix(y_train_5, y_train_perfect_predictions)
from sklearn.metrics import precision_score, recall_score
precision_score(y_train_5, y_train_pred)
4344 / (4344 + 1307)
recall_score(y_train_5, y_train_pred)
4344 / (4344 + 1077)
from sklearn.metrics import f1_score
f1_score(y_train_5, y_train_pred)
4344 / (4344 + (1077 + 1307)/2)
y_scores = sgd_clf.decision_function([some_digit])
y_scores
threshold = 0
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
threshold = 200000
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3,
method="decision_function")
y_scores.shape
# hack to work around issue #9589 introduced in Scikit-Learn 0.19.0
if y_scores.ndim == 2:
y_scores = y_scores[:, 1]
from sklearn.metrics import precision_recall_curve
precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
plt.plot(thresholds, precisions[:-1], "b--", label="Precision", linewidth=2)
plt.plot(thresholds, recalls[:-1], "g-", label="Recall", linewidth=2)
plt.xlabel("Threshold", fontsize=16)
plt.legend(loc="upper left", fontsize=16)
plt.ylim([0, 1])
plt.figure(figsize=(8, 4))
plot_precision_recall_vs_threshold(precisions, recalls, thresholds)
plt.xlim([-700000, 700000])
save_fig("precision_recall_vs_threshold_plot")
plt.show()
(y_train_pred == (y_scores > 0)).all()
y_train_pred_90 = (y_scores > 70000)
precision_score(y_train_5, y_train_pred_90)
recall_score(y_train_5, y_train_pred_90)
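# Optional check (not in the original notebook): rather than hand-picking 70000,
# the lowest threshold reaching 90% precision can be read off the arrays computed
# by precision_recall_curve above.
threshold_90_precision = thresholds[np.argmax(precisions >= 0.90)]
y_train_pred_90_auto = (y_scores >= threshold_90_precision)
print(threshold_90_precision,
      precision_score(y_train_5, y_train_pred_90_auto),
      recall_score(y_train_5, y_train_pred_90_auto))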
def plot_precision_vs_recall(precisions, recalls):
plt.plot(recalls, precisions, "b-", linewidth=2)
plt.xlabel("Recall", fontsize=16)
plt.ylabel("Precision", fontsize=16)
plt.axis([0, 1, 0, 1])
plt.figure(figsize=(8, 6))
plot_precision_vs_recall(precisions, recalls)
save_fig("precision_vs_recall_plot")
plt.show()
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)
def plot_roc_curve(fpr, tpr, label=None):
plt.plot(fpr, tpr, linewidth=2, label=label)
plt.plot([0, 1], [0, 1], 'k--')
plt.axis([0, 1, 0, 1])
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.figure(figsize=(8, 6))
plot_roc_curve(fpr, tpr)
save_fig("roc_curve_plot")
plt.show()
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_5, y_scores)
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(random_state=42)
y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,
method="predict_proba")
y_scores_forest = y_probas_forest[:, 1] # score = proba of positive class
fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5,y_scores_forest)
plt.figure(figsize=(8, 6))
plt.plot(fpr, tpr, "b:", linewidth=2, label="SGD")
plot_roc_curve(fpr_forest, tpr_forest, "Random Forest")
plt.legend(loc="lower right", fontsize=16)
save_fig("roc_curve_comparison_plot")
plt.show()
roc_auc_score(y_train_5, y_scores_forest)
y_train_pred_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3)
precision_score(y_train_5, y_train_pred_forest)
recall_score(y_train_5, y_train_pred_forest)
sgd_clf.fit(X_train, y_train)
sgd_clf.predict([some_digit])
some_digit_scores = sgd_clf.decision_function([some_digit])
some_digit_scores
np.argmax(some_digit_scores)
sgd_clf.classes_
sgd_clf.classes_[5]
from sklearn.multiclass import OneVsOneClassifier
ovo_clf = OneVsOneClassifier(SGDClassifier(random_state=42))
ovo_clf.fit(X_train, y_train)
ovo_clf.predict([some_digit])
len(ovo_clf.estimators_)
forest_clf.fit(X_train, y_train)
forest_clf.predict([some_digit])
forest_clf.predict_proba([some_digit])
cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring="accuracy")
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
cross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring="accuracy")
y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx
def plot_confusion_matrix(matrix):
    """If you prefer color and a colorbar"""
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
cax = ax.matshow(matrix)
fig.colorbar(cax)
plt.matshow(conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_plot", tight_layout=False)
plt.show()
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
np.fill_diagonal(norm_conf_mx, 0)
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_errors_plot", tight_layout=False)
plt.show()
cl_a, cl_b = 3, 5
X_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)]
X_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)]
X_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)]
X_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)]
plt.figure(figsize=(8,8))
plt.subplot(221); plot_digits(X_aa[:25], images_per_row=5)
plt.subplot(222); plot_digits(X_ab[:25], images_per_row=5)
plt.subplot(223); plot_digits(X_ba[:25], images_per_row=5)
plt.subplot(224); plot_digits(X_bb[:25], images_per_row=5)
save_fig("error_analysis_digits_plot")
plt.show()
from sklearn.neighbors import KNeighborsClassifier
y_train_large = (y_train >= 7)
y_train_odd = (y_train % 2 == 1)
y_multilabel = np.c_[y_train_large, y_train_odd]
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)
knn_clf.predict([some_digit])
y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3)
f1_score(y_multilabel, y_train_knn_pred, average="macro")
noise = np.random.randint(0, 100, (len(X_train), 784))
X_train_mod = X_train + noise
noise = np.random.randint(0, 100, (len(X_test), 784))
X_test_mod = X_test + noise
y_train_mod = X_train
y_test_mod = X_test
some_index = 5500
plt.subplot(121); plot_digit(X_test_mod[some_index])
plt.subplot(122); plot_digit(y_test_mod[some_index])
save_fig("noisy_digit_example_plot")
plt.show()
knn_clf.fit(X_train_mod, y_train_mod)
clean_digit = knn_clf.predict([X_test_mod[some_index]])
plot_digit(clean_digit)
save_fig("cleaned_digit_example_plot")
from sklearn.dummy import DummyClassifier
dmy_clf = DummyClassifier()
y_probas_dmy = cross_val_predict(dmy_clf, X_train, y_train_5, cv=3, method="predict_proba")
y_scores_dmy = y_probas_dmy[:, 1]
fprr, tprr, thresholdsr = roc_curve(y_train_5, y_scores_dmy)
plot_roc_curve(fprr, tprr)
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(n_jobs=-1, weights='distance', n_neighbors=4)
knn_clf.fit(X_train, y_train)
y_knn_pred = knn_clf.predict(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_knn_pred)
from scipy.ndimage.interpolation import shift
def shift_digit(digit_array, dx, dy, new=0):
return shift(digit_array.reshape(28, 28), [dy, dx], cval=new).reshape(784)
plot_digit(shift_digit(some_digit, 5, 1, new=100))
X_train_expanded = [X_train]
y_train_expanded = [y_train]
for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
shifted_images = np.apply_along_axis(shift_digit, axis=1, arr=X_train, dx=dx, dy=dy)
X_train_expanded.append(shifted_images)
y_train_expanded.append(y_train)
X_train_expanded = np.concatenate(X_train_expanded)
y_train_expanded = np.concatenate(y_train_expanded)
X_train_expanded.shape, y_train_expanded.shape
knn_clf.fit(X_train_expanded, y_train_expanded)
y_knn_expanded_pred = knn_clf.predict(X_test)
accuracy_score(y_test, y_knn_expanded_pred)
ambiguous_digit = X_test[2589]
knn_clf.predict_proba([ambiguous_digit])
plot_digit(ambiguous_digit)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load
Step2: Account for NaN in column name.
Step3: Transform
Step4: Bag of Words
Step5: Stop Words
Step6: This code
Step7: To DF
Step8: Words to Features
Step9: LDA
Step10: Topics
| <ASSISTANT_TASK:>
Python Code:
import re
import random
import lda  # required below for lda.LDA()
import csv
import numpy as np
import pandas as pd
from collections import Counter
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import CountVectorizer
df = pd.read_csv('../../data/cleaned/UCB_dept_merge.csv')
df['product_line'] = df['supplier_name'] + ' ' + \
df['product_description'] + ' ' + \
df['manufacturer']
random.seed(8675309)
rows = random.sample(df.index, 40000)
df = df.ix[rows].reset_index()
df.head()
cols = ['supplier_name', 'item_type', 'product_description', 'manufacturer',
'buyer__first_name', 'buyer__last_name', 'department_name', 'product_line']
for col in cols:
df[col] = df[col].replace(np.nan, '' , regex=True) \
.apply(lambda x: x.lower()) \
.apply(lambda x: re.sub('(http\S*|www\S*)', '', x)) \
.apply(lambda x: re.sub('((?<=\D)/|/(?=\D))', ' ', x)) \
.apply(lambda x: re.sub('[^A-Za-z0-9.%\/]+', ' ', x)) \
.apply(lambda x: re.sub('\.+', '', x)) \
.apply(lambda x: re.sub('(?<=\s)\w(?=\s)|(?<=\s)\d(?=\s)', '', x)) \
.apply(lambda x: re.sub('\s+', ' ', x).strip())
df.head()
tokenized_pd = [word_tokenize(line) for line in df.product_line]
stop_words = stopwords.words('english') + \
[u'ea', u'per', u'item', u'description', u'quote', u'pk', u'pack',
'give', 'something', 'inc', 'corporation', 'quantity', 'back',
'products', 'co', 'officemax', 'unit', 'corp']
tokenized_pd_clean = []
for entry in tokenized_pd:
entry_list = []
for word in entry:
if ((not word in stop_words) and \
(not unicode(word).isnumeric()) and \
(not len(word) <= 1)):
entry_list.append(word)
tokenized_pd_clean.append(entry_list)
df['tokenized_pd_clean'] = tokenized_pd_clean
pd_list_clean = []
for item in tokenized_pd_clean:
pd_list_clean.append(' '.join(item))
vectorizer = CountVectorizer(analyzer = "word",
tokenizer = None,
preprocessor = None,
stop_words = None)
word_features = vectorizer.fit_transform(pd_list_clean).toarray()
word_features.shape
word_features[0:5,:]
vocab = vectorizer.get_feature_names()
print vocab[:15]
vocab_map = vectorizer.vocabulary_
X = word_features
model = lda.LDA(n_topics=15, n_iter=1500, random_state=8675309)
model.fit(X)
topic_word = model.topic_word_
n_top_words = 21
with open('../../results/topic_definitions.csv', 'wb') as to_:
writer = csv.writer(to_, delimiter=',', quotechar='\"')
doc_topic = model.doc_topic_
for i, topic_dist in enumerate(topic_word):
topic_words = np.array(vocab)[np.argsort(topic_dist)][:-n_top_words:-1]
writer.writerow([i, ' '.join(topic_words)])
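# Also print the top words per topic to stdout (illustrative convenience; it
# reuses exactly the arrays written to the CSV above).
for i, topic_dist in enumerate(topic_word):
    top_words = np.array(vocab)[np.argsort(topic_dist)][:-n_top_words:-1]
    print('Topic {}: {}'.format(i, ' '.join(top_words)))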
with open('../../results/pd_topics.csv', 'wb') as to_:
writer = csv.writer(to_, delimiter=',', quotechar='\"')
doc_topic = model.doc_topic_
for i in range(len(tokenized_pd_clean)):
writer.writerow([tokenized_pd_clean[i], doc_topic[i].argmax()])
words = [w.strip().split(' ') for w in pd_list_clean]
word_list = [i for word in words for i in word]
word_counts = Counter(word_list)
top_100_words = word_counts.most_common(100)
for word in top_100_words:
print word
topics = pd.read_csv('../../results/pd_topics_10.csv', header=None)
topics.columns = ['tpc', 'topic']
df['tpc'] = topics.tpc
df['topic'] = topics.topic
depts = pd.DataFrame({'count' : df.groupby('department_name')['department_name'].count()}).reset_index()
depts.sort('count', ascending=False, inplace=True)
top15 = depts['department_name'][:25].tolist()
df_top15 = df[df.department_name.isin(top15)]
df_top15 = df_top15[df_top15['product_line'] != '']
topics_by_dept = pd.DataFrame({'count' : df_top15.groupby(['department_name', 'topic'])['topic'].count()}).reset_index()
topic_def = pd.read_csv('../../results/topics_definitions_10.csv', header=None)
topic_def.columns = ['topic', 'words']
topic_def['words'] = topic_def['words'].apply(lambda x: ', '.join(x.split()[:10]))
df_top15_final = pd.merge(topics_by_dept, topic_def, on='topic')
df_top15_final.to_csv('../../results/topic_count_10.csv', index=False)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step3: Call function to compute benchmarks
Step4: Lets look at the results
Step5: Timing and some accuracy scores across trials
Step6: Lets look the stability of feature importances across trials
Step7: Lets look at how the RF benchmarks varies as a function of n_estimators
Step8: iRF benchmarks
Step9: Lets look at the results
Step10: Timing and some accuracy scores across trials
Step11: Lets look the stability of feature importances across trials
Step12: Finally, lets examine the discovered interactions across trials
Step13: Lets look at how the RF benchmarks varies as a function of n_estimators
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import time
import matplotlib.pyplot as plt
from sklearn import tree
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from IPython.display import display, Image
from sklearn.datasets import load_breast_cancer
# Import our custom utilities
from imp import reload
from utils import irf_jupyter_utils
from utils import irf_utils
from utils import iRF_benchmarks_lib
reload(irf_jupyter_utils)
reload(irf_utils)
# load breast cancer data
#raw_data = load_breast_cancer()
#responses = raw_data.target
#features = raw_data.data
features = np.loadtxt('data/breast_cancer_features.csv', delimiter=',')
responses = np.loadtxt('data/breast_cancer_responses.csv', delimiter=',')
# load splicing data
# assumes Y_X_splicing.txt is in same folder as this notebook
data = np.loadtxt('Y_X_splicing.txt', skiprows = 1, usecols = range(1,307))
indices_high = data[:,-1] > 0.7
indices_low = data[:,-1] < 0.3
responses = np.zeros(np.shape(data)[0])
responses[indices_high] = 1
features = data[:, 0:270]
# keep data with high or low responses
responses = responses[np.logical_or(indices_high, indices_low)]
features = features[np.logical_or(indices_high, indices_low), :]
n_trials = 10 # number of times to run random forest for our benchmarks
n_estimators = 20 # number of trees in the random forest
train_split_propn = 0.8
specs = {'n_trials': n_trials,
'n_estimators': n_estimators,
'train_split_propn': train_split_propn,
'N_obs': np.shape(features)[0], # use all data points
'N_features': np.shape(features)[1] # use all features
}
rf_bm = iRF_benchmarks_lib.consolidate_bm_RF(features, responses, specs, seed = 2017)
print(specs)
n_trials = 10 # number of times to run random forest for our benchmarks
n_estimators = 500 # number of trees in the random forest
train_split_propn = 0.8
metrics_all, metrics_summary, feature_importances = \
iRF_benchmarks_lib.RF_benchmarks(features, responses,
n_trials = n_trials,
train_split_propn = train_split_propn,
n_estimators=n_estimators,
seed = 2017)
print('Dimensions of full dataset (#samples , # features): ', np.shape(features))
print('Number of training samples: ', np.round(np.shape(features)[0] * specs['train_split_propn'][0]))
print('Number of test samples: ', np.round(np.shape(features)[0]*(1-specs['train_split_propn'][0])))
print('number of trees in the random forest: ', specs['n_estimators'][0])
print('time (seconds) to compute RF [mean, std]: ', rf_bm[0]['metrics_summary']['time'])
print('accuracy_score [mean, std]: ', rf_bm[0]['metrics_summary']['accuracy_score'])
print('hammming_loss [mean, std]: ', rf_bm[0]['metrics_summary']['hamming_loss'])
print('top five feature importances across trials')
for i in range(n_trials):
# sort by feature importance
importances_rank = np.argsort(rf_bm[0]['feature_importances'][i])[::-1]
print('trial' + str(i) + ': ', importances_rank[0:10])
n_trials = 10 # number of times to run random forest for our benchmarks
n_estimators = [20, 50, 100, 150, 200, 300, 400, 500] # number of trees in the random forest
train_split_propn = 0.8
specs = {'n_trials': n_trials,
'n_estimators': n_estimators,
'train_split_propn': train_split_propn,
'N_obs': np.shape(features)[0], # use all data points
'N_features': np.shape(features)[1] # use all features
}
rf_bm = iRF_benchmarks_lib.consolidate_bm_RF(features, responses, specs, seed = 2017)
iRF_benchmarks_lib.plot_bm_RF(rf_bm, specs, 'n_estimators', 'time')
iRF_benchmarks_lib.plot_bm_RF(rf_bm, specs, 'n_estimators', 'accuracy_score')
n_trials = 2
train_split_propn = 0.8
n_estimators = 20
n_bootstraps = 20
n_RIT = 20
max_depth = 5
n_estimators_bootstrap = 5
iRF_specs = {'n_trials': n_trials,
'n_iter': n_estimators,
'train_split_propn': train_split_propn,
'n_estimators': n_estimators,
'n_bootstraps': n_bootstraps,
'propn_n_samples': 0.2,
'bin_class_type': 1,
'n_RIT': n_RIT,
'max_depth': max_depth,
'noisy_split': False,
'num_splits': 2,
'n_estimators_bootstrap': n_estimators_bootstrap,
'N_obs': np.shape(features)[0], # use all data points
'N_features': np.shape(features)[1] # use all features
}
iRF_bm = iRF_benchmarks_lib.consolidate_bm_iRF(features, responses, iRF_specs, seed = None)
print('Dimensions of full dataset (#samples , # features): ', np.shape(features))
print('Number of training samples: ', np.round(np.shape(features)[0] * train_split_propn))
print('Number of test samples: ', np.round(np.shape(features)[0]*(1-train_split_propn)))
print('\n')
print('number of trees in full random forest: ', n_estimators)
print('number of bootstrap samples: ', n_bootstraps)
print('number of trees in RIT: ', n_RIT)
print('max depth of RIT: ', max_depth)
print('number of trees is RF bootstrap: ', n_estimators_bootstrap)
print('time (seconds) to compute iRF [mean, std]: ', iRF_bm[0]['metrics_summary']['time'])
print('\n')
print('accuracy_score [mean, std]: ', iRF_bm[0]['metrics_summary']['accuracy_score'])
print('hammming_loss [mean, std]: ', iRF_bm[0]['metrics_summary']['hamming_loss'])
print('top five important features across trials')
for i in range(n_trials):
importances_rank = np.argsort(iRF_bm[0]['feature_importances'][i])[::-1]
print('trial' + str(i) + ': ', importances_rank[0:5])
print('top five stable interactions across trials')
for i in range(n_trials):
# sort by stability
stability = sorted(iRF_bm[0]['stability_all'][i].values(), reverse=True)
interactions = sorted(iRF_bm[0]['stability_all'][i], key=iRF_bm[0]['stability_all'][i].get, reverse=True)
print('trial' + str(i) + ': ', interactions[0:5])
n_trials = 3
train_split_propn = 0.8
n_estimators = [20, 50, 100]
n_bootstraps = 20
n_RIT = 20
max_depth = 5
n_estimators_bootstrap = 5
iRF_specs = {'n_trials': n_trials,
'n_iter': 5,
'train_split_propn': train_split_propn,
'n_estimators': n_estimators,
'n_bootstraps': n_bootstraps,
'propn_n_samples': 0.2,
'bin_class_type': 1,
'n_RIT': n_RIT,
'max_depth': max_depth,
'noisy_split': False,
'num_splits': 2,
'n_estimators_bootstrap': n_estimators_bootstrap,
'N_obs': np.shape(features)[0], # use all data points
'N_features': np.shape(features)[1] # use all features
}
iRF_bm = iRF_benchmarks_lib.consolidate_bm_iRF(features, responses, iRF_specs, seed = 2018)
plot_bm(iRF_bm, specs, 'n_estimators', 'time')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Given a variable called angle, print out the sine of it like so
Step2: Using string formatting show the decimal and binary representation of the number 34. Your format function should only take one argument, the number 34.
Step3: Using a for loop, print the integers from 0 to 8 in binary such that each line is right-aligned.
| <ASSISTANT_TASK:>
Python Code:
from math import pi
print('{:.3}'.format(pi / 2))
from math import *
x = 1.2
print('The sine of {:.2} radians is {:.2}'.format(x, sin(x)))
print('binary: {0:b}, decimal: {0:}'.format(34))
for i in range(9):
print('{:5b}'.format(i))
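# The same exercise written with an f-string instead of str.format(), shown for
# comparison (assumes Python 3.6+, nothing else).
for i in range(9):
    print(f'{i:5b}')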
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As you can see, the squad has about 47% chance of scoring 1 or 0 hits and around 53% chance of scoring 2 or more hits. The expectation is 1.7, which is misleading
Step2: The distribution shifted to the right, the chance to score 2+ hits went up to around 62%, but really the effect is negligible, with the re-roll still just as likely to miss as it is to hit. Now let's look at 'First Rank Fire!, Second Rank Fire!', which turns Lasguns into Rapid Fire 2 weapons
Step3: A much stronger effect. You still cannot really rely on getting 3+ hits (~70% chance), but they will happen often enough and rarely would you get less than 2.
Step4: Maybe not as much as one would think, the chance to score 2+ has gone up to around 68%, compared to the troop squad's 53%. If you have 60 points to spend, you are probably better off with 15 regular shots
| <ASSISTANT_TASK:>
Python Code:
profiles[0] = {'shots': 10, 'p_hit': 1 / 2, 'p_wound': 1 / 2, 'p_unsaved': 4 / 6, 'damage': '1'}
profile_damage = damage_dealt(profiles[0])
wound_chart(profile_damage, profiles)
profiles[0]['p_hit'] = 0.583
wound_chart(damage_dealt(profiles[0]), profiles)
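# Where the 0.583 above comes from (illustrative arithmetic, consistent with
# re-rolling hit rolls of 1 on a 4+ to hit):
# P(hit) = P(4+) + P(roll a 1) * P(4+ on the re-roll) = 1/2 + (1/6) * (1/2) = 7/12
p_hit_reroll_ones = 1/2 + (1/6) * (1/2)
print(round(p_hit_reroll_ones, 3))  # ~0.583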
profiles[0]['p_hit'] = 0.5
profiles[0]['shots'] = 20
wound_chart(damage_dealt(profiles[0]), profiles)
profiles[0]['shots'] = 10
profiles[0]['p_hit'] = 2 / 3
wound_chart(damage_dealt(profiles[0]), profiles)
profiles[0]['shots'] = 15
profiles[0]['p_hit'] = 1 / 2
wound_chart(damage_dealt(profiles[0]), profiles)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id="ref0"></a>
Step2: dataset object
Step3: <a id='ref1'> </a>
Step4: A function used to train.
Step5: A function used to calculate accuracy
Step6: <a id="ref2"></a>
Step7: Create a network to classify three classes with 1 hidden layer with 50 neurons
Step8: Create a network to classify three classes with 2 hidden layer with 20 neurons
| <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from matplotlib.colors import ListedColormap
torch.manual_seed(1)
def plot_decision_regions_3class(model,data_set):
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA','#00AAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00','#00AAFF'])
X=data_set.x.numpy()
y=data_set.y.numpy()
h = .02
x_min, x_max = X[:, 0].min()-0.1 , X[:, 0].max()+0.1
y_min, y_max = X[:, 1].min()-0.1 , X[:, 1].max() +0.1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),np.arange(y_min, y_max, h))
    XX = torch.Tensor(np.c_[xx.ravel(), yy.ravel()])
_,yhat=torch.max(model(XX),1)
yhat=yhat.numpy().reshape(xx.shape)
plt.pcolormesh(xx, yy, yhat, cmap=cmap_light)
plt.plot(X[y[:]==0,0],X[y[:]==0,1],'ro',label='y=0')
plt.plot(X[y[:]==1,0],X[y[:]==1,1],'go',label='y=1')
plt.plot(X[y[:]==2,0],X[y[:]==2,1],'o',label='y=2')
plt.title("decision region")
plt.legend()
from torch.utils.data import Dataset, DataLoader
class Data(Dataset):
# modified from: http://cs231n.github.io/neural-networks-case-study/
def __init__(self,K=3,N=500):
D = 2
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
ix = range(N*j,N*(j+1))
r = np.linspace(0.0,1,N) # radius
t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
y[ix] = j
self.y=torch.from_numpy(y).type(torch.LongTensor)
self.x=torch.from_numpy(X).type(torch.FloatTensor)
self.len=y.shape[0]
def __getitem__(self,index):
return self.x[index],self.y[index]
def __len__(self):
return self.len
def plot_stuff(self):
plt.plot(self.x[self.y[:]==0,0].numpy(),self.x[self.y[:]==0,1].numpy(),'o',label="y=0")
plt.plot(self.x[self.y[:]==1,0].numpy(),self.x[self.y[:]==1,1].numpy(),'ro',label="y=1")
plt.plot(self.x[self.y[:]==2,0].numpy(),self.x[self.y[:]==2,1].numpy(),'go',label="y=2")
plt.legend()
class Net(nn.Module):
def __init__(self,Layers):
super(Net,self).__init__()
self.hidden = nn.ModuleList()
for input_size,output_size in zip(Layers,Layers[1:]):
self.hidden.append(nn.Linear(input_size,output_size))
def forward(self,activation):
L=len(self.hidden)
for (l,linear_transform) in zip(range(L),self.hidden):
if l<L-1:
activation =F.relu(linear_transform (activation))
else:
activation =linear_transform (activation)
return activation
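# Quick shape check (added for illustration): build a small network and push a
# random batch through it to confirm the forward pass returns one score per class.
_check_net = Net([2, 50, 3])
_check_out = _check_net(torch.randn(4, 2))
print(_check_out.shape)  # expected: torch.Size([4, 3])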
def train(data_set,model,criterion, train_loader, optimizer, epochs=100):
LOSS=[]
ACC=[]
for epoch in range(epochs):
for x,y in train_loader:
            optimizer.zero_grad()
            yhat = model(x)
            loss = criterion(yhat, y)
            loss.backward()
            optimizer.step()
LOSS.append(loss.item())
ACC.append(accuracy(model,data_set))
fig, ax1 = plt.subplots()
color = 'tab:red'
ax1.plot(LOSS,color=color)
ax1.set_xlabel('epoch',color=color)
ax1.set_ylabel('total loss',color=color)
ax1.tick_params(axis='y', color=color)
ax2 = ax1.twinx()
color = 'tab:blue'
ax2.set_ylabel('accuracy', color=color) # we already handled the x-label with ax1
ax2.plot( ACC, color=color)
ax2.tick_params(axis='y', labelcolor=color)
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.show()
return LOSS
def accuracy(model,data_set):
_,yhat=torch.max(model(data_set.x),1)
return (yhat==data_set.y).numpy().mean()
data_set=Data()
data_set.plot_stuff()
data_set.y=data_set.y.view(-1)
Layers=[2,50,3]
model=Net(Layers)
learning_rate=0.10
optimizer=torch.optim.SGD(model.parameters(), lr=learning_rate)
train_loader=DataLoader(dataset=data_set,batch_size=20)
criterion=nn.CrossEntropyLoss()
LOSS=train(data_set,model,criterion, train_loader, optimizer, epochs=100)
plot_decision_regions_3class(model,data_set)
Layers=[2,10,10,3]
model=Net(Layers)
learning_rate=0.01
optimizer=torch.optim.SGD(model.parameters(), lr=learning_rate)
train_loader=DataLoader(dataset=data_set,batch_size=20)
criterion=nn.CrossEntropyLoss()
LOSS=train(data_set,model,criterion, train_loader, optimizer, epochs=1000)
plot_decision_regions_3class(model,data_set)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Step8: Training the network
Step9: Check out your predictions
Step10: Thinking about your results
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = self.sigmoid
def sigmoid(self, x):
return 1 / (1 + np.exp(-x))
def forward_pass(self, inputs):
#### Implement the forward pass here ####
# Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer
self.final_outputs = final_inputs # signals from final output layer
return {'final': self.final_outputs, 'hidden': hidden_outputs}
def backward_pass(self, targets, outputs, inputs):
#### Implement the backward pass here ####
### Backward pass ###
# Output error
output_errors = targets - outputs['final'] # Output layer error is the difference between desired target and actual output.
# Backpropagated error
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) # errors propagated to the hidden layer
hidden_grad = outputs['hidden'] * (1.0 - outputs['hidden']) #derivative of sigmoid
# Update the weights
self.weights_hidden_to_output += self.lr * np.dot(output_errors, outputs['hidden'].T) # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * np.dot(hidden_errors * hidden_grad, inputs.T) # update input-to-hidden weights with gradient descent step
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
outputs = self.forward_pass(inputs)
self.backward_pass(targets, outputs, inputs)
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
return self.forward_pass(inputs)['final']
def MSE(y, Y):
return np.mean((y-Y)**2)
import sys
### Set the hyperparameters here ###
epochs = 500
learning_rate = 0.005
hidden_nodes = 10
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
print("\nTraining complete!")
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=1.5)
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The core tables in the data warehouse are derived from 5 separate core operational systems (each with many tables)
Step2: Question
Step3: Question
Step4: Previewing sample rows of data values
Step5: A note on our data
Step6: Running the first benchmark test
Step7: It should execute in just a few seconds. Then try running it again and see if you get the same performance. BigQuery will automatically cache the results from the first time you ran the query and then serve those same results to you when you can the query again. We can confirm this by analyzing the query job statistics.
Step9: Looking at the job statistics we can see our most recent query hit cache
Step11: 132 GB will be processed. At the time of writing, BigQuery pricing is \$5 per 1 TB (or 1000 GB) of data after the first free 1 TB each month. Assuming we've exhausted our 1 TB free this month, this would be \$0.66 to run.
Step12: If you're an experienced BigQuery user, you likely have seen these same metrics in the Web UI as well as highlighted in the red box below
Step13: Use the BigQuery Data Transfer Service to copy an existing dataset
Step14: Setup an automated test
Step15: Viewing the benchmark results
Step16: And finally, the overall statistics for the entire test
Step17: Benchmarking all 99 queries
Step18: And the results of the complete test
| <ASSISTANT_TASK:>
Python Code:
%%bigquery
SELECT
dataset_id,
table_id,
-- Convert bytes to GB.
ROUND(size_bytes/pow(10,9),2) as size_gb,
-- Convert UNIX EPOCH to a timestamp.
TIMESTAMP_MILLIS(creation_time) AS creation_time,
TIMESTAMP_MILLIS(last_modified_time) as last_modified_time,
row_count,
CASE
WHEN type = 1 THEN 'table'
WHEN type = 2 THEN 'view'
ELSE NULL
END AS type
FROM
`qwiklabs-resources.tpcds_2t_baseline.__TABLES__`
ORDER BY size_gb DESC
%%bigquery
SELECT * FROM
`qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
%%bigquery
SELECT * FROM
`qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
WHERE
is_partitioning_column = 'YES' OR clustering_ordinal_position IS NOT NULL
%%bigquery
SELECT
COUNT(column_name) AS column_count,
table_name
FROM
`qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`
GROUP BY table_name
ORDER BY column_count DESC, table_name
%%bigquery --verbose
SELECT
cs_item_sk,
COUNT(cs_order_number) AS total_orders,
SUM(cs_quantity) AS total_quantity,
SUM(cs_ext_sales_price) AS total_revenue,
SUM(cs_net_profit) AS total_profit
FROM
`qwiklabs-resources.tpcds_2t_baseline.catalog_sales`
GROUP BY
cs_item_sk
ORDER BY
total_orders DESC
LIMIT
100
!head --lines=50 'sql/example_baseline_queries.sql'
%%bigquery --verbose
# start query 1 in stream 0 using template query96.tpl
select count(*)
from `qwiklabs-resources.tpcds_2t_baseline.store_sales` as store_sales
,`qwiklabs-resources.tpcds_2t_baseline.household_demographics` as household_demographics
,`qwiklabs-resources.tpcds_2t_baseline.time_dim` as time_dim,
`qwiklabs-resources.tpcds_2t_baseline.store` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;
!bq ls -j -a -n 5
!bq show --format=prettyjson -j 612a4b28-cb5c-4e0b-ad5b-ebd51c3b2439
%%bash
bq query \
--dry_run \
--nouse_cache \
--use_legacy_sql=false \
\
select count(*)
from \`qwiklabs-resources.tpcds_2t_baseline.store_sales\` as store_sales
,\`qwiklabs-resources.tpcds_2t_baseline.household_demographics\` as household_demographics
,\`qwiklabs-resources.tpcds_2t_baseline.time_dim\` as time_dim, \`qwiklabs-resources.tpcds_2t_baseline.store\` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;
# Convert bytes to GB
132086388641 / 1e+9
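# Rough cost estimate for the dry run above (illustrative only; uses the $5 per TB
# on-demand price quoted in the text and assumes no remaining free tier).
bytes_processed = 132086388641
estimated_cost_usd = bytes_processed / 1e12 * 5
print(round(estimated_cost_usd, 2))  # ~0.66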
%%bash
bq query \
--nouse_cache \
--use_legacy_sql=false \
\
select count(*)
from \`qwiklabs-resources.tpcds_2t_baseline.store_sales\` as store_sales
,\`qwiklabs-resources.tpcds_2t_baseline.household_demographics\` as household_demographics
,\`qwiklabs-resources.tpcds_2t_baseline.time_dim\` as time_dim, \`qwiklabs-resources.tpcds_2t_baseline.store\` as store
where ss_sold_time_sk = time_dim.t_time_sk
and ss_hdemo_sk = household_demographics.hd_demo_sk
and ss_store_sk = s_store_sk
and time_dim.t_hour = 8
and time_dim.t_minute >= 30
and household_demographics.hd_dep_count = 5
and store.s_store_name = 'ese'
order by count(*)
limit 100;
%%bash
export PROJECT_ID=$(gcloud config list --format 'value(core.project)')
export BENCHMARK_DATASET_NAME=tpcds_2t_baseline # Name of the dataset you want to create
## Create a BigQuery dataset for tpcds_2t_flat_part_clust if it doesn't exist
datasetexists=$(bq ls -d | grep -w $BENCHMARK_DATASET_NAME)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset $BENCHMARK_DATASET_NAME already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: $BENCHMARK_DATASET_NAME"
bq --location=US mk --dataset \
--description 'Benchmark Dataset' \
      $PROJECT_ID:$BENCHMARK_DATASET_NAME
echo "\nHere are your current datasets:"
bq ls
fi
%%bigquery
SELECT COUNT(*) AS store_transaction_count
FROM tpcds_2t_baseline.store_sales
%%bash
# runs the SQL queries from the TPCDS benchmark
# Pull the current Google Cloud Platform project name
BQ_DATASET="tpcds_2t_baseline" # let's start by benchmarking our baseline dataset
QUERY_FILE_PATH="./sql/example_baseline_queries.sql" # the full test is on 99_baseline_queries but that will take 80+ mins to run
IFS=";"
# create perf table to keep track of run times for all 99 queries
printf "\033[32;1m Housekeeping tasks... \033[0m\n\n";
printf "Creating a reporting table perf to track how fast each query runs...";
perf_table_ddl="CREATE TABLE IF NOT EXISTS $BQ_DATASET.perf(performance_test_num int64, query_num int64, elapsed_time_sec int64, ran_on int64)"
bq rm -f $BQ_DATASET.perf
bq query --nouse_legacy_sql $perf_table_ddl
start=$(date +%s)
index=0
for select_stmt in $(<$QUERY_FILE_PATH)
do
# run the test until you hit a line with the string 'END OF BENCHMARK' in the file
if [[ "$select_stmt" == *'END OF BENCHMARK'* ]]; then
break
fi
printf "\n\033[32;1m Let's benchmark this query... \033[0m\n";
printf "$select_stmt";
SECONDS=0;
bq query --use_cache=false --nouse_legacy_sql $select_stmt # critical to turn cache off for this test
duration=$SECONDS
# get current timestamp in milliseconds
ran_on=$(date +%s)
index=$((index+1))
printf "\n\033[32;1m Here's how long it took... \033[0m\n\n";
echo "Query $index ran in $(($duration / 60)) minutes and $(($duration % 60)) seconds."
printf "\n\033[32;1m Writing to our benchmark table... \033[0m\n\n";
insert_stmt="insert into $BQ_DATASET.perf(performance_test_num, query_num, elapsed_time_sec, ran_on) values($start, $index, $duration, $ran_on)"
printf "$insert_stmt"
bq query --nouse_legacy_sql $insert_stmt
done
end=$(date +%s)
printf "Benchmark test complete"
%%bigquery
SELECT * FROM tpcds_2t_baseline.perf
WHERE
# Let's only pull the results from our most recent test
performance_test_num = (SELECT MAX(performance_test_num) FROM tpcds_2t_baseline.perf)
ORDER BY ran_on
%%bigquery
SELECT
TIMESTAMP_SECONDS(MAX(performance_test_num)) AS test_date,
MAX(performance_test_num) AS latest_performance_test_num,
COUNT(DISTINCT query_num) AS count_queries_benchmarked,
SUM(elapsed_time_sec) AS total_time_sec,
MIN(elapsed_time_sec) AS fastest_query_time_sec,
MAX(elapsed_time_sec) AS slowest_query_time_sec
FROM
tpcds_2t_baseline.perf
WHERE
performance_test_num = (SELECT MAX(performance_test_num) FROM tpcds_2t_baseline.perf)
%%bigquery
SELECT
TIMESTAMP_SECONDS(performance_test_num) AS test_date,
query_num,
TIMESTAMP_SECONDS(ran_on) AS query_ran_on,
TIMESTAMP_SECONDS(ran_on + elapsed_time_sec) AS query_completed_on,
elapsed_time_sec
FROM `qwiklabs-resources.tpcds_2t_baseline.perf` # public table
WHERE
# Let's only pull the results from our most recent test
performance_test_num = (SELECT MAX(performance_test_num) FROM `qwiklabs-resources.tpcds_2t_baseline.perf`)
ORDER BY ran_on
%%bigquery
SELECT
TIMESTAMP_SECONDS(MAX(performance_test_num)) AS test_date,
COUNT(DISTINCT query_num) AS count_queries_benchmarked,
SUM(elapsed_time_sec) AS total_time_sec,
ROUND(SUM(elapsed_time_sec)/60,2) AS total_time_min,
MIN(elapsed_time_sec) AS fastest_query_time_sec,
MAX(elapsed_time_sec) AS slowest_query_time_sec,
ROUND(AVG(elapsed_time_sec),2) AS avg_query_time_sec
FROM
`qwiklabs-resources.tpcds_2t_baseline.perf`
WHERE
performance_test_num = (SELECT MAX(performance_test_num) FROM `qwiklabs-resources.tpcds_2t_baseline.perf`)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='variables'></a>
Step2: <a id='strings'></a>
Step3: <a id='lists'></a>
Step4: <a id='tricks'></a>
Step5: <a id='list_methods'></a>
| <ASSISTANT_TASK:>
Python Code:
# Addition
2+5
# Let's have Python report the results from three operations at the same time
print(2-5)
print(2*5)
print(2/5)
# If we have all of our operations in the last line of the cell, Jupyter will print them together
2-5, 2*5, 2/5
# And let's compare values
2>5
# 'a' is being given the value 2; 'b' is given 5
a = 2
b = 5
# Let's perform an operation on the variables
a+b
# Variables can have many different kinds of names
this_number = 2
b/this_number
# The iconic string
print("Hello, World!")
# Assign these strings to variables
a = "Hello"
b = 'World'
# Try out arithmetic operations.
# When we add strings we call it 'concatenation'
print(a+" "+b)
print(a*5)
# Unlike a number that consists of a single value, a string is an ordered
# sequence of characters. We can find out the length of that sequence.
len("Hello, World!")
## EX. How long is the string below?
this_string = "It was the best of times; it was the worst of times."
len(this_string)
# Let's assign a couple lists to variables
list1 = ['Call', 'me', 'Ishmael']
list2 = ['In', 'the', 'beginning']
## Q. Predict what will happen when we perform the following operations
print(list1+list2)
print(list1*5)
# As with a string, we can find out the length of a list
len(list1)
# Sometimes we just want a single value from the list at a time
print(list1[0])
print(list1[1])
print(list1[2])
# Or maybe we want the first few
print(list1[0:2])
print(list1[:2])
# Of course, lists can contain numbers or even a mix of numbers and strings
list3 = [7,8,9]
list4 = [7,'ate',9]
# And python is smart with numbers, so we can add them easily!
sum(list3)
## EX. Concatenate 'list1' and 'list2' into a single list.
## Retrieve the third element from the combined list.
## Retrieve the fourth through sixth elements from the combined list.
new_list = list1+list2
new_list[3:]
# Let's assign a variable to perform methods upon
greeting = "Hello, World!"
# We saw the 'endswith' method at the very beginning
# Note the type of output that gets printed
greeting.startswith('H'), greeting.endswith('d')
# We can check whether the string is a letter or a number
this_string = 'f'
this_string.isalpha()
# When there are multiple characters, it checks whether *all*
# of the characters belong to that category
greeting.isalpha(), greeting.isdigit()
# Similarly, we can check whether the string is lower or upper case
greeting.islower(), greeting.isupper(), greeting.istitle()
# Sometimes we want not just to check, but to change the string
greeting.lower(), greeting.upper()
# The case of the string hasn't changed!
greeting
# But if we want to permanently make it lower case we re-assign it
greeting = greeting.lower()
greeting
# Oh hey. And strings are kind of like lists, so we can slice them similarly
greeting[:3]
# Strings may be like lists of characters, but as humans we often treat them as
# lists of words. We tell the computer to can perform that conversion.
greeting.split()
## EX. Return the second through eighth characters in 'greeting'
## EX. Split the string below into a list of words and assign this to a new variable
## Note: A slash at the end of a line allows a string to continue unbroken onto the next
new_string = "It seems very strange that one must turn back, \
and be transported to the very beginnings of history, \
in order to arrive at an understanding of humanity as it is at present."
print(greeting[1:8])
new_string_list = new_string.split()
new_string_list
# 'list1' had contained three words, two of which were in title case.
# We can automatically return those words using a list comprehension
[word for word in list1 if word.istitle()]
# Or we can include all the words in the list but just take their first letters
[word[0] for word in list1]
for word in list1:
print(word[0])
## EX. Using the list of words you produced by splitting 'new_string', create
## a new list that contains only the words whose last letter is "y"
y_list = [word for word in new_string_list if word.endswith('y')]
print(y_list)
## EX. Create a new list that contains the first letter of each word.
first_letter = [word[0] for word in new_string_list]
print(first_letter)
## EX. Create a new list that contains only words longer than two letters.
long_words = [word for word in new_string_list if len(word)>2]
print(long_words)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load software and filenames definitions
Step2: Data folder
Step3: List of data files
Step4: Data load
Step5: Laser alternation selection
Step6: We need to define some parameters
Step7: We should check if everithing is OK with an alternation histogram
Step8: If the plot looks good we can apply the parameters with
Step9: Measurements infos
Step10: Or check the measurements duration
Step11: Compute background
Step12: Burst search and selection
Step14: Donor Leakage fit
Step15: Gaussian Fit
Step16: KDE maximum
Step17: Leakage summary
Step18: Burst size distribution
Step19: Fret fit
Step20: Weighted mean of $E$ of each burst
Step21: Gaussian fit (no weights)
Step22: Gaussian fit (using burst size as weights)
Step23: Stoichiometry fit
Step24: The Maximum likelihood fit for a Gaussian population is the mean
Step25: Computing the weighted mean and weighted standard deviation we get
Step26: Save data to file
Step27: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step28: This is just a trick to format the different variables
| <ASSISTANT_TASK:>
Python Code:
ph_sel_name = "all-ph"
data_id = "12d"
# ph_sel_name = "all-ph"
# data_id = "7d"
from fretbursts import *
init_notebook()
from IPython.display import display
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 datatset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
ph_sel_map = {'all-ph': Ph_sel('all'), 'Dex': Ph_sel(Dex='DAem'),
'DexDem': Ph_sel(Dex='Dem')}
ph_sel = ph_sel_map[ph_sel_name]
data_id, ph_sel_name
d = loader.photon_hdf5(filename=files_dict[data_id])
d.ph_times_t, d.det_t
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
plot_alternation_hist(d)
loader.alex_apply_period(d)
d
d.time_max
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
bs_kws = dict(L=10, m=10, F=7, ph_sel=ph_sel)
d.burst_search(**bs_kws)
th1 = 30
ds = d.select_bursts(select_bursts.size, th1=30)
bursts = (bext.burst_data(ds, include_bg=True, include_ph_index=True)
.round({'E': 6, 'S': 6, 'bg_d': 3, 'bg_a': 3, 'bg_aa': 3, 'nd': 3, 'na': 3, 'naa': 3, 'nda': 3, 'nt': 3, 'width_ms': 4}))
bursts.head()
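# Hedged peek (illustrative only, not part of the original analysis): quick summary
# statistics of the burst-wise FRET efficiency column produced by bext.burst_data.
print(bursts['E'].describe())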
burst_fname = ('results/bursts_usALEX_{sample}_{ph_sel}_F{F:.1f}_m{m}_size{th}.csv'
.format(sample=data_id, th=th1, **bs_kws))
burst_fname
bursts.to_csv(burst_fname)
assert d.dir_ex == 0
assert d.leakage == 0
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print ('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
def hsm_mode(s):
    """Half-sample mode (HSM) estimator of `s`.
    `s` is a sample from a continuous distribution with a single peak.
    Reference:
        Bickel, Fruehwirth (2005). arXiv:math/0505419
    """
s = memoryview(np.sort(s))
i1 = 0
i2 = len(s)
while i2 - i1 > 3:
n = (i2 - i1) // 2
w = [s[n-1+i+i1] - s[i+i1] for i in range(n)]
i1 = w.index(min(w)) + i1
i2 = i1 + n
if i2 - i1 == 3:
if s[i1+1] - s[i1] < s[i2] - s[i1 + 1]:
i2 -= 1
elif s[i1+1] - s[i1] > s[i2] - s[i1 + 1]:
i1 += 1
else:
i1 = i2 = i1 + 1
return 0.5*(s[i1] + s[i2])
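# Hedged sanity check (not part of the original analysis): for a unimodal sample the
# half-sample mode should land near the peak, e.g. close to 0 for a standard normal.
print('HSM of a standard normal sample:', hsm_mode(np.random.normal(0., 1., 5000)))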
E_pr_do_hsm = hsm_mode(ds_do.E[0])
print ("%s: E_peak(HSM) = %.2f%%" % (ds.ph_sel, E_pr_do_hsm*100))
E_fitter = bext.bursts_fitter(ds_do, weights=None)
E_fitter.histogram(bins=np.arange(-0.2, 1, 0.03))
E_fitter.fit_histogram(model=mfit.factory_gaussian())
E_fitter.params
res = E_fitter.fit_res[0]
res.params.pretty_print()
E_pr_do_gauss = res.best_values['center']
E_pr_do_gauss
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_fitter.calc_kde(bandwidth=bandwidth)
E_fitter.find_kde_max(E_ax, xmin=E_range_do[0], xmax=E_range_do[1])
E_pr_do_kde = E_fitter.kde_max_pos[0]
E_pr_do_kde
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, plot_model=False)
plt.axvline(E_pr_do_hsm, color='m', label='HSM')
plt.axvline(E_pr_do_gauss, color='k', label='Gauss')
plt.axvline(E_pr_do_kde, color='r', label='KDE')
plt.xlim(0, 0.3)
plt.legend()
print('Gauss: %.2f%%\n KDE: %.2f%%\n HSM: %.2f%%' %
(E_pr_do_gauss*100, E_pr_do_kde*100, E_pr_do_hsm*100))
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))
E_fitter.fit_res[0].params.pretty_print()
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
ds_fret.fit_E_m(weights='size')
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr
S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
sample = data_id
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
'E_pr_do_kde E_pr_do_hsm E_pr_do_gauss nt_mean\n')
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-%s.csv' % ph_sel_name, 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: EBLUP Predictor
Step2: Now that the model has been fitted, we can obtain the EBLUP average expenditure on milk by running predict(), which is a method of the EblupAreaModel class. This run produces two main attributes, area_est and area_mse, which are Python dictionaries pairing the small areas with the EBLUP estimates and the MSE estimates, respectively.
Step3: We can use the utility method to_dataframe() to output the estimates as a pandas dataframe. The function provides the area, the estimate and its MSE estimate. We can use col_names to customize the column names, for example col_names = ["small_area", "eblup_estimate", "eblup_mse"]. If col_names is not provided, "_area", "_estimates" and "_mse" are used as defaults.
Step4: We could also fit the model parameters using the maximum likelihood (ML) method, which will impact the MSE estimation as well. To estimate the area means using the ML method, we only need to set method="ML" and then run the prediction as follows.
Step5: Similarly, we can use the Fay-Herriot method as follows
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import samplics
from samplics.datasets import ExpenditureMilk
from samplics.sae import EblupAreaModel
# Load Expenditure on Milk sample data
milk_exp_cls = ExpenditureMilk()
milk_exp_cls.load_data()
milk_exp = milk_exp_cls.data
nb_obs = 15
print(f"\nFirst {nb_obs} observations of the Milk Expendure dataset\n")
milk_exp.tail(nb_obs)
area = milk_exp["small_area"]
yhat = milk_exp["direct_est"]
X = pd.get_dummies(milk_exp["major_area"],drop_first=True)
sigma_e = milk_exp["std_error"]
## REML method
fh_model_reml = EblupAreaModel(method="REML")
fh_model_reml.fit(
yhat=yhat, X=X, area=area, error_std=sigma_e, intercept=True, tol=1e-8,
)
print(f"\nThe estimated fixed effects are: {fh_model_reml.fixed_effects}")
print(f"\nThe estimated standard error of the area random effects is: {fh_model_reml.re_std}")
print(f"\nThe convergence statistics are: {fh_model_reml.convergence}")
print(f"\nThe goodness of fit statistics are: {fh_model_reml.goodness}\n")
fh_model_reml.predict(
X=X, area=area, intercept=True
)
import pprint
pprint.pprint(fh_model_reml.area_est)
milk_est_reml = fh_model_reml.to_dataframe(col_names = ["small_area", "eblup_estimate", "eblup_mse"])
print(f"\nThe dataframe version of the area level estimates:\n\n {milk_est_reml}")
## ML method
fh_model_ml = EblupAreaModel(method="ML")
fh_model_ml.fit(
yhat=yhat, X=X, area=area, error_std=sigma_e, intercept=True, tol=1e-8,
)
milk_est_ml = fh_model_ml.predict(
X=X, area=area, intercept=True
)
milk_est_ml = fh_model_ml.to_dataframe(col_names = ["small_area", "eblup_estimate", "eblup_mse"])
print(f"\nThe dataframe version of the ML area level estimates:\n\n {milk_est_ml}")
## FH method
fh_model_fh = EblupAreaModel(method="FH")
fh_model_fh.fit(
yhat=yhat, X=X, area=area, error_std=sigma_e, intercept=True, tol=1e-8,
)
milk_est_fh = fh_model_fh.predict(
X=X, area=area, intercept=True
)
milk_est_fh = fh_model_fh.to_dataframe(col_names = ["small_area", "eblup_estimate", "eblup_mse"])
print(f"\nThe dataframe version of the ML area level estimates:\n\n {milk_est_fh}")
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: cf. Examples for solve_ivp
Step2: An example for unit tests
Step3: Consider example
Step4: We first solve this problem using RK4 with $h = 0.5$ and from $t=0$ to $t=2$.
Step5: Step 2 $\quad \, x_2 = 1$
Step6: Calculated values for unit tests
Step7: Embedded Formulas of Order 5
Step8: Coefficients for dense output
Step9: Hermite Interpolation
Step10: cf. pp. 916 17.2.2 Dense Output, Ch. 17. Integration of Ordinary Differential Equations of Numerical Recipes, 3rd Ed., and
| <ASSISTANT_TASK:>
Python Code:
from pathlib import Path
import sys
notebook_directory_parent = Path.cwd().resolve().parent.parent
if str(notebook_directory_parent) not in sys.path:
sys.path.append(str(notebook_directory_parent))
%matplotlib inline
import numpy as np
import scipy
import sympy
from numpy import linspace
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
from T1000.numerical.RungeKuttaMethod import RungeKuttaMethod
from T1000.numerical.RKMethods.bCoefficients import bCoefficients
from T1000.numerical.RKMethods.DOPRI5Coefficients import DOPRI5Coefficients
def exponential_decay(t, y):
return -0.5 * y
sol = solve_ivp(exponential_decay, [0, 10], [2, 4, 8])
print("t\n", sol.t)
print("y\n", sol.y)
cs_for_rk4 = [0.5, 0.5, 1]
as_for_rk4 = [0.5, 0, 0.5, 0, 0, 1]
bs_for_rk4 = [1/6., 1/3., 1/3., 1/6.]
rk4 = RungeKuttaMethod(4, cs_for_rk4, as_for_rk4, bs_for_rk4)
def derivative(x, y):
return y - np.power(x, 2) + 1.0
def exact_y(x):
return np.power(x, 2) + 2.0 * x + 1.0 - 0.5 * np.exp(x)
ks0 = rk4._calculate_k_coefficients(0.5, 0.0, 0.5, derivative)
print(ks0)
y1 = rk4.calculate_next_step(0.5, 0.0, 0.5, derivative)
print(y1)
print(exact_y(0))
print(exact_y(0.5))
ks1 = rk4._calculate_k_coefficients(y1, 0.5, 0.5, derivative)
print(ks1)
y2 = rk4.calculate_next_step(y1, 0.5, 0.5, derivative)
print(y2)
print(exact_y(1.))
alpha_for_rk4 = [0.5, 0.5, 1.]
beta_for_rk4 = [0.5, 0., 0.5, 0., 0., 1.]
c_for_rk4 = [1./6., 1./3., 1./3., 1./6.]
rk4 = RungeKuttaMethod(4, alpha_for_rk4, beta_for_rk4, c_for_rk4)
m = 4
print(rk4._beta_coefficients)
for i in range(2, m + 1):
for j in range(1, i):
print(i, " ", j, " ", rk4.get_beta_ij(i, j))
print(rk4._alpha_coefficients)
for i in range(2, m + 1):
print(i, " ", rk4.get_alpha_i(i))
print(rk4._c_coefficients)
for i in range(1, m + 1):
print(i, " ", rk4.get_c_i(i))
x_n = [np.array([2, 4, 8]), np.array([1.88836035, 3.7767207, 7.5534414])]
t_n = [0., 0.11487653, 1.26364188]
kresult = rk4._calculate_k_coefficients(x_n[0], t_n[0], t_n[1] - t_n[0], exponential_decay)
print(kresult)
result1 = rk4.calculate_next_step(x_n[0], t_n[0], t_n[1] - t_n[0], exponential_decay)
print(result1)
result2 = rk4.calculate_next_step(x_n[1], t_n[1], t_n[2] - t_n[1], exponential_decay)
print(result2)
[1, 2, 3] + [ 4, 5, 6]
np.array([2])
# print(DOPRI5Coefficients.b_coefficients.get_ith_element(1))
deltas = []
for i in range(1, 8):
delta = \
DOPRI5Coefficients.b_coefficients.get_ith_element(i) - \
DOPRI5Coefficients.bstar_coefficients.get_ith_element(i)
deltas.append(delta)
if (type(delta) != int):
print(delta.simplify())
else:
print(delta)
bs = DOPRI5Coefficients.b_coefficients
ds = DOPRI5Coefficients.dense_output_coefficients
cstars = DOPRI5Coefficients.cstar_coefficients
-2 * (1 + 4 * bs.get_ith_element(1) - 4 * cstars.get_ith_element(1))
from sympy import Matrix, Symbol, symbols, pprint
theta, y_n, y_np1, f_n, f_np1, h = symbols("theta y_n y_np1 f_n f_np1 h")
hermite_interpolation = (1 - theta) * y_n + theta * y_np1 + \
theta * (theta - 1) * ((1 - 2 * theta) * (y_np1 - y_n) + (theta - 1) * h *f_n + theta * h * f_np1)
pprint(hermite_interpolation)
pprint(hermite_interpolation.expand())
pprint(sympy.collect(hermite_interpolation.expand(), y_n, evaluate=False)[y_n])
pprint(sympy.collect(hermite_interpolation.expand(), y_np1, evaluate=False)[y_np1])
pprint(sympy.collect(hermite_interpolation.expand(), f_n, evaluate=False)[f_n])
pprint(sympy.collect(hermite_interpolation.expand(), f_np1, evaluate=False)[f_np1])
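# Hedged numeric sketch (not from the original derivation): evaluate the cubic Hermite
# interpolant above for dense output inside one accepted step. y_n, y_np1 are the
# solution values at the step ends, f_n, f_np1 the slopes, h the step size, and
# theta in [0, 1] the relative position inside the step.
def hermite_dense_output(theta, y_n, y_np1, f_n, f_np1, h):
    return ((1 - theta) * y_n + theta * y_np1
            + theta * (theta - 1) * ((1 - 2 * theta) * (y_np1 - y_n)
                                     + (theta - 1) * h * f_n
                                     + theta * h * f_np1))

# Example: y' = y with y_n = 1 over h = 0.1; the midpoint value should be ~exp(0.05).
print(hermite_dense_output(0.5, 1.0, np.exp(0.1), 1.0, np.exp(0.1), 0.1), np.exp(0.05))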
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Corpus acquisition
Step2: 1.1.2. Parsing XML
Step3: or directly reading a string
Step4: fromstring() parses XML from a string directly into an Element, which is the root element of the parsed tree. Other parsing functions may create an ElementTree, but we will not cover them here.
Step5: It also has children nodes over which we can iterate
Step6: Children are nested, and we can access specific child nodes by index. We can also access the text of specified elements. For instance
Step8: The presented classes and functions are all you need to solve the following exercise. However, there are many other interesting functions that can probably make it easier for you to work with XML files. For more information, please refer to the ElementTree API.
Step9: 1.2. Building the dataset
Step10: We will extract some characteristics of the constructed dataset
Step11: Exercise
Step12: 2. Corpus Processing
Step13: We will create a list that contains just the abstracts in the dataset. As the order of the elements in a list is fixed, it will be later straightforward to match the processed abstracts to metadata associated to their corresponding projects.
Step14: 2.2. Homogeneization
Step15: 2.2.1. Stemming vs Lemmatization
Step16: One of the advantages of the lemmatizer method is that the result of lemmmatization is still a true word, which is more advisable for the presentation of text processing results and lemmatization.
Step17: 2.3. Cleaning
Step18: 2.4. Vectorization
Step19: We can also filter out terms that appear in too few or too many of the documents in the dataset
Step20: In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transforms any list of tokens into a list of tuples (token_id, n), one per each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list.
Step21: At this point, it is good to make sure to understand what has happened. In clean_abstracts we had a list of token lists. With it, we have constructed a Dictionary, D, which assigns an integer identifier to each token in the corpus.
Step22: Note that we can interpret each element of corpus_bow as a sparse_vector. For example, a list of tuples
Step23: 3. Topic Modeling
Step24: 3.2. LDA model visualization
Step25: A more useful visualization is provided by the python LDA visualization library, pyLDAvis.
Step26: 3.3. Gensim utility functions
Step27: An alternative to the use of the get_document_topics() function is to directly transform a dataset using the ldag object as follows. You can apply this transformation to several documents at once, but then the result is an iterator from which you can build the corresponding list if necessary
Step28: Finally, Gensim provides some useful functions to convert between formats, and to simplify interaction with numpy and scipy. The following code fragment converts a corpus in sparse format to a full numpy matrix
Step30: Exercise
Step32: Exercise
Step33: Function that creates the Node CSV file for Gephi
| <ASSISTANT_TASK:>
Python Code:
# Common imports
import numpy as np
# import pandas as pd
# import os
from os.path import isfile, join
# import scipy.io as sio
# import scipy
import zipfile as zp
# import shutil
# import difflib
xmlfile = '../data/1600057.xml'
with open(xmlfile,'r') as fin:
print(fin.read())
import xml.etree.ElementTree as ET
tree = ET.parse(xmlfile)
root = tree.getroot()
root = ET.fromstring(open(xmlfile,'r').read())
print(root.tag)
print(root.attrib)
for child in root:
print(child.tag, child.attrib)
for child in root[0]:
print(child.tag, child.attrib, child.text)
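# Hedged illustration (element positions depend on the actual NSF XML layout): nested
# children can also be addressed by index, and their text read directly.
first_child = root[0][0]
print(first_child.tag, '->', first_child.text)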
def parse_xmlproject(xml_string):
    """This function processes the specified XML field,
    and outputs a dictionary with the desired project information
    :xml_string: String with XML content
    :Returns: Dictionary with indicated files
    """
#<SOL>
#</SOL>
parse_xmlproject(open(xmlfile,'r').read())
# Construct an iterator (or a list) for the years you want to work with
years = range(2015,2017)
datafiles_path = '../data/'
NSF_data = []
for year in years:
zpobj = zp.ZipFile(join(datafiles_path, str(year)+'.zip'))
for fileinzip in zpobj.namelist():
if fileinzip.endswith('xml'):
#Some files seem to be incorrectly parsed
try:
project_dictio = parse_xmlproject(zpobj.read(fileinzip))
if project_dictio['abstract']:
NSF_data.append(project_dictio)
except:
pass
print('Number of projects in dataset:', len(NSF_data))
####
budget_data = list(map(lambda x: x['budget'], NSF_data))
print('Average budget of projects in dataset:', np.mean(budget_data))
####
insti_data = list(map(lambda x: x['institution'], NSF_data))
print('Number of unique institutions in dataset:', len(set(insti_data)))
####
counts = dict()
for project in NSF_data:
counts[project['year']] = counts.get(project['year'],0) + 1
print('Breakdown of projects by starting year:')
for el in counts:
print(el, ':', counts[el])
#<SOL>
#</SOL>
import nltk
# You should comment this code fragment if the package is already available.
# Select option "d) Download", and identifier "punkt"
# nltk.download()
from nltk.tokenize import word_tokenize
NSF_abstracts = list(map(lambda x: x['abstract'], NSF_data))
tokenized_abstracts = []
nprojects = len(NSF_abstracts)
for n, abstract in enumerate(NSF_abstracts):
if not n%100:
print('\rTokenizing abstract', n, 'out of', nprojects, end='', flush=True)
tokenized_abstracts.append(word_tokenize(abstract))
print('\n\n The corpus has been tokenized. Check the result for the first abstract:')
print(NSF_abstracts[0])
print(tokenized_abstracts[0])
filtered_abstracts = []
for n, abstract in enumerate(tokenized_abstracts):
if not n%100:
print('\rFiltering abstract', n, 'out of', nprojects, end='', flush=True)
#<SOL>
#</SOL>
print('\n',filtered_abstracts[0])
stemmer = nltk.stem.SnowballStemmer('english')
from nltk.stem import WordNetLemmatizer
wnl = WordNetLemmatizer()
print('Result for the first abstract in dataset applying stemming')
print([stemmer.stem(el) for el in filtered_abstracts[0]])
print('Result for the first abstract in the dataset applying lemmatization')
print([wnl.lemmatize(el) for el in filtered_abstracts[0]])
lemmatized_abstracts = []
for n, abstract in enumerate(filtered_abstracts):
if not n%100:
print('\rLemmatizing abstract', n, 'out of', nprojects, end='', flush=True)
#<SOL>
#</SOL>
print('Result for the first abstract in the dataset applying lemmatization')
print('\n',lemmatized_abstracts[0])
from nltk.corpus import stopwords
stopwords_en = stopwords.words('english')
clean_abstracts = []
for n, abstract in enumerate(lemmatized_abstracts):
if not n%100:
print('\rCleaning abstract', n, 'out of', nprojects, end='', flush=True)
# Remove all tokens in the stopwords list and append the result to clean_abstracts
# <SOL>
# </SOL>
clean_abstracts.append(clean_tokens)
print('\n Let us check tokens after cleaning:')
print(clean_abstracts[0])
import gensim
# Create dictionary of tokens
D = gensim.corpora.Dictionary(clean_abstracts)
n_tokens = len(D)
print('The dictionary contains', n_tokens, 'terms')
print('First terms in the dictionary:')
for n in range(10):
print(str(n), ':', D[n])
no_below = 5 #Minimum number of documents to keep a term in the dictionary
no_above = .75 #Maximum proportion of documents in which a term can appear to be kept in the dictionary
D.filter_extremes(no_below=no_below,no_above=no_above, keep_n=25000)
n_tokens = len(D)
print('The dictionary contains', n_tokens, 'terms')
print('First terms in the dictionary:')
for n in range(10):
print(str(n), ':', D[n])
corpus_bow = [D.doc2bow(doc) for doc in clean_abstracts]
print('Original article (after cleaning):')
print(clean_abstracts[0])
print('Sparse vector representation (first 10 components):')
print(corpus_bow[0][:10])
print('Word counts for the first project (first 10 components):')
print(list(map(lambda x: (D[x[0]], x[1]), corpus_bow[0][:10])))
all_counts = [(D[el], D.dfs[el]) for el in D.dfs]
all_counts = sorted(all_counts, key=lambda x: x[1])
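# Hedged illustration (not part of the original exercise): all_counts is sorted by
# document frequency, so its ends show the rarest and most common surviving terms.
print('Least frequent terms:', all_counts[:5])
print('Most frequent terms:', all_counts[-5:])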
import gensim
num_topics = 50
ldag = gensim.models.ldamodel.LdaModel(corpus=corpus_bow, id2word=D, num_topics=num_topics)
ldag.print_topics(num_topics=-1, num_words=10)
import pyLDAvis.gensim as gensimvis
import pyLDAvis
vis_data = gensimvis.prepare(ldag, corpus_bow, D)
pyLDAvis.display(vis_data)
ldag.get_topic_terms(topicid=0)
ldag.get_document_topics(corpus_bow[0])
print(ldag[corpus_bow[0]])
print('When applied to a dataset it will provide an iterator')
print(ldag[corpus_bow[:3]])
print('We can rebuild the list from the iterator with a one liner')
print([el for el in ldag[corpus_bow[:3]]])
reduced_corpus = [el for el in ldag[corpus_bow[:3]]]
reduced_corpus = gensim.matutils.corpus2dense(reduced_corpus, num_topics).T
print(reduced_corpus)
def most_relevant_projects(ldag, topicid, corpus_bow, nprojects=10):
    """This function returns the most relevant projects in corpus_bow
    : ldag: The trained topic model object provided by gensim
    : topicid: The topic for which we want to find the most relevant documents
    : corpus_bow: The BoW representation of documents in Gensim format
    : nprojects: Number of most relevant projects to identify
    : Returns: A list with the identifiers of the most relevant projects
    """
print('Computing most relevant projects for Topic', topicid)
print('Topic composition is:')
print(ldag.show_topic(topicid))
#<SOL>
#</SOL>
#To test the function we will find the most relevant projects for a subset of the NSF dataset
project_id = most_relevant_projects(ldag, 17, corpus_bow[:10000])
#Print titles of selected projects
for idproject in project_id:
print(NSF_data[idproject]['title'])
def pairwase_dist(doc1, doc2):
    """This function returns the Jensen-Shannon
    distance between the corresponding vectors of the documents
    : doc1: Semantic representation for the doc1 (a vector of length ntopics)
    : doc2: Semantic representation for the doc2 (a vector of length ntopics)
    : Returns: The JS distance between doc1 and doc2 (a number)
    """
#<SOL>
#</SOL>
#print(NSF_data[0].keys())
#print(NSF_data[0]['institution'])
def strNone(str_to_convert):
if str_to_convert is None:
return ''
else:
return str_to_convert
with open('NSF_nodes.csv','w') as fout:
fout.write('Id;Title;Year;Budget;UnivName;UnivZIP;State\n')
for project in NSF_data:
fout.write(project['project_code']+';'+project['title']+';')
fout.write(project['year']+';'+str(project['budget'])+';')
fout.write(project['institution'][0]+';')
fout.write(strNone(project['institution'][1])+';')
fout.write(strNone(project['institution'][2])+'\n')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We have 5 coefficients. If we make the maximum ratio 10
Step2: Well, that's not very efficient. As the ratio increases, the computation takes an exponentially longer time. Checking up to 24
| <ASSISTANT_TASK:>
Python Code:
def combos(combo_min, combo_max, combo_len):
for combo in it.product(xrange(combo_min, combo_max + 1),
repeat=combo_len):
yield combo
def combo_dicts(param_names, combo_min, combo_max, combo_len):
    for d in (OrderedDict(it.izip(param_names, combo)) for combo in combos(combo_min, combo_max, combo_len)):
yield d
def reduction_percentage(max_ratio, n_slots=5):
all_combos = list(combos(1, max_ratio, n_slots))
return 100*(len(all_combos) - len(set(tuple(np.array(i) / max(i)) for i in all_combos))) / len(all_combos)
response = raw_input("This cell takes about 4 minutes to run. Press 'y' to continue: ").lower()
if response == "y":
x = np.arange(1, 25)
y = [reduction_percentage(i) for i in tqdm.tqdm_notebook(x)]
if response == "y":
plt.plot(x, y)
plt.title("Maximum iteration reduction")
plt.ylabel("Reduction (%)")
plt.xlabel("Maximum ratio")
combo_min = 1
combo_max = 10
combo_len = 5
param1_names = ["centrality", "direction", "distance", "direction_with_current", "distance_with_current"]
param2_names = ["centrality2", "direction", "distance", "direction_with_current", "distance_with_current"]
params1 = combo_dicts(param1_names, combo_min, combo_max, combo_len)
params2 = combo_dicts(param2_names, combo_min, combo_max, combo_len)
combo = next(params1)
import rospy
rospy.init_node("selector", log_level=rospy.INFO)
rospy.set_param("~ref_distance", 1.5)
rospy.set_param("~image_queue_length", 60)
rospy.set_param("~eval_method", "Spirit")
rospy.set_param("~thresh_distance", 0.01)
rospy.set_param("~thresh_yaw", 0.02)
from ..src.ros.spirit.src.past_image_selector import Selector
from ..src.ros.spirit.src.evaluators import Spirit
my_selector = Selector(debug=True)
spirit = Spirit(my_selector)
p = Pose.from_components([0, 1, 2], [3, 4, 5, 6])
f = Frame(Pose.generate_stamped([0, 1, 3], [3, 4, 5, 6]), 1)
# Create first frame
my_selector.tracked = True
my_selector.pose_callback(p.pose_stamped)
my_selector.image_callback(1)
# Select current frame
my_selector.pose_callback(p.pose_stamped)
# Update current frame
my_selector.pose_callback(p.pose_stamped)
my_selector.frames
spirit.select_best_frame()
spirit._evaluate_frame(p, f)
sum(coeff * getattr(spirit, fn)(p, f) for fn, coeff in combo.iteritems())
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'awi', 'sandbox-2', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TensorFlow Probability on JAX (TFP on JAX)
Step2: We can install TFP on JAX with the latest nightly builds of TFP.
Step3: Let's import some useful Python libraries.
Step4: Let's also import some basic JAX functionality.
Step5: Importing TFP on JAX
Step6: Demo
Step7: We can define the model using tfd.JointDistributionCoroutine. We'll put standard normal priors on both the weights and the bias term, then write a target_log_prob function that pins the sampled labels to the data.
Step8: We generate an initial state for MCMC by sampling from dist. We can then define a function that takes a random key and an initial state and produces 500 samples from a No-U-Turn-Sampler (NUTS). We can use JAX transformations like jit to compile the NUTS sampler with XLA.
Step9: Let's use the samples to perform Bayesian model averaging (BMA) by averaging the predicted probabilities of each set of weights.
Step10: We can run vmap(classifier_probs) over the set of samples to get the predicted class probabilities for each sample, then compute the average accuracy across samples and the accuracy from Bayesian model averaging.
Step11: It looks like BMA reduces our error rate by almost a third!
Step12: To sample from a distribution, we need to explicitly pass in a PRNGKey (or a list of integers) as the seed keyword argument. Failing to explicitly pass in a seed raises an error.
Step13: The shape semantics for distributions remain the same in JAX, where distributions each have an event_shape and a batch_shape, and drawing many samples adds extra sample_shape dimensions.
Step14: On the other hand, a tfd.Normal parameterized with vectors has a scalar event shape and a vector batch shape.
Step15: The semantics of taking the log_prob of samples works the same way in JAX too.
Step16: Because JAX DeviceArrays are compatible with libraries like NumPy and Matplotlib, we can feed samples directly into plotting functions.
Step17: Distribution methods are compatible with JAX transformations.
Step18: Because TFP distributions are registered as JAX pytree nodes, we can write functions with distributions as inputs or outputs and transform them using jit, but they are not yet supported as arguments to vmap-ed functions.
Step19: Transformed distributions
Step20: Joint distributions
Step21: Other distributions
Step22: Hidden Markov models are also supported.
Step23: A few distributions like PixelCNN are not supported yet due to strict dependencies on TensorFlow or XLA incompatibilities.
Step24: Bijectors are compatible with JAX transformations like jit, grad and vmap.
Step25: Some bijectors, like RealNVP and FFJORD, are not yet supported.
Step26: Unlike TFP on TF, we are required to pass a PRNGKey into sample_chain using the seed keyword argument.
Step27: To run multiple chains, we can either pass a batch of states into sample_chain or use vmap (we have not yet investigated performance differences between the two approaches).
Step28: Optimizers
Step29: BFGS can find the minimum of this loss.
Step30: So can L-BFGS.
Step31: To vmap L-BFGS, let's set up a function that optimizes the loss for a single starting point.
Step32: Caveats
Step33: In JAX, random functions use a key to deterministically produce random variates, so a key should not be reused. For example, we can use key to sample a normally distributed value, but we should not use key again elsewhere; furthermore, passing the same value into random.normal will produce the same value.
Step34: So how do we draw multiple samples from a single key? The answer is key splitting. The basic idea is that we can split a PRNGKey into several keys, and each of the new keys can be treated as an independent source of randomness.
Step35: Key splitting is deterministic but chaotic, so each new key can now be used to draw a distinct random sample.
| <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip uninstall tensorflow -y -q
!pip install -Uq tfp-nightly[jax] > /dev/null
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn import datasets
sns.set(style='white')
import jax.numpy as jnp
from jax import grad
from jax import jit
from jax import random
from jax import value_and_grad
from jax import vmap
from tensorflow_probability.substrates import jax as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
tfpk = tfp.math.psd_kernels
iris = datasets.load_iris()
features, labels = iris['data'], iris['target']
num_features = features.shape[-1]
num_classes = len(iris.target_names)
Root = tfd.JointDistributionCoroutine.Root
def model():
w = yield Root(tfd.Sample(tfd.Normal(0., 1.),
sample_shape=(num_features, num_classes)))
b = yield Root(
tfd.Sample(tfd.Normal(0., 1.), sample_shape=(num_classes,)))
logits = jnp.dot(features, w) + b
yield tfd.Independent(tfd.Categorical(logits=logits),
reinterpreted_batch_ndims=1)
dist = tfd.JointDistributionCoroutine(model)
def target_log_prob(*params):
return dist.log_prob(params + (labels,))
init_key, sample_key = random.split(random.PRNGKey(0))
init_params = tuple(dist.sample(seed=init_key)[:-1])
@jit
def run_chain(key, state):
kernel = tfp.mcmc.NoUTurnSampler(target_log_prob, 1e-3)
return tfp.mcmc.sample_chain(500,
current_state=state,
kernel=kernel,
trace_fn=lambda _, results: results.target_log_prob,
num_burnin_steps=500,
seed=key)
states, log_probs = run_chain(sample_key, init_params)
plt.figure()
plt.plot(log_probs)
plt.ylabel('Target Log Prob')
plt.xlabel('Iterations of NUTS')
plt.show()
def classifier_probs(params):
dists, _ = dist.sample_distributions(seed=random.PRNGKey(0),
value=params + (None,))
return dists[-1].distribution.probs_parameter()
all_probs = jit(vmap(classifier_probs))(states)
print('Average accuracy:', jnp.mean(all_probs.argmax(axis=-1) == labels))
print('BMA accuracy:', jnp.mean(all_probs.mean(axis=0).argmax(axis=-1) == labels))
dist = tfd.Normal(0., 1.)
print(dist.log_prob(0.))
tfd.Normal(0., 1.).sample(seed=random.PRNGKey(0))
dist = tfd.MultivariateNormalDiag(
loc=jnp.zeros(5),
scale_diag=jnp.ones(5)
)
print('Event shape:', dist.event_shape)
print('Batch shape:', dist.batch_shape)
dist = tfd.Normal(
loc=jnp.ones(5),
scale=jnp.ones(5),
)
print('Event shape:', dist.event_shape)
print('Batch shape:', dist.batch_shape)
dist = tfd.Normal(jnp.zeros(5), jnp.ones(5))
s = dist.sample(sample_shape=(10, 2), seed=random.PRNGKey(0))
print(dist.log_prob(s).shape)
dist = tfd.Independent(tfd.Normal(jnp.zeros(5), jnp.ones(5)), 1)
s = dist.sample(sample_shape=(10, 2), seed=random.PRNGKey(0))
print(dist.log_prob(s).shape)
sns.distplot(tfd.Normal(0., 1.).sample(1000, seed=random.PRNGKey(0)))
plt.show()
sns.distplot(jit(vmap(lambda key: tfd.Normal(0., 1.).sample(seed=key)))(
random.split(random.PRNGKey(0), 2000)))
plt.show()
x = jnp.linspace(-5., 5., 100)
plt.plot(x, jit(vmap(grad(tfd.Normal(0., 1.).prob)))(x))
plt.show()
@jit
def random_distribution(key):
loc_key, scale_key = random.split(key)
loc, log_scale = random.normal(loc_key), random.normal(scale_key)
return tfd.Normal(loc, jnp.exp(log_scale))
random_dist = random_distribution(random.PRNGKey(0))
print(random_dist.mean(), random_dist.variance())
dist = tfd.TransformedDistribution(
tfd.Normal(0., 1.),
tfb.Sigmoid()
)
sns.distplot(dist.sample(1000, seed=random.PRNGKey(0)))
plt.show()
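# Hedged check (illustrative only): log_prob on the transformed distribution accounts
# for the Sigmoid Jacobian, so densities on (0, 1) are valid.
print(dist.log_prob(0.5))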
dist = tfd.JointDistributionSequential([
tfd.Normal(0., 1.),
lambda x: tfd.Normal(x, 1e-1)
])
plt.scatter(*dist.sample(1000, seed=random.PRNGKey(0)), alpha=0.5)
plt.show()
joint = tfd.JointDistributionNamed(dict(
e= tfd.Exponential(rate=1.),
n= tfd.Normal(loc=0., scale=2.),
m=lambda n, e: tfd.Normal(loc=n, scale=e),
x=lambda m: tfd.Sample(tfd.Bernoulli(logits=m), 12),
))
joint.sample(seed=random.PRNGKey(0))
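# Hedged follow-up (illustrative only): joint samples are dict-like, and the same
# model can score them with log_prob.
joint_sample = joint.sample(seed=random.PRNGKey(1))
print(joint.log_prob(joint_sample))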
Root = tfd.JointDistributionCoroutine.Root
def model():
e = yield Root(tfd.Exponential(rate=1.))
n = yield Root(tfd.Normal(loc=0, scale=2.))
m = yield tfd.Normal(loc=n, scale=e)
x = yield tfd.Sample(tfd.Bernoulli(logits=m), 12)
joint = tfd.JointDistributionCoroutine(model)
joint.sample(seed=random.PRNGKey(0))
k1, k2, k3 = random.split(random.PRNGKey(0), 3)
observation_noise_variance = 0.01
f = lambda x: jnp.sin(10*x[..., 0]) * jnp.exp(-x[..., 0]**2)
observation_index_points = random.uniform(
k1, [50], minval=-1.,maxval= 1.)[..., jnp.newaxis]
observations = f(observation_index_points) + tfd.Normal(
loc=0., scale=jnp.sqrt(observation_noise_variance)).sample(seed=k2)
index_points = jnp.linspace(-1., 1., 100)[..., jnp.newaxis]
kernel = tfpk.ExponentiatedQuadratic(length_scale=0.1)
gprm = tfd.GaussianProcessRegressionModel(
kernel=kernel,
index_points=index_points,
observation_index_points=observation_index_points,
observations=observations,
observation_noise_variance=observation_noise_variance)
samples = gprm.sample(10, seed=k3)
for i in range(10):
plt.plot(index_points, samples[i], alpha=0.5)
plt.plot(observation_index_points, observations, marker='o', linestyle='')
plt.show()
initial_distribution = tfd.Categorical(probs=[0.8, 0.2])
transition_distribution = tfd.Categorical(probs=[[0.7, 0.3],
[0.2, 0.8]])
observation_distribution = tfd.Normal(loc=[0., 15.], scale=[5., 10.])
model = tfd.HiddenMarkovModel(
initial_distribution=initial_distribution,
transition_distribution=transition_distribution,
observation_distribution=observation_distribution,
num_steps=7)
print(model.mean())
print(model.log_prob(jnp.zeros(7)))
print(model.sample(seed=random.PRNGKey(0)))
tfb.Exp().inverse(1.)
bij = tfb.Shift(1.)(tfb.Scale(3.))
print(bij.forward(jnp.ones(5)))
print(bij.inverse(jnp.ones(5)))
b = tfb.FillScaleTriL(diag_bijector=tfb.Exp(), diag_shift=None)
print(b.forward(x=[0., 0., 0.]))
print(b.inverse(y=[[1., 0], [.5, 2]]))
b = tfb.Chain([tfb.Exp(), tfb.Softplus()])
# or:
# b = tfb.Exp()(tfb.Softplus())
print(b.forward(-jnp.ones(5)))
jit(vmap(tfb.Exp().inverse))(jnp.arange(4.))
x = jnp.linspace(0., 1., 100)
plt.plot(x, jit(grad(lambda x: vmap(tfb.Sigmoid().inverse)(x).sum()))(x))
plt.show()
target_log_prob = tfd.MultivariateNormalDiag(jnp.zeros(2), jnp.ones(2)).log_prob
def run_chain(key, state):
kernel = tfp.mcmc.NoUTurnSampler(target_log_prob, 1e-1)
return tfp.mcmc.sample_chain(1000,
current_state=state,
kernel=kernel,
trace_fn=lambda _, results: results.target_log_prob,
seed=key)
states, log_probs = jit(run_chain)(random.PRNGKey(0), jnp.zeros(2))
plt.figure()
plt.scatter(*states.T, alpha=0.5)
plt.figure()
plt.plot(log_probs)
plt.show()
states, log_probs = jit(run_chain)(random.PRNGKey(0), jnp.zeros([10, 2]))
plt.figure()
for i in range(10):
plt.scatter(*states[:, i].T, alpha=0.5)
plt.figure()
for i in range(10):
plt.plot(log_probs[:, i], alpha=0.5)
plt.show()
minimum = jnp.array([1.0, 1.0]) # The center of the quadratic bowl.
scales = jnp.array([2.0, 3.0]) # The scales along the two axes.
# The objective function and the gradient.
def quadratic_loss(x):
return jnp.sum(scales * jnp.square(x - minimum))
start = jnp.array([0.6, 0.8]) # Starting point for the search.
optim_results = tfp.optimizer.bfgs_minimize(
value_and_grad(quadratic_loss), initial_position=start, tolerance=1e-8)
# Check that the search converged
assert(optim_results.converged)
# Check that the argmin is close to the actual value.
np.testing.assert_allclose(optim_results.position, minimum)
# Print out the total number of function evaluations it took. Should be 5.
print("Function evaluations: %d" % optim_results.num_objective_evaluations)
optim_results = tfp.optimizer.lbfgs_minimize(
value_and_grad(quadratic_loss), initial_position=start, tolerance=1e-8)
# Check that the search converged
assert(optim_results.converged)
# Check that the argmin is close to the actual value.
np.testing.assert_allclose(optim_results.position, minimum)
# Print out the total number of function evaluations it took. Should be 5.
print("Function evaluations: %d" % optim_results.num_objective_evaluations)
def optimize_single(start):
return tfp.optimizer.lbfgs_minimize(
value_and_grad(quadratic_loss), initial_position=start, tolerance=1e-8)
all_results = jit(vmap(optimize_single))(
random.normal(random.PRNGKey(0), (10, 2)))
assert all(all_results.converged)
for i in range(10):
np.testing.assert_allclose(all_results.position[i], minimum)
print("Function evaluations: %s" % all_results.num_objective_evaluations)
key = random.PRNGKey(0) # Creates a key with value [0, 0]
print(key)
print(random.normal(key))
key1, key2 = random.split(key, num=2)
print(key1, key2)
print(random.normal(key1), random.normal(key2))
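# Not in the original: random.fold_in deterministically derives a new key from an
# existing key and an integer, which is convenient inside loops.
print(random.normal(random.fold_in(key, 0)), random.normal(random.fold_in(key, 1)))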
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A repository
Step2: And this is pretty much the essence of Git!
Step3: And how you will edit text files (it will often ask you to edit messages and other information, and thus wants to know how you like to edit your files)
Step4: Set git to use the credential memory cache so we don't have to retype passwords too frequently. On Linux, you should run the following (note that this requires git version 1.7.10 or newer)
Step5: Github offers in its help pages instructions on how to configure the credentials helper for Mac OSX and Windows.
Step6: git init
Step7: Note
Step8: Now let's edit our first file in the test directory with a text editor... I'm doing it programmatically here for automation purposes, but you'd normally be editing by hand
Step9: git add
Step10: We can now ask git about what happened with status
Step11: git commit
Step12: In the commit above, we used the -m flag to specify a message at the command line. If we don't do that, git will open the editor we specified in our configuration above and require that we enter a message. By default, git refuses to record changes that don't have a message to go along with them (though you can obviously 'cheat' by using an empty or meaningless string
Step13: git diff
Step14: And now we can ask git what is different
Step15: The cycle of git virtue
Step16: git log revisited
Step17: Sometimes it's handy to see a very summarized version of the log
Step18: Git supports aliases
Step19: git mv and rm
Step20: Note that these changes must be committed too, to become permanent! In git's world, until something has been committed, it isn't permanently recorded anywhere.
Step21: And git rm works in a similar fashion.
Step22: We are now going to try two different routes of development
Step23: Using remotes as a single user
Step24: Since the above cell didn't produce any output after the git remote -v call, it means we have no remote repositories configured. We will now proceed to do so. Once logged into GitHub, go to the new repository page and make a repository called test. Do not check the box that says Initialize this repository with a README, since we already have an existing repository here. That option is useful when you're starting first at Github and don't have a repo made already on a local computer.
Step25: Let's see the remote situation again
Step26: We can now see this repository publicly on github.
Step27: Let's now make some changes in one 'computer' and synchronize them on the second.
Step28: Now we put this new work up on the github server so it's available from the internet
Step29: Now let's fetch that work from machine #1
Step30: An important aside
Step31: And now we go back to the master branch, where we change the same file
Step32: So now let's see what happens if we try to merge the trouble branch into master
Step33: Let's see what git has put into our file
Step34: At this point, we go into the file with a text editor, decide which changes to keep, and make a new commit that records our decision. I've now made the edits, in this case I decided that both pieces of text were useful, but integrated them with some changes
Step35: Let's then make our new commit
Step36: Note
| <ASSISTANT_TASK:>
Python Code:
ls
import sha
# Our first commit
data1 = 'This is the start of my paper2.'
meta1 = 'date: 1/1/12'
hash1 = sha.sha(data1 + meta1).hexdigest()
print 'Hash:', hash1
# Our second commit, linked to the first
data2 = 'Some more text in my paper...'
meta2 = 'date: 1/2/12'
# Note we add the parent hash here!
hash2 = sha.sha(data2 + meta2 + hash1).hexdigest()
print 'Hash:', hash2
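# A sketch (not in the original): the same chained hashing with hashlib, which
# replaces the deprecated sha module and also exists on Python 3.
import hashlib
hash1_alt = hashlib.sha1((data1 + meta1).encode('utf-8')).hexdigest()
print 'Hash (hashlib):', hash1_alt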
%%bash
git config --global user.name "Fernando Perez"
git config --global user.email "[email protected]"
%%bash
# Put here your preferred editor. If this is not set, git will honor
# the $EDITOR environment variable
git config --global core.editor /usr/bin/jed # my lightweight unix editor
# On Windows Notepad will do in a pinch, I recommend Notepad++ as a free alternative
# On the mac, you can set nano or emacs as a basic option
# And while we're at it, we also turn on the use of color, which is very useful
git config --global color.ui "auto"
%%bash
git config --global credential.helper cache
# Set the cache to timeout after 2 hours (setting is in seconds)
git config --global credential.helper 'cache --timeout=7200'
!git
%%bash
rm -rf test
git init test
%%bash
cd test
ls
%%bash
cd test
ls -la
%%bash
cd test
ls -l .git
%%bash
cd test
echo "My first bit of text" > file1.txt
%%bash
cd test
git add file1.txt
%%bash
cd test
git status
%%bash
cd test
git commit -a -m"This is our first commit"
%%bash
cd test
git log
%%bash
cd test
echo "And now some more text..." >> file1.txt
%%bash
cd test
git diff
%%bash
cd test
git commit -a -m"I have made great progress on this critical matter."
%%bash
cd test
git log
%%bash
cd test
git log --oneline --topo-order --graph
%%bash
cd test
# We create our alias (this saves it in git's permanent configuration file):
git config --global alias.slog "log --oneline --topo-order --graph"
# And now we can use it
git slog
%%bash
cd test
git mv file1.txt file-newname.txt
git status
%%bash
cd test
git commit -a -m"I like this new name better"
echo "Let's look at the log again:"
git slog
%%bash
cd test
git status
ls
%%bash
cd test
git branch experiment
git checkout experiment
%%bash
cd test
echo "Some crazy idea" > experiment.txt
git add experiment.txt
git commit -a -m"Trying something new"
git slog
%%bash
cd test
git checkout master
git slog
%%bash
cd test
echo "All the while, more work goes on in master..." >> file-newname.txt
git commit -a -m"The mainline keeps moving"
git slog
%%bash
cd test
ls
%%bash
cd test
git merge experiment
git slog
%%bash
cd test
ls
echo "Let's see if we have any remote repositories here:"
git remote -v
%%bash
cd test
git remote add origin https://github.com/fperez/test.git
git push -u origin master
%%bash
cd test
git remote -v
%%bash
# Here I clone my 'test' repo but with a different name, test2, to simulate a 2nd computer
git clone https://github.com/fperez/test.git test2
cd test2
pwd
git remote -v
%%bash
cd test2 # working on computer #2
echo "More new content on my experiment" >> experiment.txt
git commit -a -m"More work, on machine #2"
%%bash
cd test2
git push
%%bash
cd test
git pull
%%bash
cd test
git branch trouble
git checkout trouble
echo "This is going to be a problem..." >> experiment.txt
git commit -a -m"Changes in the trouble branch"
%%bash
cd test
git checkout master
echo "More work on the master branch..." >> experiment.txt
git commit -a -m"Mainline work"
%%bash
cd test
git merge trouble
%%bash
cd test
cat experiment.txt
%%bash
cd test
cat experiment.txt
%%bash
cd test
git commit -a -m"Completed merge of trouble, fixing conflicts along the way"
git slog
from IPython.display import YouTubeVideo
YouTubeVideo('U8GBXvdmHT4')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Scope Review
Step2: Remember that Python functions create a new scope, meaning the function has its own namespace to find variable names when they are mentioned within the function. We can check for local variables and global variables with the locals() and globals() functions. For example
Step3: Here we get back a dictionary of all the global variables, many of them are predefined in Python. So let's go ahead and look at the keys
Step4: Note how s is there, the Global Variable we defined as a string
Step5: Now let's run our function to check for any local variables in func() (there shouldn't be any)
Step6: Great! Now let's continue with building out the logic of what a decorator is. Remember that in Python everything is an object. That means functions are objects which can be assigned labels and passed into other functions. Let's start with some simple examples
Step7: Assign a label to the function. Note that we are not using parentheses here because we are not calling the function hello; instead, we are just putting it into the greet variable.
Step8: This assignment is not attached to the original function
Step9: Functions within functions
Step10: Note how, due to scope, the welcome() function is not defined outside of the hello() function. Now let's learn about returning functions from within functions
Step11: Now let's see what function is returned if we set x = hello(); note how the closing parentheses mean that name has been defined as Jose.
Step12: Great! Now we can see how x is pointing to the greet function inside of the hello function.
Step13: Let's take a quick look at the code again.
Step14: Great! Note how we can pass the functions as objects and then use them within other functions. Now we can get started with writing our first decorator
Step15: So what just happened here? A decorator simply wrapped the function and modified its behavior. Now let's understand how we can rewrite this code using the @ symbol, which is what Python uses for decorators
| <ASSISTANT_TASK:>
Python Code:
def func():
return 1
func()
s = 'Global Variable'
def func():
print locals()
print globals()
print globals().keys()
globals()['s']
func()
def hello(name='Jose'):
return 'Hello '+name
hello()
greet = hello
greet
greet()
del hello
hello()
greet()
def hello(name='Jose'):
print 'The hello() function has been executed'
def greet():
return '\t This is inside the greet() function'
def welcome():
return "\t This is inside the welcome() function"
print greet()
print welcome()
print "Now we are back inside the hello() function"
hello()
welcome()
def hello(name='Jose'):
def greet():
return '\t This is inside the greet() function'
def welcome():
return "\t This is inside the welcome() function"
if name == 'Jose':
return greet
else:
return welcome
x = hello()
x
print x()
def hello():
return 'Hi Jose!'
def other(func):
print 'Other code would go here'
print func()
other(hello)
def new_decorator(func):
def wrap_func():
print "Code would be here, before executing the func"
func()
print "Code here will execute after the func()"
return wrap_func
def func_needs_decorator():
print "This function is in need of a Decorator"
func_needs_decorator()
# Reassign func_needs_decorator
func_needs_decorator = new_decorator(func_needs_decorator)
func_needs_decorator()
@new_decorator
def func_needs_decorator():
print "This function is in need of a Decorator"
func_needs_decorator()
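# A hedged variant (not in the original lecture): using functools.wraps so the
# decorated function keeps its original __name__ and docstring.
from functools import wraps
def new_decorator_with_wraps(func):
    @wraps(func)
    def wrap_func():
        print "Code would be here, before executing the func"
        func()
        print "Code here will execute after the func()"
    return wrap_func

@new_decorator_with_wraps
def another_func():
    print "This function keeps its own name"

another_func()
print another_func.__name__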
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple Logistic Regression
Step2: Let's get VGG embeddings for train and test input images and convert them to the transfer-learnt space.
Step3: Model with transfer learnt features
| <ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=False)
img = mnist.train.images[123]
img = np.reshape(img,(28,28))
plt.imshow(img, cmap = 'gray')
plt.show()
img = np.reshape(img,(28,28,1))
print img.shape, 'label = ', mnist.train.labels[123]
from dvd import dvd
img_embedding = dvd.get_embedding_x(img)
print img_embedding.shape
from sklearn import linear_model
from sklearn.metrics import accuracy_score
clf = linear_model.LogisticRegression()
clf.fit(mnist.train.images, mnist.train.labels)
preds = clf.predict(mnist.test.images)
print accuracy_score(preds, mnist.test.labels)
train = np.reshape(mnist.train.images, (mnist.train.images.shape[0],28,28))
print 'initial training shape = ', train.shape
train = dvd.get_embedding_X(train)
print 'training shape after embedding =', train.shape
test = np.reshape(mnist.test.images, (mnist.test.images.shape[0],28,28))
test = dvd.get_embedding_X(test)
from sklearn import linear_model
from sklearn.metrics import accuracy_score
clf = linear_model.LogisticRegression()
clf.fit(train, mnist.train.labels)
preds = clf.predict(test)
print accuracy_score(preds, mnist.test.labels)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Task 1
Step2: Task 2
Step3: Task 3
Step4: Task 4
Step5: Task 5
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression
boston = load_boston()
print(boston.data.shape)
print(boston.DESCR)
p = 0.75
idx = int(p * boston.data.shape[0]) + 1
X_train, X_test = np.split(boston.data, [idx])
y_train, y_test = np.split(boston.target, [idx])
def L_derivative(y_train, z):
return (y_train - z)
def gbm_predict(X):
return [sum([coeff * algo.predict([x])[0] for algo, coeff in zip(base_algorithms_list, coefficients_list)]) for x in X]
base_algorithms_list = []
coefficients_list = []
z = np.zeros( (y_train.shape) )
for _ in range(50):
coefficients_list.append(0.9)
dt_regressor = DecisionTreeRegressor(max_depth=5, random_state=42)
dt_regressor.fit(X_train, L_derivative(y_train, z))
base_algorithms_list.append(dt_regressor)
z = gbm_predict(X_train)
alg_predict = gbm_predict(X_test)
alg_rmse = np.sqrt(mean_squared_error(y_test, alg_predict))
print(alg_rmse)
with open('answer2.txt', 'w') as fout:
fout.write(str(alg_rmse))
base_algorithms_list = []
coefficients_list = []
z = np.zeros( (y_train.shape) )
for i in range(50):
coeff = 0.9 / (1. + i)
coefficients_list.append(coeff)
dt_regressor = DecisionTreeRegressor(max_depth=5, random_state=42)
dt_regressor.fit(X_train, L_derivative(y_train, z))
base_algorithms_list.append(dt_regressor)
z = gbm_predict(X_train)
alg_predict = gbm_predict(X_test)
alg_rmse = np.sqrt(mean_squared_error(y_test, alg_predict))
print(alg_rmse)
with open('answer3.txt', 'w') as fout:
fout.write(str(alg_rmse))
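# For comparison (a sketch, not part of the assignment): sklearn's built-in
# gradient boosting with roughly matching settings.
from sklearn.ensemble import GradientBoostingRegressor
gbr = GradientBoostingRegressor(n_estimators=50, max_depth=5, learning_rate=0.9, random_state=42)
gbr.fit(X_train, y_train)
print(np.sqrt(mean_squared_error(y_test, gbr.predict(X_test))))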
answer = str(2) + ' ' + str(3)
with open('answer4.txt', 'w') as fout:
fout.write(answer)
lr_regressor = LinearRegression()
lr_regressor.fit(X_train, y_train)
alg_predict = lr_regressor.predict(X_test)
alg_rmse = np.sqrt(mean_squared_error(y_test, alg_predict))
print(alg_rmse)
with open('answer5.txt', 'w') as fout:
fout.write(str(alg_rmse))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Selecting data
Step2: Indexes in a numpy array can only be integers.
Step3: In this second example, indexing is done using strings, which are the representations of the indexes (labels). We can also highlight that, in this case, the last element in the slice IS INCLUDED.
Step4: Depending on how the column names are defined, we can access the column values using dot notation, but this does not always work, so I strongly recommend not using it
Step5: Fancy indexing with Series
Step6: With DataFrames the fancy indexing can be ambiguous and it will raise an IndexError.
Step7: Boolean indexing
Step8: We can use several conditions. For instance, let's refine the previous result
Step9: Using conditions could be less readable. Since version 0.13 you can use the query method to make the expression more readable.
Step10: Using these ways of selection can be ambiguous in some cases. Let's make a parenthetical remark and come back later to see more advanced ways of selection.
Step11: Now, if we perform an operation between both Series, the operation is carried out where the indexes match on both sides; where an index exists on only one side, that index is kept in the result but the operation cannot be performed, so a NaN is returned instead of an error
Step12: Coming back to indexing (recap)
Step13: Or we can select several columns
Step14: But with slicing we will access the indexes
Step15: So the following will produce an error
Step16: Uh, what a mess!!
Step17: A fourth way not seen before would be
Step18: Let's practice all of this
| <ASSISTANT_TASK:>
Python Code:
# first, the imports
import os
import datetime as dt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display
np.random.seed(19760812)
%matplotlib inline
# We read the data in the file 'mast.txt'
ipath = os.path.join('Datos', 'mast.txt')
def dateparse(date, time):
YY = 2000 + int(date[:2])
MM = int(date[2:4])
DD = int(date[4:])
hh = int(time[:2])
mm = int(time[2:])
return dt.datetime(YY, MM, DD, hh, mm, 0)
cols = ['Date', 'time', 'wspd', 'wspd_max', 'wdir',
'x1', 'x2', 'x3', 'x4', 'x5',
'wspd_std']
wind = pd.read_csv(ipath, sep = "\s+", names = cols,
parse_dates = [[0, 1]], index_col = 0,
date_parser = dateparse)
wind[0:10]
wind['2013-09-04 00:00:00':'2013-09-04 01:30:00']
wind['wspd'].head(3)
# This is similar to what we did in the previous code cell
wind.wspd.head(3)
# An example that can raise an error
df1 = pd.DataFrame(np.random.randn(5,2), columns = [1, 2])
df1
# This will be wrong
df1.1
# In order to use it we have to use
df1[1]
# Create a Series
wspd = wind['wspd']
# Access the elements located at positions 0, 100 and 1000
print(wspd[[0, 100, 1000]])
print('\n' * 3)
# Using indexes at locations 0, 100 and 1000
idx = wspd[[0, 100, 1000]].index
print(idx)
print('\n' * 3)
# We access the same elements than initially but using the labels instead
# the location of the elements
print(wspd[idx])
# Try it...
idx = wind['wspd'] > 35
wind[idx]
idx = (wind['wspd'] > 35) & (wind['wdir'] > 225)
wind[idx]
# To make it more efficient you should install 'numexpr'
# tht is the default engine. If you don't have it installed
# and you don't define the engine ('python') you will get an ImportError
wind.query('wspd > 35 and wdir > 225', engine = 'python')
s1 = pd.Series(np.arange(0,10), index = np.arange(0,10))
s2 = pd.Series(np.arange(10,20), index = np.arange(5,15))
print(s1)
print(s2)
s1 + s2
wind['wspd_std']
wind[['wspd', 'wspd_std']]
wind['2015/01/01 00:00':'2015/01/01 02:00']
wind['wspd':'wdir']
wind[['wspd':'wdir']]
wind.loc['2013-09-04 00:00:00':'2013-09-04 00:20:00', 'wspd':'wspd_max']
wind.iloc[0:3, 0:2] # similar to indexing a numpy arrays wind.values[0:3, 0:2]
wind.ix[0:3, 'wspd':'wspd_max']
wind[0:3][['wspd', 'wspd_max']]
wind[['wspd', 'wspd_max']][0:3]
wind.between_time('00:00', '00:30').head(20)
# It also works with series:
wind['wspd'].between_time('00:00', '00:30').head(20)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: With these 3 elements it is possible to assemble an OGC Filter Encoding (FE) using the owslib.fes* module.
Step4: We have created a csw object, but nothing has been searched yet.
Step5: That search returned a lot of records!
Step6: Now we got fewer records to deal with. That's better. But if the user is interested in only some specific service, it is better to filter by a string, like CO-OPS.
Step7: The easiest way to get more information is to explore the individual records.
Step8: The next step is to inspect the types of services/schemes available for downloading the data. The easiest way to accomplish that is by "sniffing" the URLs with geolinks.
Step9: There are many direct links to Comma Separated Value (CSV) and
Step10: Note that the URL returned by the service has some hard-coded start/stop dates.
Step11: Finally, it is possible to download the data directly into a pandas data frame and plot it.
| <ASSISTANT_TASK:>
Python Code:
from datetime import datetime
# Region: Northwest coast.
bbox = [-127, 43, -123.75, 48]
min_lon, max_lon = -127, -123.75
min_lat, max_lat = 43, 48
bbox = [min_lon, min_lat, max_lon, max_lat]
crs = "urn:ogc:def:crs:OGC:1.3:CRS84"
# Temporal range of 1 week.
start = datetime(2017, 4, 14, 0, 0, 0)
stop = datetime(2017, 4, 21, 0, 0, 0)
# Sea surface temperature CF names.
cf_names = [
"sea_water_temperature",
"sea_surface_temperature",
"sea_water_potential_temperature",
"equivalent_potential_temperature",
"sea_water_conservative_temperature",
"pseudo_equivalent_potential_temperature",
]
from owslib import fes
def fes_date_filter(start, stop, constraint="overlaps"):
Take datetime-like objects and returns a fes filter for date range
(begin and end inclusive).
NOTE: Truncates the minutes!!!
Examples
--------
>>> from datetime import datetime, timedelta
>>> stop = datetime(2010, 1, 1, 12, 30, 59).replace(tzinfo=pytz.utc)
>>> start = stop - timedelta(days=7)
>>> begin, end = fes_date_filter(start, stop, constraint='overlaps')
>>> begin.literal, end.literal
('2010-01-01 12:00', '2009-12-25 12:00')
>>> begin.propertyoperator, end.propertyoperator
('ogc:PropertyIsLessThanOrEqualTo', 'ogc:PropertyIsGreaterThanOrEqualTo')
>>> begin, end = fes_date_filter(start, stop, constraint='within')
>>> begin.literal, end.literal
('2009-12-25 12:00', '2010-01-01 12:00')
>>> begin.propertyoperator, end.propertyoperator
('ogc:PropertyIsGreaterThanOrEqualTo', 'ogc:PropertyIsLessThanOrEqualTo')
start = start.strftime("%Y-%m-%d %H:00")
stop = stop.strftime("%Y-%m-%d %H:00")
if constraint == "overlaps":
propertyname = "apiso:TempExtent_begin"
begin = fes.PropertyIsLessThanOrEqualTo(propertyname=propertyname, literal=stop)
propertyname = "apiso:TempExtent_end"
end = fes.PropertyIsGreaterThanOrEqualTo(
propertyname=propertyname, literal=start
)
elif constraint == "within":
propertyname = "apiso:TempExtent_begin"
begin = fes.PropertyIsGreaterThanOrEqualTo(
propertyname=propertyname, literal=start
)
propertyname = "apiso:TempExtent_end"
end = fes.PropertyIsLessThanOrEqualTo(propertyname=propertyname, literal=stop)
else:
raise NameError("Unrecognized constraint {}".format(constraint))
return begin, end
kw = dict(wildCard="*", escapeChar="\\", singleChar="?", propertyname="apiso:AnyText")
or_filt = fes.Or([fes.PropertyIsLike(literal=("*%s*" % val), **kw) for val in cf_names])
begin, end = fes_date_filter(start, stop)
bbox_crs = fes.BBox(bbox, crs=crs)
filter_list = [
fes.And(
[
bbox_crs, # bounding box
begin,
end, # start and end date
or_filt, # or conditions (CF variable names)
]
)
]
from owslib.csw import CatalogueServiceWeb
endpoint = "https://data.ioos.us/csw"
csw = CatalogueServiceWeb(endpoint, timeout=60)
def get_csw_records(csw, filter_list, pagesize=10, maxrecords=1000):
Iterate `maxrecords`/`pagesize` times until the requested value in
`maxrecords` is reached.
from owslib.fes import SortBy, SortProperty
# Iterate over sorted results.
sortby = SortBy([SortProperty("dc:title", "ASC")])
csw_records = {}
startposition = 0
nextrecord = getattr(csw, "results", 1)
while nextrecord != 0:
csw.getrecords2(
constraints=filter_list,
startposition=startposition,
maxrecords=pagesize,
sortby=sortby,
)
csw_records.update(csw.records)
if csw.results["nextrecord"] == 0:
break
startposition += pagesize + 1 # Last one is included.
if startposition >= maxrecords:
break
csw.records.update(csw_records)
get_csw_records(csw, filter_list, pagesize=10, maxrecords=1000)
records = "\n".join(csw.records.keys())
print("Found {} records.\n".format(len(csw.records.keys())))
for key, value in list(csw.records.items()):
print(u"[{}]\n{}\n".format(value.title, key))
kw = dict(wildCard="*", escapeChar="\\\\", singleChar="?", propertyname="apiso:AnyText")
filter_list = [
fes.And(
[
bbox_crs, # Bounding box
begin,
end, # start and end date
or_filt, # or conditions (CF variable names).
fes.Not([fes.PropertyIsLike(literal="*NAM*", **kw)]), # no NAM results
fes.Not([fes.PropertyIsLike(literal="*CONUS*", **kw)]), # no NAM results
fes.Not([fes.PropertyIsLike(literal="*GLOBAL*", **kw)]), # no NAM results
fes.Not([fes.PropertyIsLike(literal="*ROMS*", **kw)]), # no NAM results
]
)
]
get_csw_records(csw, filter_list, pagesize=10, maxrecords=1000)
records = "\n".join(csw.records.keys())
print("Found {} records.\n".format(len(csw.records.keys())))
for key, value in list(csw.records.items()):
print(u"[{}]\n{}\n".format(value.title, key))
filter_list = [
fes.And(
[
bbox_crs, # Bounding box
begin,
end, # start and end date
or_filt, # or conditions (CF variable names).
fes.PropertyIsLike(literal="*CO-OPS*", **kw), # must have CO-OPS
]
)
]
get_csw_records(csw, filter_list, pagesize=10, maxrecords=1000)
records = "\n".join(csw.records.keys())
print("Found {} records.\n".format(len(csw.records.keys())))
for key, value in list(csw.records.items()):
print("[{}]\n{}\n".format(value.title, key))
import textwrap
value = csw.records[key]
print("\n".join(textwrap.wrap(value.abstract)))
print("\n".join(value.subjects))
from geolinks import sniff_link
msg = "geolink: {geolink}\nscheme: {scheme}\nURL: {url}\n".format
for ref in value.references:
print(msg(geolink=sniff_link(ref["url"]), **ref))
start, stop
for ref in value.references:
url = ref["url"]
if "csv" in url and "sea" in url and "temperature" in url:
print(msg(geolink=sniff_link(url), **ref))
break
fmt = (
"https://opendap.co-ops.nos.noaa.gov/ioos-dif-sos/SOS?"
"service=SOS&"
"eventTime={0:%Y-%m-%dT00:00:00}/{1:%Y-%m-%dT00:00:00}&"
"observedProperty=http://mmisw.org/ont/cf/parameter/sea_water_temperature&"
"version=1.0.0&"
"request=GetObservation&offering=urn:ioos:station:NOAA.NOS.CO-OPS:9439040&"
"responseFormat=text/csv"
)
url = fmt.format(start, stop)
import io
import pandas as pd
import requests
r = requests.get(url)
df = pd.read_csv(
io.StringIO(r.content.decode("utf-8")), index_col="date_time", parse_dates=True
)
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(11, 2.75))
ax = df["sea_water_temperature (C)"].plot(ax=ax)
ax.set_xlabel("")
ax.set_ylabel(r"Sea water temperature ($^\circ$C)")
ax.set_title(value.title)
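# Optional extra (not in the original notebook): an hourly-mean resample of the
# series, which smooths the raw 6-minute samples.
hourly = df["sea_water_temperature (C)"].resample("H").mean()
print(hourly.head())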
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Identify Balmer absorption lines in a star
Step2: 2. Identify Balmer emission lines in a galaxy
Step3: Balmer Series
Step4: Find the wavelength at which the lines are observed, due to redshifting.
Step5: The H$\alpha$ line is clear, but the others are hard to see when looking at the full spectrum.
Step6: Zooming in
Step7: Absorption to Emission
Step8: 6. Estimate dust extinction
Step9: Let's construct arrays with the rest-frame wavelength and the flux. We will not concern ourselves with the overall normalization of the flux in this step.
Step10: We can plot both, and we see that UGC 10227, which is seen edge-on, has a much redder spectrum than MCG -01-53-020, which is seen face-on. But many of the small-scale features of the spectra are similar
Step11: We want to put these functions on the same wavelength grid. For our purposes, a simple 3rd-order spline interpolation scheme will be sufficient. Note that for more demanding purposes, a more accurate interpolation, or avoiding interpolation altogether, could be necessary. Whenever you interpolate, you usually cause error covariances between the output pixels and a loss of information.
Step12: Let's just check that the interpolation didn't do anything silly.
Step13: Now we can just divide the two arrays on the same wavelength grid to get some estimate of the extinction (here quantified in magnitude units).
Step14: Now we will estimate the total dust extinction under the assumption that the extinction follows the law
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.interpolate as interpolate
import astropy.io.fits as fits
import matplotlib.pyplot as plt
import requests
def find_nearest(array, value):
index = (np.abs(array - value)).argmin()
return index
def find_local_min(array, index):
min_index = np.argmin(array[index-25:index+26])
return min_index + index - 25
balmer_series = np.array((6562.79, 4861.35, 4340.472, 4101.734, 3970.075, 3889.064, 3835.397))
balmer_labels = [r'H$\alpha$', r'H$\beta$', r'H$\gamma$', r'H$\delta$', r'H$\epsilon$', r'H$\zeta$', r'H$\eta$']
hdul = fits.open('A0.fits')
data = hdul[1].data
loglam = data['Loglam']
lam = 10**loglam
flux = data['Flux']
mask = lam < 8000
plt.figure(figsize=(15,8))
plt.plot(lam[mask],flux[mask])
for i in range(len(balmer_series)):
index = find_nearest(lam, balmer_series[i]) # finds the closest wavelength index to current balmer series
min_index = find_local_min(flux, index) # finds the local minimum near current index
plt.text(lam[min_index]-30,flux[min_index]-0.3, balmer_labels[i], fontsize=10) # puts the appropriate label near each local minimum
plt.xlabel('Wavelength (Angstroms)', fontsize=14)
plt.ylabel('Normalized Flux', fontsize=14)
plt.title('Balmer Absorption Lines for an A star', fontsize=14)
plt.savefig('balmer.png', dpi=300)
request_template = 'https://dr13.sdss.org/optical/spectrum/view/data/format=fits/spec=lite?plateid={plate}&mjd={mjd}&fiberid={fiberid}'
request = request_template.format(plate=2214, fiberid=6, mjd=53794)
r = requests.get(request)
fp = open('spec-2214-53794-0006.fits', 'wb')
fp.write(r.content)
fp.close()
hdu = fits.open('spec-2214-53794-0006.fits')
header = hdu[0].header
data = hdu[1].data
z = 0.0657799 #Redshift at link above
wl = 10**data['loglam']
flux = data['flux']
model = data['model']
#Balmer series
halpha = 6564.5377
hbeta = 4861.3615
hgamma = 4340.462
hdelta = 4101.74
lines = [halpha, hbeta, hgamma, hdelta]
labels = [r'H$_{\alpha}$', r'H$_{\beta}$', r'H$_{\gamma}$', r'H$_{\delta}$']
#Shifted
lines_shifted = np.empty(len(lines))
for i in range(len(lines)):
lines_shifted[i] = lines[i]*(1+z)
fig = plt.figure(figsize=(13, 7))
plt.plot(wl, flux)
plt.plot(wl, model, color='black')
plt.xlabel('Wavelength $\lambda$ ($\AA$)')
plt.ylabel('Flux $f_\lambda$ ($10^{-17}$ erg cm$^{-2}$ s$^{-1}$ $\AA$)')
for line, label in zip(lines_shifted, labels):
plt.axvline(line, color='red', alpha=0.7)
plt.annotate(label, xy=(line, 25), xytext=(line, 25), size=16)
# Zooms
width = 100
fig, axarr = plt.subplots(2,2, figsize=(15, 10))
plt.subplots_adjust(hspace=0.3)
count = 0
for i in range(2):
for j in range(2):
line = lines_shifted[count]
wf = [(w, f, m) for w, f, m in zip(wl, flux, model) if (w<line+width) and (w>line-width)]
wlcut = [tup[0] for tup in wf]
fluxcut = [tup[1] for tup in wf]
modelcut = [tup[2] for tup in wf]
axarr[i,j].set_title(labels[count], size=20)
axarr[i,j].plot(wlcut, fluxcut)
axarr[i,j].plot(wlcut, modelcut, color='black')
axarr[i,j].axvline(line, color='red', alpha=0.7)
axarr[i,j].set_xlabel('Wavelength $\lambda$ ($\AA$)')
axarr[i,j].set_ylabel('Flux $f_\lambda$ ($10^{-17}$ erg cm$^{-2}$ s$^{-1}$ $\AA$)')
count += 1
width = 30
fig = plt.figure(figsize=(10, 7))
count = 1
line = lines_shifted[count] #H_beta
wf = [(w, f, m) for w, f, m in zip(wl, flux, model) if (w<line+width) and (w>line-width)]
wlcut = [tup[0] for tup in wf]
fluxcut = [tup[1] for tup in wf]
modelcut = [tup[2] for tup in wf]
plt.title(labels[count], size=20)
plt.plot(wlcut, fluxcut)
plt.plot(wlcut, modelcut, color='black')
plt.axvline(line, color='red', alpha=0.7)
plt.xlabel('Wavelength $\lambda$ ($\AA$)')
plt.ylabel('Flux $f_\lambda$ ($10^{-17}$ erg cm$^{-2}$ s$^{-1}$ $\AA$)')
UG = fits.open('https://dr16.sdss.org/sas/dr16/sdss/spectro/redux/26/spectra/lite/1056/spec-1056-52764-0308.fits')
MCG = fits.open('https://dr16.sdss.org/sas/dr16/sdss/spectro/redux/26/spectra/lite/0637/spec-0637-52174-0403.fits')
z_UG = UG[2].data['Z'][0]
z_MCG = MCG[2].data['Z'][0]
lam_UG = UG[1].data['loglam'] - np.log10(1. + z_UG)
lam_MCG = MCG[1].data['loglam'] - np.log10(1. + z_MCG)
f_UG = UG[1].data['flux']
f_MCG = MCG[1].data['flux']
plt.figure()
plt.plot(10.**lam_UG, f_UG, label='UGC 10227 (edge-on)')
plt.plot(10.**lam_MCG, f_MCG, label='MCG -01-53-020 (face-on)')
plt.xlabel('wavelength')
plt.ylabel('flux')
plt.legend()
plt.show()
f_MCG_interp_func = interpolate.interp1d(lam_MCG, f_MCG, kind='cubic',
fill_value='extrapolate')
f_MCG_interp = f_MCG_interp_func(lam_UG)
plt.figure()
plt.plot(10.**lam_UG, f_UG, label='UGC 10227 (edge-on)')
plt.plot(10.**lam_UG, f_MCG_interp, label='MCG -01-53-020 (face-on)')
plt.xlabel('wavelength')
plt.ylabel('flux')
plt.legend()
plt.show()
A = - 2.5 * np.log10(np.abs(f_UG / f_MCG_interp)) # abs() is used here to avoid invalid(negative) points
plt.figure()
plt.plot(10.**lam_UG, A)
plt.xlabel('$\lambda$ in Angstroms')
plt.ylabel('extinction $A_{\lambda}$ in mag')
#plt.plot(lam,lam*A)
plt.show()
AV = 1.
Amodel_10 = AV * (5500. / 10.**lam_UG)
AV = 0.5
Amodel_05 = AV * (5500. / 10.**lam_UG)
AV = 2.0
Amodel_20 = AV * (5500. / 10.**lam_UG)
plt.figure()
plt.plot(10.**lam_UG, A - Amodel_05, label='Residuals from A_V = 0.5')
plt.plot(10.**lam_UG, A - Amodel_10, label='Residuals from A_V = 1.0')
plt.plot(10.**lam_UG, A - Amodel_20, label='Residuals from A_V = 2.0')
plt.xlabel('$\lambda$ in Angstroms')
plt.ylabel('extinction $A_{\lambda}$ in mag')
plt.legend()
plt.show()
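# Rough least-squares estimate of A_V (not in the original notebook), assuming the
# same A_lambda = A_V * (5500 / lambda) law used above.
x = 5500. / 10.**lam_UG
good = np.isfinite(A)
AV_est = np.sum(A[good] * x[good]) / np.sum(x[good]**2)
print('Estimated A_V:', AV_est)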
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating NumPy arrays
Step2: Multidimensional lists (or tuples) produce multidimensional arrays
Step3: Evenly spaced values
Step4: Specific sized arrays
Step5: Indexing and slicing arrays
Step6: Contrary to lists, slicing is possible over all dimensions
Step7: Views and copies of arrays
Step8: Slicing creates a view to the array, and modifying the view changes corresponding original contents
Step9: Array operations
Step10: NumPy has special functions which can work with the array arguments (sin, cos, exp, sqrt, log, ...)
Step11: Vectorized operations
Step12: Linear algebra
Step13: Simple plotting with matplotlib
Step14: For showing figures in the notebook, one can invoke the following magic
Step15: Simple line plots of NumPy arrays
Step16: Multiple subplots
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array((1, 2, 3, 4))
print(a)
print(a.dtype)
print(a.size)
a = np.array((1,2,3,4), dtype=float) # Type can be explicitly specified
print(a)
print(a.dtype)
print(a.size)
my_list = [[1,2,3], [4,5,6]]
a = np.array(my_list)
print(a)
print(a.size)
print(a.shape)
a = np.arange(6) # half open interval up to 6
print(a)
a = np.arange(0.1, 1, 0.2) # half open interval with start, stop, step
print(a)
b = np.linspace(-4.5, 4.5, 5) # specified number of samples within closed interval
print(b)
mat = np.empty((2, 2, 2), float) # uninitialized 2x2x2 array
mat = np.zeros((3,3), int) # initialized to zeros
mat = np.ones((2,3), complex) #initialized to ones
a = np.arange(6)
print(a[2])
print(a[-2])
mat = np.array([[1, 2, 3], [4, 5, 6]])
print(mat)
print(mat[0,2])
print(mat[-1,-2])
mat = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
print(mat[1, 1:3])
mat = np.zeros((4,4))
mat[1:-1,1:-1] = 2
print(mat)
a = np.arange(6)
print(a)
b = a # b is a reference, changing values in b changes also a
b[2] = -3
print(a)
b = a.copy() # b is copy, changing b does not affect a
b[0] = 66
print(b)
print(a)
c = a[1:4]
print(c)
c[-1] = 47
print(a)
a = np.array([1.0, 2.0, 3.0])
b = 2.0
print(b * a)
print(b + a)
print(a + a)
print(a * a)
x = np.linspace(-np.pi, np.pi, 5)
y = np.sin(x)
N = 1000
a = np.arange(N)
dif = np.zeros(N-1, a.dtype)
%%timeit # timeit magic allows easy timing for the execution of a cell
# brute force with for loop
for i in range(1, N):
dif[i-1] = a[i] - a[i-1]
%%timeit
# vectorized operation
dif = a[1:] - a[:-1]
A = np.array(((2, 1), (1, 3)))
B = np.array(((-2, 4.2), (4.2, 6)))
C = np.dot(A, B) # matrix-matrix product
w, v = np.linalg.eig(A) # eigenvalues in w, eigenvectors in v
b = np.array((1, 2))
x = np.linalg.solve(C, b) # Solve Cx = b
print(np.dot(C, x)) # np.dot calculates also matrix-vector and vector-vector products
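# Sanity check (not in the original): the solution x should reproduce b.
print(np.allclose(np.dot(C, x), b))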
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(-np.pi, np.pi, 100)
y = np.sin(x)
plt.plot(x, y)
plt.title('A simple plot')
plt.xlabel('time (s)')
x = np.linspace(-np.pi, np.pi, 100)
y1 = np.sin(x)
y2 = np.cos(x)
plt.subplot(211) #create 2x1 plot, use 1st
plt.plot(x, y1, linewidth=2)
plt.ylabel('sin')
plt.subplot(212) #use 2nd
plt.plot(x, y2, '--or') # use dashed line, 'o' markers and red color
plt.ylabel('cos', fontsize=16)
plt.xlabel(r'$\theta$') # when using LaTeX, the string has to be a so-called raw string (r'my string')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The object new_data has been reprojected to Albers and a linear model has been fitted, with residuals stored as residuals
Step2: The empirical variogram
Step3: Bonus! Simulation of the random process
Step4: Bonus!
| <ASSISTANT_TASK:>
Python Code:
from external_plugins.spystats import tools
%run ../HEC_runs/fit_fia_logbiomass_logspp_GLS.py
from external_plugins.spystats import tools
hx = np.linspace(0,800000,100)
new_data.residuals[:10]
gvg.plot(refresh=False,legend=False,percentage_trunked=20)
plt.title("Semivariogram of residuals $log(Biomass) ~ log(SppR)$")
## HERE we can cast a model (Whittle) and fit it inside the global variogram
whittle_model = tools.WhittleVariogram(sill=0.345,range_a=100000,nugget=0.33,alpha=1.0)
tt = gvg.fitVariogramModel(whittle_model)
plt.plot(hx,gvg.model.f(hx),'--',lw=4,c='black')
print(whittle_model)
## This section is an example for calculating GLS. Using a small section because of computing intensity
minx = -85
maxx = -80
miny = 30
maxy = 35
section = tools._subselectDataFrameByCoordinates(new_data,'LON','LAT',minx,maxx,miny,maxy)
secvg = tools.Variogram(section,'logBiomass',model=whittle_model)
MMdist = secvg.distance_coordinates.flatten()
CovMat = secvg.model.corr_f(MMdist).reshape(len(section),len(section))
plt.imshow(CovMat)
import statsmodels.regression.linear_model as lm
import statsmodels.api as sm
model1 = lm.GLS.from_formula(formula='logBiomass ~ logSppN',data=section,sigma=CovMat)
results = model1.fit()
resum = results.summary()
k = resum.as_csv()
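# Not in the original: print the GLS fit summary (k holds its CSV form).
print(resum)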
## Without spatial structure
Id = np.identity(len(section))
model2 = lm.GLS.from_formula(formula='logBiomass ~ logSppN',data=section,sigma=Id)
results = model2.fit()
smm =results.summary()
## Without spatial structure
import statsmodels.formula.api as smf
model3 = smf.ols(formula='logBiomass ~ logSppN',data=section)
results = model3.fit()
results.summary()
from scipy.stats import multivariate_normal as mvn
from scipy.spatial import distance_matrix
n = 50
nx = np.linspace(0,100,n)
xx, yy = np.meshgrid(nx,nx)
points = np.vstack([ xx.ravel(), yy.ravel()]).transpose()## Generate dist matrix
Mdist = distance_matrix(points,points)
plt.imshow(Mdist)
Mdist.shape
covmat = secvg.model.corr_f(Mdist.flatten()).reshape(Mdist.shape)
plt.imshow(covmat)
meanx = np.zeros(n*n)
sim1 = mvn.rvs(mean=meanx,cov=covmat)
plt.imshow(sim1.reshape(n,n),interpolation=None)
%time sim2 = mvn.rvs(mean=meanx,cov=covmat)
plt.imshow(sim2.reshape(n,n))
matm = tools.MaternVariogram(sill=0.34,range_a=100000,nugget=0.33,kappa=0.5)
expmm = tools.ExponentialVariogram(sill=0.34,range_a=100000,nugget=0.33)
gausms = tools.GaussianVariogram(sill=0.34,range_a=100000,nugget=0.33)
sphmm = tools.SphericalVariogram(sill=0.34,range_a=100000,nugget=0.33)
wm = tools.WhittleVariogram(sill=0.34,range_a=100000,nugget=0.33,alpha=1)
map(lambda l : l.fit(gvg), [matm,expmm,gausms,sphmm,wm])
print(matm)
print(expmm)
print(gausms)
print(sphmm)
print(wm)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Just adding some imports and setting graph display options.
Step2: Let's look at our data!
Step3: We'll be looking primarily at candidate, created_at, lang, place, user_followers_count, user_time_zone, polarity, influenced_polarity, and text.
Step4: First I'll look at sentiment, calculated with TextBlob using the text column. Sentiment is composed of two values, polarity - a measure of the positivity or negativity of a text - and subjectivity. Polarity is between -1.0 and 1.0; subjectivity between 0.0 and 1.0.
Step5: Unfortunately, it doesn't work too well on anything other than English.
Step6: TextBlob has a cool translate() function that uses Google Translate to take care of that for us, but we won't be using it here - just because tweets include a lot of slang and abbreviations that can't be translated very well.
Step7: All right - let's figure out the most (positively) polarized English tweets.
Step8: Extrema don't mean much. We might get more interesting data with mean polarities for each candidate. Let's also look at influenced polarity, which takes into account the number of retweets and followers.
Step9: So tweets about Jeb Bush, on average, aren't as positive as the other candidates, but the people tweeting about Bush get more retweets and followers.
Step10: Side note
Step11: Looks like our favorite toupéed candidate hasn't even been tweeting about anyone else!
Step12: That's a lot of languages! Let's try plotting to get a better idea, but first, I'll remove smaller language/candidate groups.
Step13: I'll also remove English, since it would just dwarf all the other languages.
Step14: Looks like Spanish and Portuguese speakers mostly tweet about Jeb Bush, while Francophones lean more liberal, and Clinton tweeters span the largest range of languages.
Step15: Since I only took the last 20,000 tweets for each candidate, I didn't receive as large a timespan from Clinton (a candidate with many, many tweeters) compared to Rand Paul.
Step16: Note that English, French, and Spanish are significantly flatter than the other languages - this means that there's a large spread of speakers all over the globe.
Step17: So that's not it. Maybe there was a major event everyone was retweeting?
Step18: Seems to be a lot of these 'Jeb Bush diz que foi atingido...' tweets. How many? We can't just count unique ones because they are all slightly different, but we can check for a large-enough substring.
Step19: That's it!
Step20: That's our raw data
Step21: Next, I have to choose a projection and plot it (again using Cartopy). The Albers Equal-Area is good for maps of the U.S. I'll also download some featuresets from the Natural Earth dataset to display state borders.
Step22: My friend Gabriel Wang pointed out that U.S. timezones other than Pacific don't mean much since each timezone covers both blue and red states, but the data is still interesting.
Step23: I also want to look at polarity, so I'll only use English tweets.
Step24: Now we have a dataframe containing (mostly) world cities as time zones. Let's get the top cities by number of tweets for each candidate, then plot polarities.
Step25: Exercise for the reader
| <ASSISTANT_TASK:>
Python Code:
from arrows.preprocess import load_df
from textblob import TextBlob
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
import cartopy
pd.set_option('display.max_colwidth', 200)
pd.options.display.mpl_style = 'default'
matplotlib.style.use('ggplot')
sns.set_context('talk')
sns.set_style('whitegrid')
plt.rcParams['figure.figsize'] = [12.0, 8.0]
% matplotlib inline
df = load_df('arrows/data/results.csv')
df.info()
df[['candidate', 'created_at', 'lang', 'place', 'user_followers_count',
'user_time_zone', 'polarity', 'influenced_polarity', 'text']].head(1)
TextBlob("Tear down this wall!").sentiment
TextBlob("Radix malorum est cupiditas.").sentiment
sentence = TextBlob("Radix malorum est cupiditas.").translate()
print(sentence)
print(sentence.sentiment)
english_df = df[df.lang == 'en']
english_df.sort('polarity', ascending = False).head(3)[['candidate', 'polarity', 'subjectivity', 'text']]
candidate_groupby = english_df.groupby('candidate')
candidate_groupby[['polarity', 'influence', 'influenced_polarity']].mean()
jeb = candidate_groupby.get_group('Jeb Bush')
jeb_influence = jeb.sort('influence', ascending = False)
jeb_influence[['influence', 'polarity', 'influenced_polarity', 'user_name', 'text', 'created_at']].head(5)
df[df.user_name == 'Donald J. Trump'].groupby('candidate').size()
language_groupby = df.groupby(['candidate', 'lang'])
language_groupby.size()
largest_languages = language_groupby.filter(lambda group: len(group) > 10)
non_english = largest_languages[largest_languages.lang != 'en']
non_english_groupby = non_english.groupby(['lang', 'candidate'], as_index = False)
sizes = non_english_groupby.text.agg(np.size)
sizes = sizes.rename(columns={'text': 'count'})
sizes_pivot = sizes.pivot_table(index='lang', columns='candidate', values='count', fill_value=0)
plot = sns.heatmap(sizes_pivot)
plot.set_title('Number of non-English Tweets by Candidate', family='Ubuntu')
plot.set_ylabel('language code', family='Ubuntu')
plot.set_xlabel('candidate', family='Ubuntu')
plot.figure.set_size_inches(12, 7)
mean_polarities = df.groupby(['candidate', 'created_at']).influenced_polarity.mean()
plot = mean_polarities.unstack('candidate').resample('60min').plot()
plot.set_title('Influenced Polarity over Time by Candidate', family='Ubuntu')
plot.set_ylabel('influenced polarity', family='Ubuntu')
plot.set_xlabel('time', family='Ubuntu')
plot.figure.set_size_inches(12, 7)
language_sizes = df.groupby('lang').size()
threshold = language_sizes.quantile(.75)
top_languages_df = language_sizes[language_sizes > threshold]
top_languages = set(top_languages_df.index) - {'und'}
top_languages
df['hour'] = df.created_at.apply(lambda datetime: datetime.hour)
for language_code in top_languages:
lang_df = df[df.lang == language_code]
normalized = lang_df.groupby('hour').size() / lang_df.lang.count()
plot = normalized.plot(label = language_code)
plot.set_title('Tweet Frequency in non-English Languages by Hour of Day', family='Ubuntu')
plot.set_ylabel('normalized frequency', family='Ubuntu')
plot.set_xlabel('hour of day (UTC)', family='Ubuntu')
plot.legend()
plot.figure.set_size_inches(12, 7)
df_of_interest = df[(df.hour == 2) & (df.lang == 'pt')]
print('Number of tweets:', df_of_interest.text.count())
print('Number of unique users:', df_of_interest.user_name.unique().size)
df_of_interest.text.head(25).unique()
df_of_interest[df_of_interest.text.str.contains('Jeb Bush diz que foi atingido')].text.count()
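# Not in the original: the fraction of the 02:00 Portuguese spike made up of this retweet chain.
n_match = df_of_interest.text.str.contains('Jeb Bush diz que foi atingido').sum()
print(n_match / df_of_interest.text.count())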
tz_df = english_df.dropna(subset=['user_time_zone'])
us_tz_df = tz_df[tz_df.user_time_zone.str.contains("US & Canada")]
us_tz_candidate_groupby = us_tz_df.groupby(['candidate', 'user_time_zone'])
us_tz_candidate_groupby.influenced_polarity.mean()
tz_shapes = cartopy.io.shapereader.Reader('arrows/world/tz_world_mp.shp')
tz_records = list(tz_shapes.records())
tz_translator = {
'Eastern Time (US & Canada)': 'America/New_York',
'Central Time (US & Canada)': 'America/Chicago',
'Mountain Time (US & Canada)': 'America/Denver',
'Pacific Time (US & Canada)': 'America/Los_Angeles',
}
american_tz_records = {
tz_name: next(filter(lambda record: record.attributes['TZID'] == tz_id, tz_records))
for tz_name, tz_id
in tz_translator.items()
}
albers_equal_area = cartopy.crs.AlbersEqualArea(-95, 35)
plate_carree = cartopy.crs.PlateCarree()
states_and_provinces = cartopy.feature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale='50m',
facecolor='none'
)
cmaps = [matplotlib.cm.Blues, matplotlib.cm.Greens,
matplotlib.cm.Reds, matplotlib.cm.Purples]
norm = matplotlib.colors.Normalize(vmin=0, vmax=30)
candidates = df['candidate'].unique()
plt.rcParams['figure.figsize'] = [6.0, 4.0]
for index, candidate in enumerate(candidates):
plt.figure()
plot = plt.axes(projection=albers_equal_area)
plot.set_extent((-125, -66, 20, 50))
plot.add_feature(cartopy.feature.LAND)
plot.add_feature(cartopy.feature.COASTLINE)
plot.add_feature(cartopy.feature.BORDERS)
plot.add_feature(states_and_provinces, edgecolor='gray')
plot.add_feature(cartopy.feature.LAKES, facecolor="#00BCD4")
for tz_name, record in american_tz_records.items():
tz_specific_df = us_tz_df[us_tz_df.user_time_zone == tz_name]
tz_candidate_specific_df = tz_specific_df[tz_specific_df.candidate == candidate]
mean_polarity = tz_candidate_specific_df.influenced_polarity.mean()
plot.add_geometries(
[record.geometry],
crs=plate_carree,
color=cmaps[index](norm(mean_polarity)),
alpha=.8
)
plot.set_title('Influenced Polarity toward {} by U.S. Timezone'.format(candidate), family='Ubuntu')
plot.figure.set_size_inches(6, 3.5)
plt.show()
print()
american_timezones = ('US & Canada|Canada|Arizona|America|Hawaii|Indiana|Alaska'
'|New_York|Chicago|Los_Angeles|Detroit|CST|PST|EST|MST')
foreign_tz_df = tz_df[~tz_df.user_time_zone.str.contains(american_timezones)]
foreign_tz_groupby = foreign_tz_df.groupby('user_time_zone')
foreign_tz_groupby.size().sort(inplace = False, ascending = False).head(25)
foreign_english_tz_df = foreign_tz_df[foreign_tz_df.lang == 'en']
foreign_tz_groupby = foreign_english_tz_df.groupby(['candidate', 'user_time_zone'])
top_foreign_tz_df = foreign_tz_groupby.filter(lambda group: len(group) > 40)
top_foreign_tz_groupby = top_foreign_tz_df.groupby(['user_time_zone', 'candidate'], as_index = False)
mean_influenced_polarities = top_foreign_tz_groupby.influenced_polarity.mean()
pivot = mean_influenced_polarities.pivot_table(
index='user_time_zone',
columns='candidate',
values='influenced_polarity',
fill_value=0
)
plot = sns.heatmap(pivot)
plot.set_title('Influenced Polarity in Major Foreign Cities by Candidate', family='Ubuntu')
plot.set_ylabel('city', family='Ubuntu')
plot.set_xlabel('candidate', family='Ubuntu')
plot.figure.set_size_inches(12, 7)
df_place = df.dropna(subset=['place'])
mollweide = cartopy.crs.Mollweide()
plot = plt.axes(projection=mollweide)
plot.set_global()
plot.add_feature(cartopy.feature.LAND)
plot.add_feature(cartopy.feature.COASTLINE)
plot.add_feature(cartopy.feature.BORDERS)
plot.scatter(
list(df_place.longitude),
list(df_place.latitude),
transform=plate_carree,
zorder=2
)
plot.set_title('International Tweeters with Geolocation Enabled', family='Ubuntu')
plot.figure.set_size_inches(14, 9)
plot = plt.axes(projection=albers_equal_area)
plot.set_extent((-125, -66, 20, 50))
plot.add_feature(cartopy.feature.LAND)
plot.add_feature(cartopy.feature.COASTLINE)
plot.add_feature(cartopy.feature.BORDERS)
plot.add_feature(states_and_provinces, edgecolor='gray')
plot.add_feature(cartopy.feature.LAKES, facecolor="#00BCD4")
candidate_groupby = df_place.groupby('candidate', as_index = False)
colors = ['#1976d2', '#7cb342', '#f4511e', '#7b1fa2']
for index, (name, group) in enumerate(candidate_groupby):
longitudes = group.longitude.values
latitudes = group.latitude.values
plot.scatter(
longitudes,
latitudes,
transform=plate_carree,
color=colors[index],
label=name,
zorder=2
)
plot.set_title('U.S. Tweeters by Candidate', family='Ubuntu')
plt.legend(loc='lower left')
plot.figure.set_size_inches(12, 7)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-2', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create scoring method
Step2: Run Gensim LSI test
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import json
import codecs
import os
import time
docs = []
for filename in os.listdir("reuters-21578-json/data/full"):
f = open("reuters-21578-json/data/full/"+filename)
js = json.load(f)
for j in js:
if 'topics' in j and 'body' in j:
d = {}
d["id"] = j['id']
d["text"] = j['body'].replace("\n","")
d["title"] = j['title']
d["tags"] = ",".join(j['topics'])
docs.append(d)
print "loaded ",len(docs)," documents"
from seldon.text import DocumentSimilarity,DefaultJsonCorpus
corpus = DefaultJsonCorpus(docs)
ds = DocumentSimilarity()
def score(ds,model_type):
scores = []
times = []
vec_sizes = [10,50,75,100,200,500]
for i in vec_sizes:
t0 = time.clock()
params = {"vec_size":i,"model_type":model_type}
ds.set_params(**params)
ds.fit(corpus)
times.append(time.clock() - t0)
scores.append(ds.score(approx=True))
import matplotlib.pyplot as plt
fig, ax1 = plt.subplots()
plt.title("1-nn accuracy with gensim lsi")
ax1.set_xlabel("Vector Size")
ax1.set_ylabel("Score")
ax1.set_ylim(0.0, 1.1)
l1, = ax1.plot(vec_sizes, scores, 'o-', color="r",label="accuracy")
ax2 = ax1.twinx()
ax2.set_ylabel("Time (secs)")
l2, = ax2.plot(vec_sizes, times, 'o-', color="g",label="training time (secs)")
plt.legend(handles=[l1,l2])
plt.show()
score(ds,"gensim_lsi")
# Run Sklearn NMF test
score(ds,"sklearn_nmf")
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Performing and evaluating the regression
Step2: Coefficient of determination
Step3: Partition into training set and test set
| <ASSISTANT_TASK:>
Python Code:
import pickle
import pandas as pd
!ls *.pickle # check
!curl -o "stations_projections.pickle" "http://mas-dse-open.s3.amazonaws.com/Weather/stations_projections.pickle"
data = pickle.load(open("stations_projections.pickle",'r'))
data.shape
data.head(1)
# break up the lists of coefficients separate columns
for col in [u'TAVG_coeff', u'TRANGE_coeff', u'SNWD_coeff']:
for i in range(3):
new_col=col+str(i+1)
data[new_col]=[e[i] for e in list(data[col])]
data.drop(labels=col,axis=1,inplace=True)
data.drop(labels='station',axis=1,inplace=True)
print data.columns
data.head(3)
from sklearn.linear_model import LinearRegression
# Compute score changes
def compute_scores(y_label,X_Train,y_Train,X_test,Y_test):
lg = LinearRegression()
lg.fit(X_Train,y_Train)
train_score = lg.score(X_Train,y_Train)
test_score = lg.score(X_test,Y_test)
print('R-squared(Coeff. of determination): Train:%.3f, Test:%.3f, Ratio:%.3f\n' % (train_score,test_score,(test_score/train_score)))
full=set(range(X_Train.shape[1])) #col index list
for i in range(X_Train.shape[1]):
        L=list(full.difference(set([i]))) # keep all feature indices except i
L.sort()
r_train_X=X_Train[:,L]
r_test_X=X_test[:,L]
lg = LinearRegression()
lg.fit(r_train_X,y_Train)
r_train_score = lg.score(r_train_X,y_Train)
r_test_score = lg.score(r_test_X,Y_test)
print "removed",data.columns[i],
print "Score decrease: \tTrain:%5.3f" % (train_score-r_train_score),
print "\tTest: %5.3f " % (test_score-r_test_score)
from numpy.random import rand
N=data.shape[0]
train_i = rand(N)>0.5
Train = data.ix[train_i,:]
Test = data.ix[~train_i,:]
print data.shape,Train.shape,Test.shape
print Train.ix[:,:4].head()
from matplotlib import pyplot as plt
%matplotlib inline
def plot_regressions(X_test, y_test, clf):
print X_test.shape
print y_test.shape
plt.scatter(X_test, y_test, color='black')
plt.plot(X_test, clf.predict(X_test), color='blue',linewidth=3)
from sklearn.cross_validation import train_test_split
train_X = Train.ix[:,:4].values
test_X=Test.ix[:,:4].values
input_names=list(data.columns[:4])
for target in ["TAVG","TRANGE","SNWD"]:
for j in range(1,4):
y_label = target+"_coeff"+str(j)
train_y = Train[y_label]
test_y = Test[y_label]
lg = LinearRegression()
lg.fit(train_X,train_y)
print "\nTarget variable: ", y_label, '#'*40
print "Coeffs: ",\
' '.join(['%s:%5.2f ' % (input_names[i],lg.coef_[i]) for i in range(len(lg.coef_))])
compute_scores(y_label, train_X, train_y, test_X, test_y)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A simple output
Step2: The standard output stream
Step3: Normal output + standard output
Step4: The standard error stream is highlighted and displayed just below the code cell.
Step5: <div class="alert alert-info">
Step6: Special Display Formats
Step7: See also SVG support for LaTeX.
Step8: Image URLs
Step9: Math
Step10: Plots
Step11: Alternatively, the figure format(s) can also be chosen directly in the notebook
Step12: If you want to use PNG images, but with HiDPI resolution,
Step13: Pandas Dataframes
Step14: For LaTeX output, however, the plain text output is used by default.
Step15: This is not enabled by default because of
Step16: The longtable package is already used by Sphinx,
Step17: The above settings should have no influence on the HTML output, but the LaTeX output should now look nicer
Step19: Markdown Content
Step20: YouTube Videos
Step21: Interactive Widgets (HTML only)
Step22: A widget typically consists of a so-called "model" and a "view" into that model.
Step23: You can also link different widgets.
Step24: <div class="alert alert-info">
Step25: Unsupported Output Types
Step26: ANSI Colors
Step27: The following code showing the 8 basic ANSI colors is based on https
Step28: ANSI also supports a set of 256 indexed colors.
Step29: You can even use 24-bit RGB colors
| <ASSISTANT_TASK:>
Python Code:
# 2 empty lines before, 1 after
6 * 7
print('Hello, world!')
print('Hello, world!')
6 * 7
import sys
print("I'll appear on the standard error stream", file=sys.stderr)
print("I'll appear on the standard output stream")
"I'm the 'normal' output"
%%bash
for i in 1 2 3
do
echo $i
done
from IPython.display import Image
i = Image(filename='images/notebook_icon.png')
i
display(i)
from IPython.display import SVG
SVG(filename='images/python_logo.svg')
Image(url='https://www.python.org/static/img/python-logo-large.png')
Image(url='https://www.python.org/static/img/python-logo-large.png', embed=True)
Image(url='https://jupyter.org/assets/homepage/main-logo.svg')
from IPython.display import Math
eq = Math(r'\int\limits_{-\infty}^\infty f(x) \delta(x - x_0) dx = f(x_0)')
eq
display(eq)
from IPython.display import Latex
Latex(r'This is a \LaTeX{} equation: $a^2 + b^2 = c^2$')
%%latex
\begin{equation}
\int\limits_{-\infty}^\infty f(x) \delta(x - x_0) dx = f(x_0)
\end{equation}
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=[6, 3])
ax.plot([4, 9, 7, 20, 6, 33, 13, 23, 16, 62, 8]);
%config InlineBackend.figure_formats = ['png']
fig
%config InlineBackend.figure_formats = ['png2x']
fig
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(0, 100, size=[5, 4]),
columns=['a', 'b', 'c', 'd'])
df
pd.set_option('display.latex.repr', True)
pd.set_option('display.latex.longtable', True)
pd.set_option('display.latex.escape', False)
df = pd.DataFrame(np.random.randint(0, 100, size=[10, 4]),
columns=[r'$\alpha$', r'$\beta$', r'$\gamma$', r'$\delta$'])
df
from IPython.display import Markdown
md = Markdown("""
# Markdown
It *should* show up as **formatted** text
with things like [links] and images.
[links]: https://jupyter.org/
![Jupyter notebook icon](images/notebook_icon.png)
## Markdown Extensions
There might also be mathematical equations like
$a^2 + b^2 = c^2$
and even tables:
A | B | A and B
------|-------|--------
False | False | False
True | False | False
False | True | False
True | True | True
""")
md
from IPython.display import YouTubeVideo
YouTubeVideo('9_OIs49m56E')
import ipywidgets as w
slider = w.IntSlider()
slider.value = 42
slider
slider
link = w.IntSlider(description='link')
w.link((slider, 'value'), (link, 'value'))
jslink = w.IntSlider(description='jslink')
w.jslink((slider, 'value'), (jslink, 'value'))
dlink = w.IntSlider(description='dlink')
w.dlink((slider, 'value'), (dlink, 'value'))
jsdlink = w.IntSlider(description='jsdlink')
w.jsdlink((slider, 'value'), (jsdlink, 'value'))
w.VBox([link, jslink, dlink, jsdlink])
tabs = w.Tab()
for idx, obj in enumerate([df, fig, eq, i, md, slider]):
out = w.Output()
with out:
display(obj)
tabs.children += out,
tabs.set_title(idx, obj.__class__.__name__)
tabs
%%javascript
var text = document.createTextNode("Hello, I was generated with JavaScript!");
// Content appended to "element" will be visible in the output area:
element.appendChild(text);
display({
'text/x-python': 'print("Hello, world!")',
'text/x-haskell': 'main = putStrLn "Hello, world!"',
}, raw=True)
print('BEWARE: \x1b[1;33;41mugly colors\x1b[m!', file=sys.stderr)
print('AB\x1b[43mCD\x1b[35mEF\x1b[1mGH\x1b[4mIJ\x1b[7m'
'KL\x1b[49mMN\x1b[39mOP\x1b[22mQR\x1b[24mST\x1b[27mUV')
text = ' XYZ '
formatstring = '\x1b[{}m' + text + '\x1b[m'
print(' ' * 6 + ' ' * len(text) +
''.join('{:^{}}'.format(bg, len(text)) for bg in range(40, 48)))
for fg in range(30, 38):
for bold in False, True:
fg_code = ('1;' if bold else '') + str(fg)
print(' {:>4} '.format(fg_code) + formatstring.format(fg_code) +
''.join(formatstring.format(fg_code + ';' + str(bg))
for bg in range(40, 48)))
formatstring = '\x1b[38;5;{0};48;5;{0}mX\x1b[1mX\x1b[m'
print(' + ' + ''.join('{:2}'.format(i) for i in range(36)))
print(' 0 ' + ''.join(formatstring.format(i) for i in range(16)))
for i in range(7):
i = i * 36 + 16
print('{:3} '.format(i) + ''.join(formatstring.format(i + j)
for j in range(36) if i + j < 256))
start = 255, 0, 0
end = 0, 0, 255
length = 79
out = []
for i in range(length):
rgb = [start[c] + int(i * (end[c] - start[c]) / length) for c in range(3)]
out.append('\x1b['
'38;2;{rgb[2]};{rgb[1]};{rgb[0]};'
'48;2;{rgb[0]};{rgb[1]};{rgb[2]}mX\x1b[m'.format(rgb=rgb))
print(''.join(out))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: YOUR ANSWER HERE
Step2: (c) What are the 20 most common words in the corpus and how often do they occur? What is the 50th most common word, the 100th and the 1000th and how often do they occur?
Step3: (d) How many different Part-of-speech tags are present in the corpus?
Step4: (e) Print a list of the 10 most commonly occurring POS tags in the data. For each of these POS tags, what are the 3 most common words that belong to that class?
Step5: (f) A single word may have several POS-tags. For example, record can be both a noun (buy a record) or a verb (record a lecture). This makes POS-tags extremely useful for disambiguation.
Step6: (g) Print some of these words with their multiple POS-tags. Do you understand the ambiguity? Use figure 10.1 mentioned above to interpret the POS-tags.
Step7: (h) Ambiguous words do not account for a great percentage of the vocabulary. Yet they are among the most commonly occurring words of the English language. What percentage of the dataset is ambiguous?
Step8: Exercise 1.2 (10 points, 5 per subquestion)
Step9: (b) What are the three POS tag categories that the most unseen words belong to?
Step10: 2. Language Models
Step11: Compare that to an ordinary dictionary
Step12: Other datatypes as default_factory
Step13: Converting an already existing dict
Step14: This doesn't work
Step15: Use a lambda to make the number 10 "callable"
Step18: Clever use of a defaultdict can be the solution to the problem of data-storing in a smoothing $N$-gram pointed out above
Step19: Exercise 2.2 (5 points)
Step20: Inverse transform sampling in the words of Jurafsky and Martin
Step21: [Optional]
Step22: Exercise 2.3 (5 points)
Step23: YOUR ANSWERS HERE
Step24: YOUR ANSWERS HERE
Step25: YOUR ANSWERS HERE
Step26: Exercise 2.6 (5 points)
| <ASSISTANT_TASK:>
Python Code:
## YOUR CODE HERE ##
## YOUR CODE HERE ##
## YOUR CODE HERE ##
## YOUR CODE HERE ##
## YOUR CODE HERE ##
## YOUR CODE HERE ##
## YOUR CODE HERE ##
## YOUR CODE HERE ##
## YOUR CODE HERE ##
## YOUR CODE HERE ##
from collections import defaultdict
d = defaultdict(float)
d["new key"]
d = dict()
d["new key"]
d = defaultdict(int)
d["new key"]
d = defaultdict(list)
d["new key"]
d1 = {k: "value" for k in range(1, 11)}
d = defaultdict(float, d1) # convert it to a defaultdict
print(d[5])
print(d[100])
d = defaultdict(10)
d = defaultdict(lambda: 10)
d["new key"]
d = defaultdict(lambda: defaultdict(float))
d["new key"]
train_file = "ted-train.txt"
def read(fname, max_lines=np.inf):
    """
    Reads in the data in fname and returns it as
    one long list of words. Also returns a vocabulary in
    the form of a word2index and index2word dictionary.
    """
data = []
# w2i will automatically keep a counter to asign to new words
w2i = defaultdict(lambda: len(w2i))
i2w = dict()
start = "<s>"
end = "</s>"
with open(fname, "r") as fh:
for k, line in enumerate(fh):
if k > max_lines:
break
words = line.strip().split()
# assign an index to each word
for w in words:
i2w[w2i[w]] = w # trick
sent = [start] + words + [end]
data.append(sent)
return data, w2i, i2w
def train_ngram(data, N, k=0):
    """
    Trains an n-gram language model with optional add-k smoothing
    and additionally returns the unigram model
    :param data: text-data as returned by read
    :param N: (N>1) the order of the ngram e.g. N=2 gives a bigram
    :param k: optional add-k smoothing
    :returns: ngram and unigram
    """
ngram = defaultdict(Counter) # ngram[history][word] = #(history,word)
unpacked_data = [word for sent in data for word in sent]
unigram = defaultdict(float, Counter(unpacked_data)) # default prob is 0.0
## YOUR CODE HERE ##
return ngram, unigram
data, w2i, i2w = read(train_file)
# bigram, unigram = train_ngram(data, N=2, k=0)
# bigram_smoothed, unigram_smoothed = train_ngram(data, N=2, k=1)
data[2]
from random import random
P = [0.2,0.5,0.2,0.1]
def sample(P):
u = random() # uniformly random number between 0 and 1
p = 0
for i, p_i in enumerate(P):
        p += p_i
        if p > u:
            return i # the first i s.t. p1 + ... + pi > u
print(sample(P))
print(Counter([sample(P) for i in range(1000)])) # check to see if the law of large numbers is still true
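# Optional cross-check (not part of the assignment): the same categorical draw
# can be done with numpy, either through the cumulative distribution or with
# np.random.choice; counts should roughly match the loop-based sample above.
def sample_np(P):
    return int(np.searchsorted(np.cumsum(P), random()))
print(Counter(sample_np(P) for i in range(1000)))
print(Counter(np.random.choice(len(P), size=1000, p=P)))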
def generate_sent(lm, N):
## YOUR CODE HERE ##
raise NotImplementedError
### ANSWER ###
import pandas as pd
import seaborn as sns
def plot_bigram_dist(word, bigram, smoothbigram, k=30):
d = bigram[word]
ds = smoothbigram[word]
# sort the probabilities
d_sort = sorted(d.items(), reverse=True, key=lambda t: t[1])[0:k]
ds_sort = sorted(ds.items(), reverse=True, key=lambda t: t[1])[0:k]
_, probs = zip(*d_sort)
smooth_ws, smooth_probs = zip(*ds_sort)
# make up for the fact that in the unsmoothed case probs is generally less than k long
probs = probs + (0,) * (k-len(probs))
w_data = pd.DataFrame({"w": smooth_ws * 2,
"P({}|w)".format(word): probs + smooth_probs,
"smoothing": ["unsmoothed"]*k + ["smoothed"]*k})
fig, ax = plt.subplots(figsize=(10,10))
plt.xticks(rotation=90)
g = sns.barplot(ax=ax, x="w", y="P({}|w)".format(word), hue="smoothing",
data=w_data, palette="Blues_d")
## YOUR CODE HERE ##
## YOUR CODE HERE ##
### YOUR CODE HERE ###
### ANSWER HERE ###
### YOUR CODE HERE ###
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data
Step2: after loading the data, extract the receptor names so that it is possible to form the seperate data subsets.
Step3: Now iterate through the list of receptors and extract the corresponding subset of training data from the full dataframe. Keep in mind that the number of examples in each set are not the same, therefore the average f1 of each subset is stored into a list for later visualization. A random forest is fit to each of the subsets using $k=5$-fold cross validation with the scoring metric set to the f1-score in order to capture presence type I (precision) and type II (recall) errors. Accuracy is not used due to the fact that the data is imbalanced and that a good accuracy score may be misleading regarding the performance of the classifier on correctly classifying positive training/testing examples.
Step4: Visualize the Results
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import time
import glob
import numpy as np
from scipy.stats import randint as sp_randint
from prettytable import PrettyTable
from sklearn.preprocessing import Imputer
from sklearn.model_selection import train_test_split, cross_val_score, RandomizedSearchCV
from sklearn.metrics import f1_score, accuracy_score, make_scorer
from sklearn.ensemble import RandomForestClassifier
load_data_t0 = time.clock()
df = pd.concat([pd.read_csv(filename, index_col=[1,0], na_values=['na'], engine='c', header=0) for filename in glob.glob("data/parser_output/csv/*.csv")],axis=0)
load_data_t1 = time.clock()
print ("data loaded in ~", ((load_data_t1 - load_data_t0)/60), "minutes.")
receptor_names = list(df.index.get_level_values(0).unique())
rforest_params = {"n_estimators": sp_randint(pow(2,5),pow(2,7))}
cv_score_list = []
outputTable = PrettyTable()
outputTable.field_names = ["receptor","N","%positive","Mean F1","Min F1","Max F1"]
for receptor in receptor_names:
receptor_df = df.iloc[df.index.get_level_values(0) == receptor]
X = Imputer().fit_transform(receptor_df.drop('label', axis=1).as_matrix())
y = pd.to_numeric(receptor_df['label']).as_matrix()
#rforest = RandomizedSearchCV(RandomForestClassifier(oob_score=True, class_weight='balanced'), rforest_params, cv = 3, scoring = make_scorer(f1_score),n_jobs=3)
rforest = RandomForestClassifier(oob_score=True, class_weight='balanced',n_estimators=100)
cv_score = cross_val_score(rforest,X,y,scoring='f1',cv=5)
cv_score_list.append(np.mean(cv_score))
outputTable.add_row([receptor,receptor_df.shape[0],(100*(y[y==1].shape[0]/y.shape[0])),np.mean(cv_score),np.min(cv_score),np.max(cv_score)])
del rforest
del X
del y
print(outputTable)
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
plt.figure(figsize=[15,6])
plt.xlabel("mean_f1")
sns.violinplot(x=cv_score_list, cut=0)
print ("Mean F1:",np.mean(cv_score_list),"\tMin F1:",np.min(cv_score_list),"\tMax F1:",np.max(cv_score_list))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in the file, see what we're working with
Step2: Parse the table with BeautifulSoup
Step3: Decide how to target the table
Step4: Looping over the table rows
Step5: Write data to file
| <ASSISTANT_TASK:>
Python Code:
from bs4 import BeautifulSoup
import csv
# in a with block, open the HTML file
with open('mountain-goats.html', 'r') as html_file:
# .read() in the contents of a file -- it'll be a string
html_code = html_file.read()
# print the string to see what's there
print(html_code)
with open('mountain-goats.html', 'r') as html_file:
html_code = html_file.read()
# use the type() function to see what kind of object `html_code` is
print(type(html_code))
# feed the file's contents (the string of HTML) to BeautifulSoup
# will complain if you don't specify the parser
soup = BeautifulSoup(html_code, 'html.parser')
# use the type() function to see what kind of object `soup` is
print(type(soup))
with open('mountain-goats.html', 'r') as html_file:
html_code = html_file.read()
soup = BeautifulSoup(html_code, 'html.parser')
# by position on the page
# find_all returns a list of matching elements, and we want the second ([1]) one
# song_table = soup.find_all('table')[1]
# by class name
# => with `find`, you can pass a dictionary of element attributes to match on
# song_table = soup.find('table', {'class': 'song-table'})
# by ID
# song_table = soup.find('table', {'id': 'my-cool-table'})
# by style
song_table = soup.find('table', {'style': 'width: 95%;'})
print(song_table)
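# Side note (not in the original walkthrough): BeautifulSoup also accepts CSS
# selectors via select()/select_one(), which can be a compact way to target
# elements. This assumes the table really carries the `song-table` class used
# in the commented-out example above.
same_table = soup.select_one('table.song-table')
print(same_table is not None)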
with open('mountain-goats.html', 'r') as html_file:
html_code = html_file.read()
soup = BeautifulSoup(html_code, 'html.parser')
song_table = soup.find('table', {'style': 'width: 95%;'})
# find the rows in the table
# slice to skip the header row
song_rows = song_table.find_all('tr')[1:]
# loop over the rows
for row in song_rows:
# get the table cells in the row
song = row.find_all('td')
# assign them to variables
track, title, duration, artist, album = song
# use the .string attribute to get the text in the cell
print(track.string, title.string)
with open('mountain-goats.html', 'r') as html_file, open('mountain-goats.csv', 'w') as outfile:
html_code = html_file.read()
soup = BeautifulSoup(html_code, 'html.parser')
song_table = soup.find('table', {'style': 'width: 95%;'})
song_rows = song_table.find_all('tr')[1:]
# set up a writer object
writer = csv.DictWriter(outfile, fieldnames=['track', 'title', 'duration', 'artist', 'album'])
writer.writeheader()
for row in song_rows:
# get the table cells in the row
song = row.find_all('td')
# assign them to variables
track, title, duration, artist, album = song
# write out the dictionary to file
writer.writerow({
'track': track.string,
'title': title.string,
'duration': duration.string,
'artist': artist.string,
'album': album.string
})
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Clean up the data a bit
Step3: It looks like Portland!!!
Step4: We'll use K Means Clustering because that's the clustering method I recently learned in class! There may be others that work better, but this is the tool that I know
Step5: We chose our neighborhoods!
Step6: Create a function that will label each point with a number corresponding to its neighborhood
Step7: Here's the new Part. We're breaking out the neighborhood values into their own columns. Now the algorithms can read them as categorical data rather than continuous data.
Step8: Ok, let's put it through Decision Tree!
Step9: Wow! up to .87! That's our best yet! What if we add more trees???
Step10: Up to .88!
Step11: Looks like the optimum is right around 10 or 11, and then starts to drop off. Let's get a little more granular and look at a smaller range
Step12: Trying a few times, it looks like 10, 11 and 12 get the best results at ~.85. Of course, we'll need to redo some of these optimizations after we properly process our data. Hopefully we'll see some more consistency then too.
| <ASSISTANT_TASK:>
Python Code:
with open('../pipeline/data/Day90ApartmentData.json') as f:
my_dict = json.load(f)
def listing_cleaner(entry):
print entry
listing_cleaner(my_dict['5465197037'])
type(dframe['bath']['5399866740'])
dframe.bath = dframe.bath.replace('shared',0.5)
dframe.bath = dframe.bath.replace('split',0.5)
dframe.smoking = dframe.smoking.replace(np.nan, 'smoking')
dframe.furnished = dframe.furnished.replace(np.nan,'not furnished')
dframe.wheelchair = dframe.wheelchair.replace(np.nan, 'not wheelchair accessible')
dframe.describe()
dframe.bed.unique()
from sklearn.preprocessing import Imputer, LabelEncoder
def meanimputer(column):
imp = Imputer(missing_values='NaN', strategy='mean', axis=1)
imp.fit(column)
X = imp.transform(column)
return X[0]
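# Note (library-version assumption, not from the original notebook): the
# sklearn.preprocessing.Imputer used above was removed in scikit-learn 0.22+,
# where SimpleImputer is the replacement. A rough equivalent, guarded so it is
# simply skipped on older installs:
try:
    from sklearn.impute import SimpleImputer  # available from scikit-learn 0.20
    _demo_col = np.array([1.0, np.nan, 3.0]).reshape(-1, 1)
    print(SimpleImputer(missing_values=np.nan, strategy='mean').fit_transform(_demo_col).ravel())
except ImportError:
    pass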
arr = np.array([np.nan, 'house', 'boat', 'houseboat', 'house', np.nan, 'house','houseboat'])
prac_df = DataFrame()
prac_df['arr'] = arr
prac_df['arr']
modeimputer(prac_df['arr'])
def modeimputer(column):
le = LabelEncoder()
column = le.fit_transform(column)
print le.classes_
print type(le.classes_[0])
print column
nan = le.transform([np.nan])[0]
print nan
print type(column)
column = list(column)
for _,i in enumerate(column):
if i == nan:
column[_] = np.nan
imp = Imputer(missing_values='NaN', strategy='most_frequent', axis=1)
imp.fit(column)
X = imp.transform(column)
for _,i in enumerate(X[0]):
if np.isnan(i):
X[_] = 0
X = X.astype(int)
Y = le.inverse_transform(X)
return Y
import pandas as pd
import numpy as np
from sklearn.base import TransformerMixin
class ModeImputer(TransformerMixin):
def __init__(self):
        """Impute missing values.

        Columns of dtype object are imputed with the most frequent value
        in column.
        Columns of other types are imputed with mean of column.
        Credit: http://stackoverflow.com/questions/25239958/
        impute-categorical-missing-values-in-scikit-learn
        """
def fit(self, X, y=None):
self.fill = pd.Series([X[c].value_counts().index[0]
if X[c].dtype == np.dtype('O') else X[c].mean() for c in X],
index=X.columns)
return self
def transform(self, X, y=None):
return X.fillna(self.fill)
data = [
['a', 1, 2],
['b', 1, 1],
['b', 2, 2],
[np.nan, np.nan, np.nan]
]
X = pd.DataFrame(data)
xt = ModeImputer().fit_transform(X)
print('before...')
print(X)
print('after...')
print(xt)
dframe = ModeImputer().fit_transform(dframe)
dframe.head()
dframe.describe(include = 'all')
dframe.bed.mean()
dframe.parking.unique()
u_dframe = DataFrame()
dframe['bath'] = meanimputer(dframe['bath'])
dframe['bed'] = meanimputer(dframe['bed'])
dframe['feet'] = meanimputer(dframe['feet'])
dframe['lat'] = meanimputer(dframe['lat'])
dframe['long'] = meanimputer(dframe['long'])
dframe.head()
dframe.describe(include='all')
data = dframe[dframe.lat > 45.4][dframe.lat < 45.6][dframe.long < -122.0][dframe.long > -123.5]
plt.figure(figsize=(15,10))
plt.scatter(data = data, x = 'long',y='lat')
XYdf = dframe[dframe.lat > 45.4][dframe.lat < 45.6][dframe.long < -122.0][dframe.long > -123.5]
data = [[XYdf['lat'][i],XYdf['long'][i]] for i in XYdf.index]
from sklearn.cluster import KMeans
km = KMeans(n_clusters=40)
km.fit(data)
neighborhoods = km.cluster_centers_
%pylab inline
figure(1,figsize=(20,12))
plot([row[1] for row in data],[row[0] for row in data],'b.')
for i in km.cluster_centers_:
plot(i[1],i[0], 'g*',ms=25)
'''Note to Riley: come back and make it look pretty'''
neighborhoods = neighborhoods.tolist()
for i in enumerate(neighborhoods):
i[1].append(i[0])
print neighborhoods
def clusterer(X, Y,neighborhoods):
neighbors = []
for i in neighborhoods:
distance = ((i[0]-X)**2 + (i[1]-Y)**2)
neighbors.append(distance)
closest = min(neighbors)
return neighbors.index(closest)
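# Quick illustration (made-up centroids, not used later): the point below is
# closer to the second centroid, so clusterer returns index 1.
_demo_centroids = [[45.50, -122.60, 0], [45.55, -122.65, 1]]
print(clusterer(45.56, -122.66, _demo_centroids))  # -> 1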
neighborhoodlist = []
for i in dframe.index:
neighborhoodlist.append(clusterer(dframe['lat'][i],dframe['long'][i],neighborhoods))
dframe['neighborhood'] = neighborhoodlist
dframe
from sklearn import preprocessing
def CategoricalToBinary(dframe,column_name):
le = preprocessing.LabelEncoder()
listy = le.fit_transform(dframe[column_name])
dframe[column_name] = listy
unique = dframe[column_name].unique()
serieslist = [list() for _ in xrange(len(unique))]
for column, _ in enumerate(serieslist):
for i, item in enumerate(dframe[column_name]):
if item == column:
serieslist[column].append(1)
else:
serieslist[column].append(0)
dframe[column_name+str(column)] = serieslist[column]
return dframe
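# Design note (illustration only, nothing below depends on it): pandas'
# get_dummies produces the same kind of one-hot indicator columns as
# CategoricalToBinary, without the manual loop. Tiny made-up example:
_demo = pd.DataFrame({'housingtype': ['apartment', 'house', 'apartment']})
print(pd.get_dummies(_demo, columns=['housingtype']))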
pd.set_option('max_columns', 100)
dframe = CategoricalToBinary(dframe,'housingtype')
dframe = CategoricalToBinary(dframe,'parking')
dframe = CategoricalToBinary(dframe,'laundry')
dframe = CategoricalToBinary(dframe,'smoking')
dframe = CategoricalToBinary(dframe,'wheelchair')
dframe = CategoricalToBinary(dframe,'neighborhood')
dframe
dframe = dframe.drop('date',1)
dframe = dframe.drop('housingtype',1)
dframe = dframe.drop('parking',1)
dframe = dframe.drop('laundry',1)
dframe = dframe.drop('smoking',1)
dframe = dframe.drop('wheelchair',1)
dframe = dframe.drop('neighborhood',1)
dframe = dframe.drop('time',1)
columns=list(dframe.columns)
from __future__ import division
print len(dframe)
df2 = dframe[dframe.price < 10000][columns].dropna()
print len(df2)
print len(df2)/len(dframe)
price = df2[['price']].values
columns.pop(columns.index('price'))
features = df2[columns].values
from sklearn.cross_validation import train_test_split
features_train, features_test, price_train, price_test = train_test_split(features, price, test_size=0.1, random_state=42)
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
reg = RandomForestRegressor()
reg = reg.fit(features_train, price_train)
forest_pred = reg.predict(features_test)
forest_pred = np.array([[item] for item in forest_pred])
print r2_score(forest_pred, price_test)
plt.scatter(forest_pred,price_test)
df2['predictions'] = reg.predict(df2[columns])
df2['predictions_diff'] = df2['predictions']-df2['price']
sd = np.std(df2['predictions_diff'])
sns.kdeplot(df2['predictions_diff'][df2['predictions_diff']>-150][df2['predictions_diff']<150])
sns.plt.xlim(-150,150)
data = df2[dframe.lat > 45.45][df2.lat < 45.6][df2.long < -122.4][df2.long > -122.8][df2['predictions_diff']>-150][df2['predictions_diff']<150]
plt.figure(figsize=(15,10))
plt.scatter(data = data, x = 'long',y='lat', c = 'predictions_diff',s=10,cmap='coolwarm')
dframe
print np.mean([1,2,34,np.nan])
def averager(dframe):
dframe = dframe.T
dframe.dropna()
averages = {}
for listing in dframe:
try:
key = str(dframe[listing]['bed'])+','+str(dframe[listing]['bath'])+','+str(dframe[listing]['neighborhood'])+','+str(dframe[listing]['feet']-dframe[listing]['feet']%50)
if key not in averages:
averages[key] = {'average_list':[dframe[listing]['price']], 'average':0}
elif key in averages:
averages[key]['average_list'].append(dframe[listing]['price'])
except TypeError:
continue
for entry in averages:
averages[entry]['average'] = np.mean(averages[entry]['average_list'])
return averages
averages = averager(dframe)
print averages
# look up the mean price of comparable listings row by row, using the same key format as averager()
dframe['averages'] = dframe.apply(
    lambda row: averages.get(
        str(row['bed'])+','+str(row['bath'])+','+str(row['neighborhood'])+','+str(row['feet']-row['feet']%50),
        {'average': np.nan})['average'],
    axis=1)
dframe.T
reg = RandomForestRegressor(n_estimators = 100)
reg = reg.fit(features_train, price_train)
forest_pred = reg.predict(features_test)
forest_pred = np.array([[item] for item in forest_pred])
print r2_score(forest_pred, price_test)
print plt.scatter(forest_pred,price_test)
from sklearn.tree import DecisionTreeRegressor
reg = DecisionTreeRegressor(max_depth = 5)
reg.fit(features_train, price_train)
print len(features_train[0])
columns = [str(x) for x in columns]
print columns
from sklearn.tree import export_graphviz
export_graphviz(reg,feature_names=columns)
def neighborhood_optimizer(dframe,neighborhood_number_range, counter_num):
XYdf = dframe[dframe.lat > 45.4][dframe.lat < 45.6][dframe.long < -122.0][dframe.long > -123.5]
data = [[XYdf['lat'][i],XYdf['long'][i]] for i in XYdf.index]
r2_dict = []
for i in neighborhood_number_range:
counter = counter_num
average_accuracy_list = []
while counter > 0:
km = KMeans(n_clusters=i)
km.fit(data)
neighborhoods = km.cluster_centers_
neighborhoods = neighborhoods.tolist()
for x in enumerate(neighborhoods):
x[1].append(x[0])
neighborhoodlist = []
for z in dframe.index:
neighborhoodlist.append(clusterer(dframe['lat'][z],dframe['long'][z],neighborhoods))
dframecopy = dframe.copy()
dframecopy['neighborhood'] = Series((neighborhoodlist), index=dframe.index)
df2 = dframecopy[dframe.price < 10000][['bath','bed','feet','dog','cat','content','getphotos', 'hasmap', 'price','neighborhood']].dropna()
features = df2[['bath','bed','feet','dog','cat','content','getphotos', 'hasmap', 'neighborhood']].values
price = df2[['price']].values
features_train, features_test, price_train, price_test = train_test_split(features, price, test_size=0.1)
reg = RandomForestRegressor()
reg = reg.fit(features_train, price_train)
forest_pred = reg.predict(features_test)
forest_pred = np.array([[item] for item in forest_pred])
counter -= 1
average_accuracy_list.append(r2_score(forest_pred, price_test))
total = 0
for entry in average_accuracy_list:
total += entry
r2_accuracy = total/len(average_accuracy_list)
r2_dict.append((i,r2_accuracy))
print r2_dict
return r2_dict
neighborhood_number_range = [i for _,i in enumerate(range(2,31,2))]
neighborhood_number_range
r2_dict = neighborhood_optimizer(dframe,neighborhood_number_range,10)
r2_dict[:][0]
plt.scatter([x[0] for x in r2_dict],[x[1] for x in r2_dict])
neighborhood_number_range = [i for _,i in enumerate(range(7,15))]
neighborhood_number_range
r2_dict = neighborhood_optimizer(dframe,neighborhood_number_range,10)
print r2_dict
plt.scatter([x[0] for x in r2_dict],[x[1] for x in r2_dict])
r2_dict = neighborhood_optimizer(dframe,[10,11,12],25)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Globals
Step2: Helper functions
Step3: LSH Cosine Similarity Algorithm
Step4: Algorithm Description
| <ASSISTANT_TASK:>
Python Code:
import gzip
import tarfile
import numpy as np
import pandas as pd
import h5py as h5
import os
import glob
from sklearn import preprocessing
import math
import time
from scipy.spatial.distance import cosine
from itertools import combinations
# 1 million summary data. Takes long!
data_path = 'msd_summary_file.h5'
# subset(10000 songs) summary data
data_path = 'MillionSongSubset/AdditionalFiles/subset_msd_summary_file.h5'
features = ['duration', 'end_of_fade_in','key', 'loudness', 'mode', 'start_of_fade_out','tempo','time_signature']
debug_print = True
# Comment out, when debugging.
np.random.seed(42)
'''
Reads the data and the feature names and returns the track ids and the feature matrix.
The track_id field from the data is mandatory, therefore it will always be included
Args:
feature_names(list of strings): list containing the feature names that will be included in the feature matrix.
data(pandas.io.pytables.HDFStore table): table containing the data.
Returns:
(numpy.ndarray, numpy.ndarray): matrix(#samples x 1) of track_ids, feature matrix(#samples x #features).
'''
def get_feature_matrix(feature_names,data):
if 'track_id' in feature_names:
songs = np.asarray(data['/analysis/songs'][feature_names])
else:
songs = np.asarray(data['/analysis/songs'][['track_id'] + feature_names])
#indices = np.random.choice(range(np.size(songs,0)), sample_size)
return np.array(songs[:,0]),np.array(songs[:,1:],dtype=np.float64)
'''
Returns a vector of length n with entries drawn uniformly from {-1, 1}.
Args:
n(int) : size of the vector.
Returns:
list(int): list of length n of values in {-1, 1}.
'''
def get_random_vector(n):
#rv = np.random.normal(0, 1, n)
#rv[rv < 0] = -1
#rv[rv > 0] = 1
#rv[rv == 0] = np.random.randint(0,2)*2-1
rv = 2*np.random.randint(0,2,n)-1
return rv
'''
Returns the cosine of the angle of two given vectors
Args:
a(numpy.ndarray): vector of real values.
b(numpy.ndarray): vector of real values.
Returns:
double: Returns the cosine of the angle of a and b.
'''
def cosine_angle(a,b):
return np.dot(a,b) / (np.linalg.norm(a) * np.linalg.norm(b))
'''
Returns the cosine distance between two vectors.
Args:
a(numpy.ndarray): vector of real values.
b(numpy.ndarray): vector of real values.
Returns:
double: Returns the cosine distance of a and b.
'''
def cosine_distance(a,b):
return 1.0-cosine_angle(a,b)
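# Illustration (toy vectors, not part of the pipeline): for random-hyperplane
# hashing the chance that two vectors get the same sketch bit is 1 - theta/pi,
# where theta is their angle. The quick check below uses Gaussian normals,
# which the {-1,+1} vectors above approximate in higher dimensions.
_a = np.array([1.0, 0.0])
_b = np.array([1.0, 1.0])  # 45 degrees apart
_normals = np.random.normal(0, 1, size=(20000, 2))
_agree = np.mean(np.sign(_normals.dot(_a)) == np.sign(_normals.dot(_b)))
print("empirical:", _agree, "expected:", 1 - 45.0 / 180.0)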
'''
Returns a matrix(#samples x #bands) with hash-values for the different song based on the respective bands.
Args:
Sketch(numpy.ndarray): Sketch matrix(#samples x #rows*#bands) with values in domain {0,1}.
r(int): number of rows
b(int): number of bands
Returns:
(numpy.ndarray, numpy.ndarray): Returns a matrix(#samples x #bands) with hash-values for the different song based on the respective bands.
'''
def get_hash_bands(Sketch, r,b):
twos = np.array([1<<i for i in range(r)])
Hashes = np.zeros((np.size(Sketch,0),b))
for i in range(b):
Hashes[:,i] = np.dot(Sketch[:,(r*i):(r*(i+1))], twos)
return Hashes.astype(np.uint64)
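# Tiny worked example (made-up sketch): with r=2 rows per band and b=2 bands,
# each row of the binary sketch is split into two 2-bit chunks and each chunk
# is read as an integer bucket key (bit i weighted by 2**i).
_demo_sketch = np.array([[1, 0, 1, 1],
                         [1, 0, 0, 1]])
print(get_hash_bands(_demo_sketch, r=2, b=2))
# expected: [[1 3]
#            [1 2]]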
'''
Operations:
- Get data
- Build feature matrix
- 0-1 normalize it
track_ids: matrix(#samples x 1)
feature_matrix: matrix(#samples x #features)
'''
songs = pd.HDFStore(data_path)
track_ids, feature_matrix = get_feature_matrix(features, songs)
feature_matrix = preprocessing.scale(feature_matrix)
if debug_print:
print("Shape track_ids:",track_ids.shape)
print("Shape feature matrix:",feature_matrix.shape)
# data and algorithm parameters
'''
D = number of features
N = number of samples
b = number of bands
r = number of rows
eps = angle threshold(degrees)
'''
D = np.size(feature_matrix,1)
N = np.size(feature_matrix, 0)
b = 3
r = 64
eps = 2
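# Back-of-the-envelope check on the banding parameters (illustrative only):
# under the random-hyperplane model two songs at angle eps agree on a single
# sketch bit with probability p = 1 - eps/180 (eps in degrees), agree on a full
# band of r bits with p**r, and share at least one bucket with 1 - (1 - p**r)**b.
p_bit = 1 - eps / 180.0
p_candidate = 1 - (1 - p_bit ** r) ** b
print("P(bit agrees) = %.4f, P(pair becomes candidate) = %.4f" % (p_bit, p_candidate))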
'''
Operations:
- Generate matrix of random vectors with values in {-1,1}.
RV: matrix(#bands*#rows x n_features)
'''
RV = np.array([get_random_vector(D) for i in range(b*r)])
if debug_print:
print("Shape RV:",np.shape(RV))
print("Random vectors matrix RV:\n",RV)
'''
Operations:
- Generate sketch matrix, by performing
Clip sketch matrix to 0-1 range for hashing.
Dimensionality: n_samples x n_bands*n_rows
'''
Sketch = np.dot(feature_matrix, RV.T)
Sketch[Sketch < 0] = -1
Sketch[Sketch > 0] = 1
Sketch[Sketch == 0] = np.random.randint(0,2)*2-1
if debug_print:
print("Shape Sketch:",Sketch.shape)
print("Sketch:\n",Sketch)
# clip values of Sketch matrix in domain {0,1} to easily hash them.
Sketch[Sketch < 0] = 0
if debug_print:
print("Shape Binary Sketch:",Sketch.shape)
print("Binary Sketch:\n",Sketch)
hb = get_hash_bands(Sketch,r, b)
if debug_print:
print("Shape hb:",hb.shape)
print("hb:\n",hb)
'''
candidates(dict): Dictionary with key=(song_id,song_id), value=cosine_distance(song_id,song_id)
duplicates(list): List of tuples (songid, songid)
buckets(dict) : Dictionary with key=band_id, value=dict with key=hash_key, value = list of song_id
'''
candidates = {}
duplicates = []
buckets = { i : {} for i in range(b) }
start = time.time()
for i in range(b):
for j in range(N):
hash_key = hb[j,i]
if hash_key not in buckets[i]:
buckets[i][hash_key] = []
buckets[i][hash_key].append(j)
for candidates_list in buckets[i].values():
if len(candidates_list) > 1:
for _i in range(len(candidates_list)):
for _j in range(_i+1,len(candidates_list)):
songA = candidates_list[_i]
songB = candidates_list[_j]
if (songA,songB) not in candidates:
candidates[(songA,songB)] = cosine_distance(feature_matrix[songA,:],feature_matrix[songB,:])
cos_eps_dist = 1-math.cos(math.radians(eps))
for key in candidates.keys():
if candidates[key] < cos_eps_dist:
songA = key[0]
songB = key[1]
duplicates.append((songA,songB))
print("LSH Duration:", time.time() - start,"sec")
print("Nr. candidates:", len(candidates.keys()))
print("Nr. duplicates:",len(duplicates))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
| <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
vocab = set(text)
vocab_to_int = {word: index for index, word in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
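# Tiny usage example (added sketch, names are illustrative only):
demo_vocab_to_int, demo_int_to_vocab = create_lookup_tables(['moe', 'homer', 'moe'])
print(demo_int_to_vocab[demo_vocab_to_int['moe']])   # -> 'moe'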
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
rs = {".": "||period||",
",": "||comma||",
'"': "||quotation||",
";": "||semicolon||",
"!": "||exclamation||",
"?": "||question||",
"(": "||left_p||",
")": "||right_p||",
"--": "||dash||",
"\n": "||return||"
}
return rs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def lstm_cell(lstm_size, keep_prob):
cell = tf.contrib.rnn.BasicLSTMCell(lstm_size, reuse=tf.get_variable_scope().reuse)
return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
def get_init_cell(batch_size, rnn_size, num_layers=3, keep_prob=0.5):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:param num_layers: Number of stacked LSTM layers
:param keep_prob: Dropout keep probability for each LSTM layer
:return: Tuple (cell, initial state)
# dropout is applied per layer via the DropoutWrapper inside lstm_cell above
# TODO: Implement Function
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([lstm_cell(rnn_size, keep_prob) for _ in range(num_layers)], state_is_tuple=True)
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, 'initial_state')
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
# init embedding matrix
# embedding = tf.Variable(tf.random_uniform([vocab_size, embed_dim], minval=-1, maxval=1))
# embed = tf.nn.embedding_lookup(embedding, input_data)
return tf.contrib.layers.embed_sequence(input_data, vocab_size, embed_dim)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
state = tf.identity(state, name='final_state')
return outputs, state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embeded = get_embed(input_data, vocab_size, embed_dim)
outputs, state = build_rnn(cell, embeded)
wt_init = tf.truncated_normal_initializer(mean=0, stddev=0.1)
bias_init = tf.zeros_initializer()
logits = tf.contrib.layers.fully_connected(outputs,
vocab_size,
activation_fn=None,
weights_initializer=wt_init,
biases_initializer=bias_init)
return logits, state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
n_batches = len(int_text) // (batch_size * seq_length)
valid_len = n_batches * batch_size * seq_length
inputs = int_text[:valid_len]
targets = int_text[1:valid_len] + [int_text[0]]
x = np.reshape(inputs, (batch_size, n_batches, seq_length))
y = np.reshape(targets, (batch_size, n_batches, seq_length))
out = []
for i in range(n_batches):
out.append([x[:,i], y[:,i]])
return np.array(out)
## from review
# n_batches = len(int_text) // (batch_size * seq_length)
# xdata = np.array(int_text[:n_batches * batch_size * seq_length])
# ydata = np.array(int_text[1:n_batches * batch_size * seq_length + 1])
# x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
# y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
# batches = np.array(list(zip(x_batches, y_batches)))
# return batches
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
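# Quick shape check (added sketch, toy values chosen only for illustration):
# with 40 tokens, batch_size=2 and seq_length=4 there are 5 batches, each
# holding an input block and a target block of shape (batch_size, seq_length).
demo_batches = get_batches(list(range(40)), 2, 4)
print(demo_batches.shape)   # -> (5, 2, 2, 4)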
# Number of Epochs
# Enough epochs to get near a minimum in the training loss,
# no real upper limit on this. Just need to make sure the
# training loss is low and not improving much with more training.
num_epochs = 200
# Batch Size
# Batch size is large enough to train efficiently, but small
# enough to fit the data in memory. No real “best” value here,
# depends on GPU memory usually.
batch_size = 128
# RNN Size
# Size of the RNN cells (number of units in the hidden layers)
# is large enough to fit the data well. Again, no real “best” value.
rnn_size = 256
# Number of layers
num_layers = 2
# Dropout
keep_prob = 0.7
# Embedding Dimension Size
embed_dim = 512
# Sequence Length
# The sequence length (seq_length) should be roughly the length of the
# sentences you want to generate and should match the structure of the data
# (lines here are on the order of 10-12 words).
seq_length = 32
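# Rough check of the sequence-length choice (added sketch): average number of
# tokens per script line in the preprocessed data, counted via the '||return||'
# token defined in token_lookup above. Assumes int_text, vocab_to_int and
# token_dict from the checkpoint are still in scope.
newline_id = vocab_to_int[token_dict['\n'].lower()]
n_lines = sum(1 for t in int_text if t == newline_id)
if n_lines > 0:
    print('Average tokens per line: {:.1f}'.format((len(int_text) - n_lines) / n_lines))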
# Learning Rate
learning_rate = 0.005
# Show stats for every n number of batches
show_every_n_batches = 20
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size, num_layers, keep_prob)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
inputs = loaded_graph.get_tensor_by_name('input:0')
ini_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probs = loaded_graph.get_tensor_by_name('probs:0')
return inputs, ini_state, final_state, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
# import numpy as np
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
chosen_id = np.random.choice(list(int_to_vocab.keys()), p=probabilities)
return int_to_vocab[chosen_id]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
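# Alternative sampling strategy (added sketch, not required by the project):
# restrict the draw to the k most likely words so that very unlikely tokens are
# never picked. k=5 is an arbitrary illustrative value.
def pick_word_topk(probabilities, int_to_vocab, k=5):
    top_ids = np.argsort(probabilities)[-k:]
    top_probs = probabilities[top_ids] / probabilities[top_ids].sum()
    chosen_id = np.random.choice(top_ids, p=top_probs)
    return int_to_vocab[chosen_id]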
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
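# Added sketch: optionally write the generated script to a plain-text file for
# later inspection; the filename is arbitrary.
with open('generated_tv_script.txt', 'w') as f:
    f.write(tv_script)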
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Logging
Step2: Initialization of the project
Step3: Mapping sequence --> structure
Step4: Downloading and ranking structures
Step5: Loading and aligning new sequences
Step6: Methods
Step7: Some additional methods
Step8: Mapping sequence residue numbers to structure residue numbers
Step9: Viewing structures
Step10: Saving
| <ASSISTANT_TASK:>
Python Code:
import sys
import logging
# Import the Protein class
from ssbio.core.protein import Protein
# Printing multiple outputs per cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# Create logger
logger = logging.getLogger()
logger.setLevel(logging.INFO) # SET YOUR LOGGING LEVEL HERE #
# Other logger stuff for Jupyter notebooks
handler = logging.StreamHandler(sys.stderr)
formatter = logging.Formatter('[%(asctime)s] [%(name)s] %(levelname)s: %(message)s', datefmt="%Y-%m-%d %H:%M")
handler.setFormatter(formatter)
logger.handlers = [handler]
# SET FOLDERS AND DATA HERE
import tempfile
ROOT_DIR = tempfile.gettempdir()
PROTEIN_ID = 'SRR1753782_00918'
PROTEIN_SEQ = 'MSKQQIGVVGMAVMGRNLALNIESRGYTVSVFNRSREKTEEVIAENPGKKLVPYYTVKEFVESLETPRRILLMVKAGAGTDAAIDSLKPYLEKGDIIIDGGNTFFQDTIRRNRELSAEGFNFIGTGVSGGEEGALKGPSIMPGGQKDAYELVAPILTKIAAVAEDGEPCVTYIGADGAGHYVKMVHNGIEYGDMQLIAEAYSLLKGGLNLSNEELANTFTEWNNGELSSYLIDITKDIFTKKDEDGNYLVDVILDEAANKGTGKWTSQSALDLGEPLSLITESVFARYISSLKAQRVAASKVLSGPKAQPAGDKAEFIEKVRRALYLGKIVSYAQGFSQLRAASDEYHWDLNYGEIAKIFRAGCIIRAQFLQKITDAYAENADIANLLLAPYFKKIADEYQQALRDVVAYAVQNGIPVPTFSAAVAYYDSYRAAVLPANLIQAQRDYFGAHTYKRTDKEGIFHTEWLE'
# Create the Protein object
my_protein = Protein(ident=PROTEIN_ID, root_dir=ROOT_DIR, pdb_file_type='mmtf')
# Load the protein sequence
# This sets the loaded sequence as the representative one
my_protein.load_manual_sequence(seq=PROTEIN_SEQ, ident='WT', write_fasta_file=True, set_as_representative=True)
# Mapping using BLAST
my_protein.blast_representative_sequence_to_pdb(seq_ident_cutoff=0.9, evalue=0.00001)
my_protein.df_pdb_blast.head()
# Download all mapped PDBs and gather the metadata
my_protein.download_all_pdbs()
my_protein.df_pdb_metadata.head(2)
# Set representative structures
my_protein.set_representative_structure()
my_protein.__dict__
# Input your mutated sequence and load it
mutated_protein1_id = 'N17P_SNP'
mutated_protein1_seq = 'MSKQQIGVVGMAVMGRPLALNIESRGYTVSVFNRSREKTEEVIAENPGKKLVPYYTVKEFVESLETPRRILLMVKAGAGTDAAIDSLKPYLEKGDIIIDGGNTFFQDTIRRNRELSAEGFNFIGTGVSGGEEGALKGPSIMPGGQKDAYELVAPILTKIAAVAEDGEPCVTYIGADGAGHYVKMVHNGIEYGDMQLIAEAYSLLKGGLNLSNEELANTFTEWNNGELSSYLIDITKDIFTKKDEDGNYLVDVILDEAANKGTGKWTSQSALDLGEPLSLITESVFARYISSLKAQRVAASKVLSGPKAQPAGDKAEFIEKVRRALYLGKIVSYAQGFSQLRAASDEYHWDLNYGEIAKIFRAGCIIRAQFLQKITDAYAENADIANLLLAPYFKKIADEYQQALRDVVAYAVQNGIPVPTFSAAVAYYDSYRAAVLPANLIQAQRDYFGAHTYKRTDKEGIFHTEWLE'
my_protein.load_manual_sequence(ident=mutated_protein1_id, seq=mutated_protein1_seq)
# Input another mutated sequence and load it
mutated_protein2_id = 'Q4S_N17P_SNP'
mutated_protein2_seq = 'MSKSQIGVVGMAVMGRPLALNIESRGYTVSVFNRSREKTEEVIAENPGKKLVPYYTVKEFVESLETPRRILLMVKAGAGTDAAIDSLKPYLEKGDIIIDGGNTFFQDTIRRNRELSAEGFNFIGTGVSGGEEGALKGPSIMPGGQKDAYELVAPILTKIAAVAEDGEPCVTYIGADGAGHYVKMVHNGIEYGDMQLIAEAYSLLKGGLNLSNEELANTFTEWNNGELSSYLIDITKDIFTKKDEDGNYLVDVILDEAANKGTGKWTSQSALDLGEPLSLITESVFARYISSLKAQRVAASKVLSGPKAQPAGDKAEFIEKVRRALYLGKIVSYAQGFSQLRAASDEYHWDLNYGEIAKIFRAGCIIRAQFLQKITDAYAENADIANLLLAPYFKKIADEYQQALRDVVAYAVQNGIPVPTFSAAVAYYDSYRAAVLPANLIQAQRDYFGAHTYKRTDKEGIFHTEWLE'
my_protein.load_manual_sequence(ident=mutated_protein2_id, seq=mutated_protein2_seq)
# Conduct pairwise sequence alignments
my_protein.pairwise_align_sequences_to_representative()
# View IDs of all sequence alignments
[x.id for x in my_protein.sequence_alignments]
# View the stored information for one of the alignments
my_alignment = my_protein.sequence_alignments.get_by_id('WT_Q4S_N17P_SNP')
my_alignment.annotations
str(my_alignment[0].seq)
str(my_alignment[1].seq)
# Summarize all the mutations in all sequence alignments
s,f = my_protein.sequence_mutation_summary(alignment_type='seqalign')
print('Single mutations:')
s
print('---------------------')
print('Mutation fingerprints')
f
import ssbio.databases.uniprot
this_examples_uniprot = 'P14062'
sites = ssbio.databases.uniprot.uniprot_sites(this_examples_uniprot)
my_protein.representative_sequence.features = sites
my_protein.representative_sequence.features
# Returns a dictionary mapping sequence residue numbers to structure residue identifiers
# Will warn you if residues are not present in the structure
structure_sites = my_protein.map_seqprop_resnums_to_structprop_resnums(resnums=[1,3,45],
use_representatives=True)
structure_sites
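# Added sketch: the same mapping can be used to check whether the mutated
# positions from the SNP sequences above (Q4S -> residue 4, N17P -> residue 17)
# are resolved in the representative structure. Residue numbers are taken from
# the mutation IDs and are only illustrative here.
snp_sites = my_protein.map_seqprop_resnums_to_structprop_resnums(resnums=[4, 17],
                                                                 use_representatives=True)
snp_sites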
# View just the structure
view = my_protein.representative_structure.view_structure(recolor=True)
view
view.add_spacefill(selection='( :A ) and not hydrogen and 17', label_type='res', color='orange')
# Map the mutations on the visualization (scale increased) - will show up on the above view
my_protein.add_mutations_to_nglview(view=view, alignment_type='seqalign', scale_range=(4,7),
use_representatives=True)
# Add sites as shown above in the table to the view
my_protein.add_features_to_nglview(view=view, use_representatives=True)
import os.path as op
my_protein.save_json(op.join(my_protein.protein_dir, '{}.json'.format(my_protein.id)))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Trends in TOC
Step2: The data suggests that TOC increased rapidly from 1990 to around 2000, and then continued to increase more slowly.
| <ASSISTANT_TASK:>
Python Code:
# Select project
prj_grid = nivapy.da.select_resa_projects(eng)
prj_grid
# Select project
prj_df = prj_grid.get_selected_df()
prj_df
# Get stations for these projects
stn_df = nivapy.da.select_resa_project_stations(prj_df, eng)
print (len(stn_df), 'stations associated with the selected projects.')
stn_df.head()
# Map
nivapy.spatial.quickmap(stn_df,
popup='station_code',
aerial_imagery=True)
# Get available parameters
st_dt = '1970-01-01'
end_dt = '2019-01-01'
par_grid = nivapy.da.select_resa_station_parameters(stn_df,
st_dt,
end_dt,
eng)
par_grid
# Select relevant parameters
par_df = par_grid.get_selected_df()
par_df
# Get chem
wc_df, dup_df = nivapy.da.select_resa_water_chemistry(stn_df,
par_df,
st_dt,
end_dt,
eng,
drop_dups=True,
lod_flags=False)
wc_df.head()
# Save for later
wc_df.to_csv('icpw_norway_chem.csv', index=False, encoding='utf-8')
# Load saved data
wc_df = pd.read_csv('icpw_norway_chem.csv', encoding='utf-8')
wc_df['sample_date'] = pd.to_datetime(wc_df['sample_date'])
# Choose site
stn = 'BIE01'
# Resample to monthly for chosen site
df = wc_df.query('station_code == @stn')
df.index = df['sample_date']
df = df.resample('M').mean().reset_index()
df.head()
# Get TOC
toc_df = df[['sample_date', 'TOC_mg C/l']].copy()
toc_df.columns = ['ds', 'y']
toc_df.head()
# Fit model
m = Prophet()
m.fit(toc_df)
# Predict the next 10 years, with uncertainty
future = m.make_future_dataframe(periods=120, freq='M')
fcst = m.predict(future)
fig = m.plot(fcst)
a = add_changepoints_to_plot(fig.gca(), m, fcst)
#plt.savefig(stn + '_TOC.png', dpi=300)
fig2 = m.plot_components(fcst)
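# Added sketch (not part of the original workflow): roughly quantify the fitted
# TOC trend before and after 2000 from the Prophet trend component, restricted
# to the observed period. Assumes fcst and toc_df from the cells above.
trend = fcst.set_index('ds')['trend']
obs_end = toc_df['ds'].max()
pre = trend.loc[:'2000-01-01']
post = trend.loc['2000-01-01':obs_end]
def annual_slope(s):
    years = (s.index[-1] - s.index[0]).days / 365.25
    return (s.iloc[-1] - s.iloc[0]) / years
print('Approx. trend up to 2000: {:.3f} mg C/l per year'.format(annual_slope(pre)))
print('Approx. trend after 2000: {:.3f} mg C/l per year'.format(annual_slope(post)))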
# Get SO4
so4_df = df[['sample_date', 'SO4_mg/l']].copy()
so4_df.columns = ['ds', 'y']
so4_df.head()
# Fit model
so4_df['cap'] = so4_df['y'].max()
m = Prophet(growth='logistic')
m.fit(so4_df)
# Predict the next 10 years, with uncertainty
future = m.make_future_dataframe(periods=120, freq='M')
future['cap'] = so4_df['y'].max()
fcst = m.predict(future)
fig = m.plot(fcst)
a = add_changepoints_to_plot(fig.gca(), m, fcst)
plt.savefig(stn + '_SO4.png', dpi=300)
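# Added sketch: list the candidate changepoint dates Prophet placed for the SO4
# model, to compare against the changepoints highlighted in the plot above.
print(m.changepoints)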
<END_TASK> |