Dataset schema (one row per notebook converted to a script):

| Column | Type | Range / classes |
|---|---|---|
| path | string | length 8-399 |
| content_id | string | length 40 |
| detected_licenses | sequence | - |
| license_type | string | 2 classes |
| repo_name | string | length 6-109 |
| repo_url | string | length 25-128 |
| star_events_count | int64 | 0-52.9k |
| fork_events_count | int64 | 0-7.07k |
| gha_license_id | string | 9 classes |
| gha_event_created_at | timestamp[us] | - |
| gha_updated_at | timestamp[us] | - |
| gha_language | string | 28 classes |
| language | string | 1 class |
| is_generated | bool | 1 class |
| is_vendor | bool | 1 class |
| conversion_extension | string | 17 classes |
| size | int64 | 317-10.5M |
| script | string | length 245-9.7M |
| script_size | int64 | 245-9.7M |
**Record 1:** `/robert_johnson_unit1_project.ipynb` | content_id: f8e2e7d8c9d68f6a8996da501cd657dc2452b070 | license: no_license | repo: [rejohnsonjr/General-Assembly-Projects](https://github.com/rejohnsonjr/General-Assembly-Projects) | stars: 0 | forks: 0 | language: Jupyter Notebook | converted to: `.py` | size: 13,597

# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="http://imgur.com/1ZcRyrc.png" style="float: left; margin: 20px; height: 55px">
#
# # Project 1: Python Coding Exercises
#
# _Authors: Joseph Nelson (DC)_
#
# ---
# The following code challenges are drawn from common exercises used in technical interviews. This project is more about turning ideas into Python code than it is about developing algorithms, so we have provided "pseudocode" for the more challenging problems.
# ### Challenge 1: Largest Palindrome
# A palindromic number reads the same both ways. For example, 1234321 is a palindrome. The largest palindrome made from the product of two two-digit numbers is 9009 = 91 × 99. Find the largest palindrome made from the product of two three-digit numbers.
#
# Suggested algorithm:
#
# ```
# - Initialize a variable `result` to 0.
# - For each number A 100 through 999:
# - For each number B 100 through 999:
# - Multiply A and B.
# - Turn that product into a string (use `str` as a function).
# - Reverse the string (use `my_string[::-1]`).
# - If the string and its reverse are the same and the product is
# greater than `result`, set `result` to that new value.
# ```
# +
result = 0
product_list = []
palindrome_list = []
A_range = range(100, 1000) # first range 100 to 999
B_range = range(100, 1000) # second range 100 to 999
for a in A_range:
    for b in B_range:
        product = a*b # this cycles through the pairs and takes the product of the two numbers
        my_string = str(product) # this turns the product into a string
        product_list.append(my_string) # adds the string to the product list
for my_string in product_list:
    if my_string == my_string[::-1]: # this checks if the product is a palindrome
        palindrome = my_string # if the product is a palindrome it is labelled as palindrome
        palindrome_list.append(palindrome) # palindrome is added to the palindrome list
for palindrome in palindrome_list:
    if int(palindrome) > int(result): # this compares palindromes to decide which one is largest
        result = palindrome # this assigns the largest number to result
print(result) # the result is the largest palindrome
# -
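# An equivalent, more compact version that follows the suggested algorithm directly is shown below (an illustrative alternative, not part of the original submission):

# +
largest = 0
for a in range(100, 1000):
    for b in range(a, 1000): # start at a so each pair is only checked once
        product = a * b
        s = str(product)
        if s == s[::-1] and product > largest: # palindrome test and running maximum
            largest = product
print(largest) # 906609 = 913 * 993
# -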
# ### Challenge 2: Summation of Primes
# The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17. Find the sum of all the primes below 2,000.
#
# Suggested algorithm:
#
# ```
# - Initialize an empty list of primes.
#
# - For every number A from 2 to 2,000:
# - Set a variable `is_prime` to true.
# - For every number B in our list of primes:
# - If A divided by B gives no remainder, set `is_prime` to false.
# - Optional: Use the command `break` to end the loop over primes at this point.
# - If `is_prime` is still true, append A to our list of primes.
#
# - Add up the primes.
# ```
# +
list_of_primes = [] # where primes are stored
for A in range(2,2000): # candidate numbers from 2 to 1999
    is_prime = True # assume A is prime until a divisor is found
    for B in list_of_primes: # test A against the primes found so far
        if A%B == 0: # a zero remainder of A/B means B divides A
            is_prime = False # so A is not prime
            break # no need to test further divisors
    if is_prime == True:
        list_of_primes.append(A) # add the prime to the list of primes
prime_total = sum(list_of_primes) # sums the list of primes
print(prime_total) # final answer
# -
# ### Challenge 3: Multiples of 3 and 5
# If we list all of the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6, and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 and 5 below 1,000.
# +
natural_three = range(0,1000,3) # multiples of 3 below 1000
natural_five = range(0,1000,5) # multiples of 5 below 1000
natural_fifteen = range(0,1000,15) # multiples of 15 appear in both ranges above
sum_nat_five = sum(natural_five) # sum of multiples of 5 below 1000
sum_nat_three = sum(natural_three) # sum of multiples of 3 below 1000
challenge_three_ans = sum_nat_three + sum_nat_five - sum(natural_fifteen) # subtract the multiples of 15 so they are not counted twice
print(challenge_three_ans) # final answer: 233168
# -
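# The corrected answer can be verified with a single comprehension (an illustrative check, not part of the original submission):

sum(i for i in range(1000) if i % 3 == 0 or i % 5 == 0) # 233168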
# ### Challenge 4: String Compressor
# Write a function to perform basic string compression using the counts of repeated characters. (This is called run-length encoding.) For example, the string "aabcccccaaa" would become a2b1c5a3. If the “compressed” string would not become smaller than the original string, your method should return the original string. You can assume the string has only uppercase and lowercase letters (a–z). Specify whether your solution is case sensitive or case insensitive and what you would need to change to make it the other.
#
# Suggested algorithm:
#
# ```
# - Initialize an empty string to hold the compressed version of the input string
# - Initialize an empty string to hold the character we are currently tallying up
# - Initialize a counter variable to 0
#
# - For each letter in the input string:
# - If that letter matches the letter we are counting, increment the counter by 1
# - Otherwise:
# - If the counter value is greater than 0, add the letter we have been counting
# and its count to our compressed string
# - Update the letter we are counting
# - Set the counter to 1
#
# - Append the last letter we were counting and its count to our compressed string
# ```
#
# **Suggestion:** Test your function on a few sample inputs. Try to come up with "edge cases" that might make it fail (e.g. empty strings, strings with all the same character, strings containing non-alphabetic characters, etc.)
# +
rest_letters = '' # string for non-compressed letters
compressed_string = '' # string for counted letters
tallying_string = '' # string for total letters
count_var_a = 0 # count for first letter
count_var_o = 0 # count for second letter
count_var_c = 0 # count for third letter
your_statement = input("Enter your statement of upper and lowercase letters only here: ").strip().lower() # read the input string, then strip spaces and lowercase it (this solution is case insensitive)
for i in your_statement:
    if i == 'a': # looks for a (the input is already lowercased)
        compressed_string = compressed_string + i # adds a to compressed string
        count_var_a = count_var_a + 1 # increments the a counter
        tallying_string = tallying_string + i # adds a to tallying string
    elif i == 'o': # looks for o
        compressed_string = compressed_string + i # adds o to compressed string
        count_var_o = count_var_o + 1 # increments the o counter
        tallying_string = tallying_string + i # adds o to tallying string
    elif i == 'c': # looks for c
        compressed_string = compressed_string + i # adds c to compressed string
        count_var_c = count_var_c + 1 # increments the c counter
        tallying_string = tallying_string + i # adds c to tallying string
    else:
        rest_letters = rest_letters + i # adds other letters to the non-compressed string
string_out = f"a{count_var_a} o{count_var_o} c{count_var_c} {rest_letters}" # final compressed string
if len(your_statement.strip()) == 0: # the input string may not be empty
    print("Please enter at least one letter or number as your input")
elif len(string_out.strip()) < len(your_statement.strip()): # if the compressed string is shorter, print it
    print(string_out.strip())
else: # otherwise print the original input, as the problem statement requires
    print(your_statement.strip())
# -
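# The cell above only tallies the letters a, o, and c. A general run-length encoder that follows the suggested algorithm for any input is sketched below (this version is case sensitive; lowercase the input first to make it case insensitive). It is an illustrative addition, not part of the original submission:

# +
def compress(text):
    compressed = '' # holds the compressed version of the input string
    current = '' # the character we are currently tallying
    count = 0 # how many times in a row we have seen it
    for letter in text:
        if letter == current:
            count += 1 # same letter: extend the current run
        else:
            if count > 0:
                compressed += current + str(count) # close out the previous run
            current = letter # start tallying the new letter
            count = 1
    if count > 0:
        compressed += current + str(count) # append the final run
    return compressed if len(compressed) < len(text) else text

print(compress("aabcccccaaa")) # a2b1c5a3
# -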
# ### Challenge 5: FizzBuzz
# Write a program that prints all of the numbers from 1 to 100. For multiples of 3, instead of the number, print "Fizz;" for multiples of 5, print "Buzz." For numbers that are multiples of both 3 and 5, print "FizzBuzz."
# +
fizz_buzz_list = [] # empty list
whole_range = range(1, 101) # numbers 1 to 100
for i in whole_range:
    if i%3 == 0 and i%5 == 0: # is the number a multiple of both 3 and 5
        i = "FizzBuzz" # return FizzBuzz
        fizz_buzz_list.append(i) # add FizzBuzz to list
    elif i%5 == 0: # is the number a multiple of 5
        i = 'Buzz' # return Buzz
        fizz_buzz_list.append(i) # add Buzz to list
    elif i%3 == 0: # is the number a multiple of 3
        i = 'Fizz' # return Fizz
        fizz_buzz_list.append(i) # add Fizz to list
    else:
        fizz_buzz_list.append(i) # add the number itself to the list
print(fizz_buzz_list) # final list
# -
**Record 2:** `/MLudemy/AdvancedSupervision.ipynb` | content_id: ddd47391e8bd9340945eab8b4b971c369684ab0f | license: no_license | repo: [phaneendhra-ch/Machine_Learning_ZTM](https://github.com/phaneendhra-ch/Machine_Learning_ZTM) | stars: 0 | forks: 1 | language: Jupyter Notebook | converted to: `.py` | size: 90,392

# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="XLXIojCUxnYp"
# # Trying the Qiskit Tutorial
# Ref. -> [Tutorial](https://qiskit.org/documentation/intro_tutorial1.html)
# ---
# + [markdown] id="9nvQ2lneGqdR"
# ## install qiskit to Colaboratory
# ---
# + id="BHlArx43v1Oc"
# Ready to use Qiskit (for Google Colab)
# !pip install qiskit pylatexenc
# + [markdown] id="HoiPoZPiGmvG"
# ## import package
# ---
# + id="GKVGKhMdwp7k"
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit.providers.aer import QasmSimulator
from qiskit.visualization import plot_histogram
# + [markdown] id="qfEgipwsHPJG"
# ## initialize variables
# ---
# define two qubits and two classical bits
#
# Qubits: $|00\rangle$
#
# + id="JUVWPEAMHOgB"
circuit = QuantumCircuit(2, 2)
# + [markdown] id="OdJ8Ab6tIWly"
# ## add gates
# ---
# make a Bell circuit.
# $B(|00\rangle) = (|00\rangle + |11\rangle)\frac{1}{\sqrt{2}}$
#
# 1. add a Hadamard gate to q0.
# $(H|0\rangle)\,|0\rangle = \displaystyle \left((|0\rangle + |1\rangle)\frac{1}{\sqrt{2}}\right)|0\rangle$
# 2. add a CNOT gate on q0, q1.
# $\mathrm{CNOT}\left((H|0\rangle)\,|0\rangle\right)$
# $\displaystyle = \mathrm{CNOT}\left(\left((|0\rangle + |1\rangle)\frac{1}{\sqrt{2}}\right)|0\rangle\right)$
# $\displaystyle = (|00\rangle + |11\rangle)\frac{1}{\sqrt{2}}$
#
# 3. measure the qubits into the classical bits.
# + id="4MkpFD0kIYxK"
circuit.h(0)
circuit.cx(0, 1)
circuit.measure([0, 1], [0, 1])
# + [markdown] id="Uns-H6tEI_qv"
# ## visualize the circuit
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 185} id="TGRfmh0fJCoX" outputId="1311b5fd-a8a2-42e6-bb89-678783bff7d1"
# change output for mpl (need pylatexenc)
circuit.draw(output='mpl')
# + [markdown] id="F9M_UbZFNkwg"
# ## simulation
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="kJPyhRigNmZQ" outputId="4ddc6547-225c-4015-ec18-13f69c079182"
simulator = QasmSimulator()
compiled_circuit = transpile(circuit, simulator)
job = simulator.run(compiled_circuit, shots=1000)
result = job.result()
counts = result.get_counts(circuit)
print(f"Total count for 00 and 11: {counts}")
# + [markdown] id="coHNHWE2OhIR"
# ## visualize result
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 332} id="JBMiebP5OnAy" outputId="2a1139d5-547f-46ed-f94f-09e3c64141a4"
plot_histogram(counts)
])
from sklearn import metrics
import matplotlib.pyplot as plt # needed below; sns.plt was removed from seaborn long ago
metrics.r2_score(test['traveltime'],preds)
sns.distplot(test['traveltime'])
sns.distplot(preds)
plt.show()
# # Ok - so we **can** get 0.2 on just linear regression
from sklearn.neural_network import MLPRegressor as NN
rgr = NN(activation='relu',learning_rate_init=0.00001,batch_size=100, solver='adam',\
hidden_layer_sizes=(80,80,80,80)).fit(train[features],train['traveltime'])
rgr
features = ['hour_7',
'hour_8',
'hour_9',
'hour_10',
'hour_11',
'hour_12',
'hour_13',
'hour_14',
'hour_15',
'hour_16',
'hour_17',
'hour_18',
'hour_19',
'hour_20',
'hour_21',
'hour_22',
'hour_23',
'day_0',
'day_1',
'day_2',
'day_3',
'day_4',
'day_5',
'day_6',
'month_1',
'month_2',
'month_3',
'month_4',
'month_5',
'month_6']
preds=rgr.predict(test[features])
print(metrics.r2_score(test['traveltime'],preds))
print(((abs(test['traveltime']-preds)/test['traveltime'])*100).mean())
# +
# actually - this is kind of ok!
# -
sns.distplot(preds,color='green')
sns.distplot(test['traveltime'],color='red')
plt.show() # distplot returns a matplotlib Axes, so the figure is shown via pyplot
rgr = NN(activation='relu',learning_rate_init=0.00001,batch_size=100, solver='adam',\
hidden_layer_sizes=(80,80,80,80,80)).fit(train[features],train['traveltime'])
preds=rgr.predict(test[features])
print(metrics.r2_score(test['traveltime'],preds))
print(((abs(test['traveltime']-preds)/test['traveltime'])*100).mean())
rgr = NN(activation='relu',learning_rate_init=0.00001,batch_size=100, solver='adam',\
hidden_layer_sizes=(80,80,80,80,80),\
max_iter=400).fit(train[features],train['traveltime'])
preds=rgr.predict(test[features])
print(metrics.r2_score(test['traveltime'],preds))
print(((abs(test['traveltime']-preds)/test['traveltime'])*100).mean())
# +
# so doubling the number of iteration, only upped our r2_score by 0.1... And we still haven't matched
# the model that just predicts the average travel time
# -
rgr = NN(activation='relu',learning_rate_init=0.00001,batch_size=100, solver='adam',\
hidden_layer_sizes=(80,80,80,80,80,80),\
max_iter=400).fit(train[features],train['traveltime'])
preds=rgr.predict(test[features])
print(metrics.r2_score(test['traveltime'],preds))
print(((abs(test['traveltime']-preds)/test['traveltime'])*100).mean())
# +
#worse with the extra layer!
# -
rgr = NN(activation='relu',learning_rate_init=0.00001,batch_size=100, solver='adam',\
hidden_layer_sizes=(80,80,80,80,80),\
max_iter=500).fit(train[features+['rain','temp','dewpt']],train['traveltime'])
preds=rgr.predict(test[features+['rain','temp','dewpt']])
print(metrics.r2_score(test['traveltime'],preds))
print(((abs(test['traveltime']-preds)/test['traveltime'])*100).mean())
preds=rgr.predict(test[features+['rain']])
print(metrics.r2_score(test['traveltime'],preds))
print(((abs(test['traveltime']-preds)/test['traveltime'])*100).mean())
# +
# these 6 or so cells aren't representative of the 2 hours I waited for various models to build.
# it seems quite difficult to push past the 0.2 r2 score mark. And that is that. It's the dismal science.
# -
# # simple forest model - beat this etc
from sklearn.ensemble import RandomForestRegressor as rf
test['traveltime'].mean()
rgr = rf().fit(train[features],train['traveltime'])
preds=rgr.predict(test[features])
print(metrics.r2_score(test['traveltime'],preds))
print(((abs(test['traveltime']-preds)/test['traveltime'])*100).mean())
import seaborn as sns
sns.distplot(test['traveltime'])
sns.distplot(preds)
plt.show() # seaborn has no show() function; use pyplot's
import seaborn as sns
sns.distplot(preds)
plt.show() # the Axes returned by distplot has no show() method either
len(train)
features3=['actualtime_arr_from','rain','temp','hour_5',
'hour_6',
'hour_7',
'hour_8',
'hour_9',
'hour_10',
'hour_11',
'hour_12',
'hour_13',
'hour_14',
'hour_15',
'hour_16',
'hour_17',
'hour_18',
'hour_19',
'hour_20',
'hour_21',
'hour_22',
'hour_23',
'day_0',
'day_1',
'day_2',
'day_3',
'day_4',
'day_5',
'day_6']
print('hello')
df = stop_tools.random_stop_data()
to_concat = []
for day in df['day'].unique():
    tf = df[df['day']==day]
    for hour in tf['hour'].unique():
        hf = tf[tf['hour']==hour].copy() # renamed from rf, which shadowed the RandomForestRegressor alias imported above
        hf['mean'] = hf['traveltime'].mean()
        to_concat.append(hf)
df = pd.concat(to_concat,axis=0)
preds = df['mean']; reals = df['traveltime']
metrics.r2_score(reals,preds)
((abs(reals - preds)/reals)*100).mean()
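# +
# An equivalent, illustrative one-liner for the per-(day, hour) baseline above, using
# groupby/transform instead of explicit loops (recomputes the same 'mean' column):
df['mean'] = df.groupby(['day', 'hour'])['traveltime'].transform('mean')
# -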
X_train, y_train = X[:train_split], y[:train_split]
X_valid, y_valid = X[train_split:valid_split], y[train_split:valid_split]
X_test, y_test = X[valid_split:], y[valid_split:]
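# +
# The helper `evaluate_preds` used below was defined in an earlier cell that is cut off
# above; a plausible sketch of such a helper (the metric choices here are an assumption):
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate_preds(y_true, y_preds):
    """Return common classification metrics as a dict."""
    return {"accuracy": round(accuracy_score(y_true, y_preds), 2),
            "precision": round(precision_score(y_true, y_preds), 2),
            "recall": round(recall_score(y_true, y_preds), 2),
            "f1": round(f1_score(y_true, y_preds), 2)}
# -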
clf = RandomForestClassifier()
clf.fit(X_train, y_train)
# Make predictions
y_preds = clf.predict(X_valid)
# Evaluate the classifier
baseline_metrics = evaluate_preds(y_valid, y_preds)
baseline_metrics
# +
np.random.seed(42)
# Create a second classifier
clf_2 = RandomForestClassifier(n_estimators=100)
clf_2.fit(X_train, y_train)
# Make predictions
y_preds_2 = clf_2.predict(X_valid)
# Evaluate the 2nd classifier
clf_2_metrics = evaluate_preds(y_valid, y_preds_2)
# -
# **HyperParameter Tuning using RandomizedSearchCV**
grid = {"n_estimators": [10, 100, 200, 500, 1000, 1200],
"max_depth": [None, 5, 10, 20, 30],
"max_features": ["auto", "sqrt"],
"min_samples_split": [2, 4, 6],
"min_samples_leaf": [1, 2, 4]}
# +
from sklearn.model_selection import RandomizedSearchCV, train_test_split
np.random.seed(42)
# Split into X & y
X = heart_disease_shuffled.drop("target", axis=1)
y = heart_disease_shuffled["target"]
# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Set n_jobs to -1 to use all cores (NOTE: n_jobs=-1 is broken as of 8 Dec 2019, using n_jobs=1 works)
clf = RandomForestClassifier(n_jobs=1)
# Setup RandomizedSearchCV
rs_clf = RandomizedSearchCV(estimator=clf,
param_distributions=grid,
n_iter=20, # try 20 models total
cv=5, # 5-fold cross-validation
verbose=2) # print out results
# Fit the RandomizedSearchCV version of clf
rs_clf.fit(X_train, y_train);
# -
rs_clf.best_params_
# +
rs_y_preds = rs_clf.predict(X_test)
# Evaluate the predictions
rs_metrics = evaluate_preds(y_test, rs_y_preds)
# -
**Record 3:** `/Feature Extraction/Audio_Feature_Extraction.ipynb` | content_id: 6af56d9437db59ae8d7bdb614d8f293eab0c2f5d | license: no_license | repo: [pushpanshu0501/Speech-Emotion-Recognition--1](https://github.com/pushpanshu0501/Speech-Emotion-Recognition--1) | stars: 3 | forks: 5 | GHA created: 2021-07-25T19:03:22 | GHA updated: 2021-07-23T08:58:51 | language: Jupyter Notebook | converted to: `.py` | size: 1,320,163

# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pipeline
#
# Data Collection
# |
# |
# CountVectorizer
# |
# |
# TfidfTransformer
# |
# |
# SGDClassifier and MultinomialNB
# |
# |
# GridSearchCV
# |
# |
# Model Performance
#
# ## 1. Data Collection
from sklearn.datasets import fetch_20newsgroups
twenty_train = fetch_20newsgroups(subset = 'train', shuffle = True)
twenty_train.target_names #prints all the categories
print("\n".join(twenty_train.data[0].split("\n")[:3])) #prints first line of the first data file
# ## 2. Data Preprocessing
# ### 2.1 CountVectorizer
#
#
# Convert a collection of text documents to a matrix of token counts.
# +
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)
X_train_counts.shape
# -
count_vect.get_feature_names()
iter(X_train_counts)
print(X_train_counts.toarray())
# ### 2.2 TfidfTransformer
#
# Transform a count matrix to a normalized tf or tf-idf representation
# +
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
# -
# ## 3. Text Classification
#
# ### 3.1 MultinomialNB
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB().fit(X_train_tfidf, twenty_train.target)
# +
from sklearn.pipeline import Pipeline
text_clf = Pipeline([('vect',CountVectorizer()),
('tfidf',TfidfTransformer()),
('clf',MultinomialNB())])
text_clf = text_clf.fit(twenty_train.data, twenty_train.target)
# +
import numpy as np
twenty_test = fetch_20newsgroups(subset = 'test', shuffle = True)
predicted = text_clf.predict(twenty_test.data)
np.mean(predicted == twenty_test.target)
# -
# ### 3.2 SGDClassifier
# +
from sklearn.linear_model import SGDClassifier
text_clf_svm = Pipeline([('vect', CountVectorizer()),('tfidf',TfidfTransformer()),
('clf-svm',SGDClassifier(loss = 'hinge',
penalty = 'l2',
alpha = 1e-3,max_iter = 5,
random_state = 42))])
text_clf_svm = text_clf_svm.fit(twenty_train.data, twenty_train.target)
predicted_svm = text_clf_svm.predict(twenty_test.data)
np.mean(predicted_svm == twenty_test.target)
# -
# ## 4. GridSearch
#
# Find the best parameters
# ### 4.1 GridSearch for MultinomialNB
# +
# Grid Search
# Here, we are creating a list of parameters for which we would like to do performance tuning.
# All the parameters name start with the classifier name (remember the arbitrary name we gave).
# E.g. vect__ngram_range; here we are telling to use unigram and bigrams and choose the one which is optimal.
from sklearn.model_selection import GridSearchCV
parameters = {'vect__ngram_range':[(1,1),(1,2)],
'tfidf__use_idf':(True,False),
'clf__alpha':(1e-2,1e-3)}
# +
gs_clf = GridSearchCV(text_clf, parameters, n_jobs = -1)
gs_clf = gs_clf.fit(twenty_train.data, twenty_train.target)
# -
print(gs_clf.best_score_)
print(gs_clf.best_params_)
# Output for above should be: The accuracy has now increased to ~90.6% for the NB classifier (not so naive anymore! 😄)
# and the corresponding parameters are {‘clf__alpha’: 0.01, ‘tfidf__use_idf’: True, ‘vect__ngram_range’: (1, 2)}.
# ### 4.2 GridSearch for SGDClassifier
# +
from sklearn.model_selection import GridSearchCV
parameters_svm = {'vect__ngram_range':[(1,1),(1,2)],
'tfidf__use_idf':(True,False),
'clf-svm__alpha':(1e-2,1e-3)}
gs_clf_svm = GridSearchCV(text_clf_svm, parameters_svm, n_jobs = -1)
gs_clf_svm = gs_clf_svm.fit(twenty_train.data, twenty_train.target)
print(gs_clf_svm.best_score_)
print(gs_clf_svm.best_params_)
# +
from sklearn.pipeline import Pipeline
text_clf = Pipeline([('vect',CountVectorizer(stop_words = 'english')),
('tfidf', TfidfTransformer()),
('clf',MultinomialNB())])
# +
# Try NLTK
import nltk
nltk.download()
# Removing stop words
from nltk.stem.snowball import SnowballStemmer
stemmer = SnowballStemmer("english", ignore_stopwords = True)
class StemmedCountVectorizer(CountVectorizer):
    def build_analyzer(self):
        analyzer = super(StemmedCountVectorizer, self).build_analyzer()
        return lambda doc: ([stemmer.stem(w) for w in analyzer(doc)])
stemmed_count_vect = StemmedCountVectorizer(stop_words = 'english')
text_mnb_stemmed = Pipeline([('vect', stemmed_count_vect),
('tfidf',TfidfTransformer()),
('mnb',MultinomialNB(fit_prior = False))])
text_mnb_stemmed = text_mnb_stemmed.fit(twenty_train.data, twenty_train.target)
predicted_mnb_stemmed = text_mnb_stemmed.predict(twenty_test.data)
np.mean(predicted_mnb_stemmed == twenty_test.target)
# -
# ## 5 Model Performance
# ### 5.1 Cross Validation
# +
from sklearn.model_selection import cross_val_score
cross_val_score(gs_clf,twenty_train.data, twenty_train.target, cv=3, scoring = "accuracy" )
# -
# All accuracy larger than 90%.
# ### 5.2 Confusion Matrix
# +
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
y_train_pred = cross_val_predict(gs_clf,twenty_train.data, twenty_train.target, cv=3)
confusion_matrix(twenty_train.target, y_train_pred)
# -
# * row represents the actual category
# * col represents the predicted category
#
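# To read the matrix at a glance, it helps to normalize each row so the diagonal shows per-class recall (an illustrative addition, not from the original notebook):

# +
cm = confusion_matrix(twenty_train.target, y_train_pred)
cm_normalized = cm / cm.sum(axis=1, keepdims=True) # each row now sums to 1
print(cm_normalized.round(2))
# -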
# ## ***MFCC (Mel-Frequency Cepstral Coefficients)***
# *- In sound processing, it is a representation of the short term power spectrum of a sound based on a linear cosine transform of a log power spectrum on a non-linear mel scale of frequency.*
#
# *It provides us enough frequency channels to analyze the audio.*
# + colab={"base_uri": "https://localhost:8080/", "height": 349} id="yAz879O1duuK" outputId="eb33f92f-7bae-44b7-916d-e27ffbaeb4ac"
plt.figure(figsize=(20,5))
x, sr = librosa.load(audio_path)
librosa.display.waveplot(x, sr=sr)
# + colab={"base_uri": "https://localhost:8080/", "height": 369} id="-x-XAx8teCBx" outputId="dfd8a5d4-c065-4de0-9ee7-31a61f2ae756"
# MFCC
plt.figure(figsize=(20,5))
mfccs = librosa.feature.mfcc(x, sr=sr)
print(mfccs.shape)
librosa.display.specshow(mfccs, sr=sr, x_axis='time')
# + colab={"base_uri": "https://localhost:8080/", "height": 351} id="9SqnpA-WeetC" outputId="aab7c41a-088b-4a58-8e46-ad4cea03af44"
plt.figure(figsize=(20,5))
librosa.display.specshow(mfccs, sr=sr, x_axis='log')
# + [markdown] id="DbJSx92JfkCn"
# ## ***Feature Scaling***
# *- Scaling the MFCCs such that each coefficient dimension has zero mean and unit variance.*
# + colab={"base_uri": "https://localhost:8080/"} id="7suHMyzyf0aM" outputId="f79e46db-b183-42ee-ea9d-d2e636ba9f0a"
mfccs = sklearn.preprocessing.scale(mfccs, axis=1)
print(mfccs.mean(axis=1))
print(mfccs.var(axis=1))
# + colab={"base_uri": "https://localhost:8080/", "height": 515} id="iTHqQs6QgFM0" outputId="0d24eb21-a5f9-477e-f906-367fe6607d42"
plt.figure(figsize=(20,8))
librosa.display.specshow(mfccs, sr=sr, x_axis='time')
# + colab={"base_uri": "https://localhost:8080/", "height": 515} id="-TK8UAITgSRy" outputId="4b544e0e-d232-4fe6-bc27-ee93edd3d603"
plt.figure(figsize=(20,8))
librosa.display.specshow(mfccs, sr=sr, x_axis='log')
# + [markdown] id="3ppalssajln1"
# ## ***Chroma Frequencies***
# *- Chroma features are an interesting and powerful representation for music audio in which the entire spectrum is projected onto 12 bins representing the 12 distinct semitones (or chroma) of the musical octave.*
# + colab={"base_uri": "https://localhost:8080/", "height": 75} id="4SH9UTAWj93-" outputId="46808a54-5810-417a-fbbd-ba2e04145095"
# Loading the file
x, sr = librosa.load(audio_path)
ipd.Audio(x, rate=sr)
# + colab={"base_uri": "https://localhost:8080/", "height": 369} id="2KzCTJMlkITg" outputId="313cec0e-14d5-4b46-8e1f-5314f831f2f4"
hop_length = 512
chromagram = librosa.feature.chroma_stft(x, sr=sr, hop_length=hop_length)
plt.figure(figsize=(15,5))
librosa.display.specshow(chromagram, x_axis='time', y_axis='chroma', hop_length=hop_length, cmap='coolwarm')
plt.colorbar()
plt.tight_layout()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 369} id="I1jsW6tbkzGU" outputId="a066f3d3-3ff9-4da0-9362-4afd181f3b3b"
# Using chroma energy distribution normalized statistics (CENS),
# typically used to identify similarity between different interpretations of the music given.
hop_length = 512
chromagram = librosa.feature.chroma_cens(x, sr=sr, hop_length=hop_length)
plt.figure(figsize=(15,5))
librosa.display.specshow(chromagram, x_axis='time', y_axis='chroma', hop_length=hop_length, cmap='coolwarm')
plt.colorbar()
plt.tight_layout()
plt.show()
**Record 4:** `/week3/week3_task2_fine_tuning_clean.ipynb` | content_id: 86c258591c0e5601cec01d6ba348f1c1eac13fac | license: no_license | repo: [DieMyst/intro-to-dl](https://github.com/DieMyst/intro-to-dl) | stars: 0 | forks: 0 | language: Jupyter Notebook | converted to: `.py` | size: 19,162

# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <div class="alert alert-block alert-info" style="margin-top: 20px">
# <a href="https://cocl.us/topNotebooksPython101Coursera">
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png" width="750" align="center">
# </a>
# </div>
# <a href="https://cognitiveclass.ai/">
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png" width="200" align="center">
# </a>
# <h1>Reading Files Python</h1>
# <p><strong>Welcome!</strong> This notebook will teach you about reading the text file in the Python Programming Language. By the end of this lab, you'll know how to read text files.</p>
# <h2>Table of Contents</h2>
# <div class="alert alert-block alert-info" style="margin-top: 20px">
# <ul>
# <li><a href="download">Download Data</a></li>
# <li><a href="read">Reading Text Files</a></li>
# <li><a href="better">A Better Way to Open a File</a></li>
# </ul>
# <p>
# Estimated time needed: <strong>40 min</strong>
# </p>
# </div>
#
# <hr>
# <h2 id="download">Download Data</h2>
# +
# Download Example file
# !wget -O ./Example1.txt https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/labs/example1.txt
# -
# <hr>
# <h2 id="read">Reading Text Files</h2>
# One way to read or write a file in Python is to use the built-in <code>open</code> function. The <code>open</code> function provides a <b>File object</b> that contains the methods and attributes you need in order to read, save, and manipulate the file. In this notebook, we will only cover <b>.txt</b> files. The first parameter you need is the file path and the file name. An example is shown as follow:
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/ReadOpen.png" width="500" />
# The mode argument is optional and the default value is <b>r</b>. In this notebook we only cover two modes:
# <ul>
# <li><b>r</b> Read mode for reading files </li>
# <li><b>w</b> Write mode for writing files</li>
# </ul>
# For the next example, we will use the text file <b>Example1.txt</b>. The file is shown as follow:
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/ReadFile.png" width="200" />
# We read the file:
# +
# Read the Example1.txt
example1 = "./Example1.txt"
file1 = open(example1, "r")
# -
# We can view the attributes of the file.
# The name of the file:
# +
# Print the path of file
file1.name
# -
# The mode the file object is in:
# +
# Print the mode of file, either 'r' or 'w'
file1.mode
# -
# We can read the file and assign it to a variable :
# +
# Read the file
FileContent = file1.read()
FileContent
# -
# The <b>\n</b> means that there is a new line.
# We can print the file:
# +
# Print the file with '\n' as a new line
print(FileContent)
# -
# The file is of type string:
# +
# Type of file content
type(FileContent)
# -
# We must close the file object:
# +
# Close file after finish
file1.close()
# -
# <hr>
# <h2 id="better">A Better Way to Open a File</h2>
# Using the <code>with</code> statement is better practice, it automatically closes the file even if the code encounters an exception. The code will run everything in the indent block then close the file object.
# +
# Open file using with
with open(example1, "r") as file1:
    FileContent = file1.read()
    print(FileContent)
# -
# The file object is closed, you can verify it by running the following cell:
# +
# Verify if the file is closed
file1.closed
# -
# We can see the info in the file:
# +
# See the content of file
print(FileContent)
# -
# The syntax is a little confusing as the file object is after the <code>as</code> statement. We also don’t explicitly close the file. Therefore we summarize the steps in a figure:
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/ReadWith.png" width="500" />
# We don’t have to read the entire file, for example, we can read the first 4 characters by entering four as a parameter to the method **.read()**:
#
# +
# Read first four characters
with open(example1, "r") as file1:
    print(file1.read(4))
# -
# Once the method <code>.read(4)</code> is called the first 4 characters are called. If we call the method again, the next 4 characters are called. The output for the following cell will demonstrate the process for different inputs to the method <code>read()</code>:
# +
# Read certain amount of characters
with open(example1, "r") as file1:
    print(file1.read(4))
    print(file1.read(4))
    print(file1.read(7))
    print(file1.read(15))
# -
# The process is illustrated in the below figure, and each color represents the part of the file read after the method <code>read()</code> is called:
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/ReadChar.png" width="500" />
# Here is an example using the same file, but instead we read 16, 5, and then 9 characters at a time:
# +
# Read certain amount of characters
with open(example1, "r") as file1:
    print(file1.read(16))
    print(file1.read(5))
    print(file1.read(9))
# -
# We can also read one line of the file at a time using the method <code>readline()</code>:
# +
# Read one line
with open(example1, "r") as file1:
    print("first line: " + file1.readline())
# -
# We can use a loop to iterate through each line:
#
# +
# Iterate through the lines
with open(example1,"r") as file1:
    i = 0
    for line in file1:
        print("Iteration", str(i), ": ", line)
        i = i + 1
# -
# We can use the method <code>readlines()</code> to save the text file to a list:
# + jupyter={"outputs_hidden": true}
# Read all lines and save as a list
with open(example1, "r") as file1:
    FileasList = file1.readlines()
# -
# Each element of the list corresponds to a line of text:
# +
# Print the first line
FileasList[0]
# +
# Print the second line
FileasList[1]
# +
# Print the third line
FileasList[2]
# -
# <hr>
# <h2>The last exercise!</h2>
# <p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work.
# <hr>
# <div class="alert alert-block alert-info" style="margin-top: 20px">
# <h2>Get IBM Watson Studio free of charge!</h2>
# <p><a href="https://cocl.us/bottemNotebooksPython101Coursera"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png" width="750" align="center"></a></p>
# </div>
# <h3>About the Authors:</h3>
# <p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>
# Other contributors: <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
# <hr>
# <p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
# fix deep layers (fine-tuning only last 50)
for layer in model.layers[:-50]:
    layer.trainable = False
# -
# compile new model
model.compile(
loss='categorical_crossentropy', # we train 102-way classification
optimizer=keras.optimizers.adamax(lr=1e-2), # we can take big lr here because we fixed first layers
metrics=['accuracy'] # report accuracy during training
)
# fine tune for 2 epochs
model.fit_generator(
train_generator(tr_files, tr_labels),
steps_per_epoch=len(tr_files) // BATCH_SIZE,
epochs=2,
validation_data=train_generator(te_files, te_labels),
validation_steps=len(te_files) // BATCH_SIZE // 2,
callbacks=[keras_utils.TqdmProgressCallback()],
verbose=0
)
## GRADED PART, DO NOT CHANGE!
# Accuracy on validation set
test_accuracy = model.evaluate_generator(
train_generator(te_files, te_labels),
len(te_files) // BATCH_SIZE // 2
)[1]
grader.set_answer("wuwwC", test_accuracy)
print(test_accuracy)
# you can make submission with answers so far to check yourself at this stage
grader.submit(COURSERA_EMAIL, COURSERA_TOKEN)
# That's it! Congratulations!
#
# What you've done:
# - prepared images for the model
# - implemented your own batch generator
# - fine-tuned the pre-trained model
**Record 5:** `/0. Preparations/0. Preparations.ipynb` | content_id: 3e6796b1a6f13da5cc5aa5d787113e098da4de56 | license: no_license | repo: [mrayy/PracticalML](https://github.com/mrayy/PracticalML) | stars: 5 | forks: 2 | language: Jupyter Notebook | converted to: `.py` | size: 4,151

# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Session 0: PML Tools Overview
#
# This manual contains an overview of the tools we will use across PML course.
#
# ## Github:
# GitHub (https://github.com/) is an open-source platform for sharing code, data, and almost any file, and it is commonly used by developers to maintain their code. I will mainly use this platform to upload materials related to the PML course, and in order to make sure all students are able to use it, this section will cover the basics of GitHub.
#
# First, to use GitHub you don't need an account unless you want to start sharing your own materials or modify others' materials. It is highly recommended to use it with your team members to manage your projects and keep track of who did what.
#
# The materials of this class will always be uploaded to the following repository:
# https://github.com/mrayy/PracticalML
#
# I highly recommend you hit the "Watch" button in the upper right corner of the page in order to get notifications when new changes or new materials are added to this repository (you will need to create an account in this case).
#
# For managing your repository on your PC, you can use git tools; the most recommended is GitHub Desktop (https://desktop.github.com/), which is highly compatible with GitHub. After installing it, you can create new repositories and upload them to GitHub (once you have an account), or download others' repositories using "Clone Repository" together with the repository's URL. For example, you can download this repository to your PC using this URL: https://github.com/mrayy/PracticalML
#
# ## Processing:
# Processing (https://processing.org/) is a tool commonly used by designers and creative artists due to its simplicity and ease of use, as well as the large set of libraries integrated into it. We will use this tool during the course to provide interactive content and input to our learning algorithms.
#
# ## Wekinator:
# Wekinator (http://www.wekinator.org/) is an open-source tool for ML, used here for its simplicity and the ease with which you can start building Machine Learning applications. It acts as a mediator between your application and the ML model: it receives the input and sends the output back to your application.
#
# We will also use this tool initially to develop simple models and gesture recognition before we dive into more advanced topics in ML.
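# As a small illustration of that mediator role: by default Wekinator listens for OSC input messages at the address `/wek/inputs` on port 6448. A minimal Python sketch (assuming the third-party `python-osc` package, which is not part of the course materials) could feed it one input value like this:

# +
from pythonosc.udp_client import SimpleUDPClient # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 6448) # Wekinator's default input port
client.send_message("/wek/inputs", [0.5]) # one float input, matching the inputs configured in Wekinator
# -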
# ### For Mac OSX Catalina Users
#
# To run camera samples, you need to do the following:
#
# - Download the latest video library from the following URL:
# <br>https://github.com/processing/processing-video/releases/download/r6-v2.0-beta4/video-2.0-beta4.zip
#
# - Unzip it to:
# <br> ~/Documents/processing/libraries
#
# ![](./images/videolib.png)
#
#
# - Open terminal and run the following commands:
#
# <pre>
# xattr -p com.apple.quarantine ~/Documents/processing/libraries/video/library/macosx/libavcodec.58.35.100.dylib
# xattr -w com.apple.quarantine "00c1;5dc1bfaa;Chrome;78F18F7D-3F71-4E55-8D58-BAB946AB4707" ~/Documents/processing/libraries/video/library/macosx/*.dylib
# xattr -w com.apple.quarantine "00c1;5dc1bfaa;Chrome;78F18F7D-3F71-4E55-8D58-BAB946AB4707" ~/Documents/processing/libraries/video/library/macosx/gstreamer-1.0/*.dylib
# </pre>
**Record 6:** `/05_Pandas_on_my_own/.ipynb_checkpoints/01_Pandas_Introducing_The_Dataset-checkpoint.ipynb` | content_id: f6a81257205b74160c8efec3bcba6ba6868f26a6 | license: no_license | repo: [emunozlorenzo/MasterDataScience](https://github.com/emunozlorenzo/MasterDataScience) | stars: 3 | forks: 2 | language: Jupyter Notebook | converted to: `.py` | size: 16,753

# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pandas For Data Science: Introducing The Datasets
# Based on Pandas Practices by Kevin Markham at PYCON CLEVELAND 2018
# #### Importing libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# #### Reading Datastes
#
# - Dataset 1: Rhode Island Dataset from [Stanford Open Policing Project](https://openpolicing.stanford.edu/)
#
# - Dataset 2: [Ted Talks dataset from Kaggle Datasets](https://www.kaggle.com/rounakbanik/ted-talks)
ri = pd.read_csv('police.csv')
# +
# Let's see the first 5 rows
ri.head()
# +
# Rows and Columns
ri.shape
# +
# Data Types
ri.dtypes
# -
# #### NaN: Not a Number (Missing Values)
#
# _Why can a value be missing in our dataset?_
#
# - It was __not recorded__ at the time
# - __Data corruption__
# - Redacted for __privacy__
# - It is __irrelevant__ for a particular row
#
# _What is the value of having NaN values?_
#
# - You are able to __distinguish__ between the real data and the missing data
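# A quick illustration of why dedicated methods are needed: NaN never compares equal to anything, not even itself (this check is an addition, not part of the original notebook):

np.nan == np.nan # False, which is why we use isnull() rather than == comparisons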
# #### Finding Missing Values
ri.isnull().head()
# +
# The count of missing data in each of these columns
# Total Rows : 91741
ri.isnull().sum()
# -
**Record 7:** `/Array practice 1.ipynb` | content_id: fde065553d26c9ac25e0d369e5e641f88ff34a09 | license: no_license | repo: [Manish-Kumar-9782/Data-Structures-in-python](https://github.com/Manish-Kumar-9782/Data-Structures-in-python) | stars: 0 | forks: 0 | language: Jupyter Notebook | converted to: `.py` | size: 9,998

# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
list_of_words = ["Hello", "there.", "How", "are", "you", "doing?"]
list_of_words
check_for = ["How", "are"]
check_for
all(w in list_of_words for w in check_for)
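# The generator expression above yields True only if every word in `check_for` appears in `list_of_words`. An equivalent, illustrative formulation uses set containment:

set(check_for).issubset(list_of_words) # True for this example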
**Record 8:** `/devel_notebooks/igraph_test_GN_and_N_fast.ipynb` | content_id: 82f3f2323eaf1ca3a38134e5d7505c34d693386a | license: no_license | repo: [riblidezso/graph_clust_alg_comparison](https://github.com/riblidezso/graph_clust_alg_comparison) | stars: 0 | forks: 0 | language: Jupyter Notebook | converted to: `.py` | size: 48,529

# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **[Machine Learning Course Home Page](https://www.kaggle.com/learn/machine-learning)**
#
# ---
#
# This exercise will test your ability to read a data file and understand statistics about the data.
#
# In later exercises, you will apply techniques to filter the data, build a machine learning model, and iteratively improve your model.
#
# The course examples use data from Melbourne. To ensure you can apply these techniques on your own, you will have to apply them to a new dataset (with house prices from Iowa).
#
# The exercises use a "notebook" coding environment. In case you are unfamiliar with notebooks, we have a [90-second intro video](https://www.youtube.com/watch?v=4C2qMnaIKL4).
#
# # Exercises
#
# Run the following cell to set up code-checking, which will verify your work as you go.
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex2 import *
print("Setup Complete")
# ## Step 1: Loading Data
# Read the Iowa data file into a Pandas DataFrame called `home_data`.
# +
import pandas as pd
# Path of the file to read
iowa_file_path = '../input/home-data-for-ml-course/train.csv'
# Fill in the line below to read the file into a variable home_data
home_data = pd.read_csv(iowa_file_path)
# Call line below with no argument to check that you've loaded the data correctly
step_1.check()
# -
# Lines below will give you a hint or solution code
#step_1.hint()
step_1.solution()
# ## Step 2: Review The Data
# Use the command you learned to view summary statistics of the data. Then fill in variables to answer the following questions
# Print summary statistics in next line
home_data.describe()
# +
# What is the average lot size (rounded to nearest integer)?
avg_lot_size = round(home_data["LotArea"].mean(), 0)
# As of today, how old is the newest home (current year - the date in which it was built)
newest_home_age = 2021-home_data["YearBuilt"].max()
# Checks your answers
step_2.check()
# -
#step_2.hint()
step_2.solution()
# ## Think About Your Data
#
# The newest house in your data isn't that new. A few potential explanations for this:
# 1. They haven't built new houses where this data was collected.
# 1. The data was collected a long time ago. Houses built after the data publication wouldn't show up.
#
# If the reason is explanation #1 above, does that affect your trust in the model you build with this data? What about if it is reason #2?
#
# How could you dig into the data to see which explanation is more plausible?
#
# Check out this **[discussion thread](https://www.kaggle.com/learn-forum/60581)** to see what others think or to add your ideas.
#
# # Keep Going
#
# You are ready for **[Your First Machine Learning Model](https://www.kaggle.com/dansbecker/your-first-machine-learning-model).**
#
# ---
# **[Machine Learning Course Home Page](https://www.kaggle.com/learn/machine-learning)**
#
#
**Record 9:** `/PythonML_Practice.ipynb` | content_id: 5921c345e28f3b0ee97af843e3fae1a4b5bfeb6d | license: no_license | repo: [vijendranp/Python--Computational-Approach](https://github.com/vijendranp/Python--Computational-Approach) | stars: 0 | forks: 0 | language: Jupyter Notebook | converted to: `.py` | size: 974

# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/vijendranp/Hello-World/blob/master/PythonML_Practice.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="3TeXv5t2Y3ZC" colab_type="code" colab={}
**Record 10:** `/src/Henry/Gmail Analysis/Chi-squared, time prediction and linear regression/.ipynb_checkpoints/SARIMA-checkpoint.ipynb` | content_id: 7532c60ce2c7250301dcd12b70028c32daa936dd | license: no_license | repo: [UNCG-CSE/G_Suite_Metrics](https://github.com/UNCG-CSE/G_Suite_Metrics) | stars: 0 | forks: 4 | GHA created: 2019-12-12T21:47:23 | GHA updated: 2019-12-12T21:16:54 | language: Jupyter Notebook | converted to: `.py` | size: 2,673,838

# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Played with the SARIMA, pmdarima, and Facebook Prophet libraries. For the true and accurate prediction, see the "Testing Prophet with Multiplicity" notebook
# +
from dateutil.parser import parse
import datetime as dt
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.stattools import adfuller
from statsmodels.graphics.tsaplots import plot_acf,plot_pacf
import seaborn as sns
import pandas as pd
from scipy.stats import norm, kde, kstest, stats
from numpy import inf
import itertools
# %matplotlib inline
import warnings
# %matplotlib inline
sns.set()
pd.set_option('display.max_rows', 2000)
pd.set_option('display.width', 1000)
data_path = "C:\\Users\\Henry\\Documents\\405-DataScience\\G_Suite_Metrics\\data\\Gmail\\cbt.csv"
fa = pd.read_csv(data_path)
data_path_write = 'C:\\Users\\Henry\\Documents\\405-DataScience\\G_Suite_Metrics\\data\\Gmail\\'
# -
fa['day'] = pd.to_datetime(fa['day'])
fa['day']= fa.day.dt.date
fa.set_index('day', inplace=True)
fa.index.freq = 'D'
# +
# noo fa = fa1[['time', 'emails_received']].copy()
#fa['time'] = pd.to_datetime(fa['time'], utc = True)
#fa['time']= fa.time.dt.date
#fa.set_index('time', inplace=True)
# -
fa.size
fa.index.freq = 'D'
fa
train, test = fa.iloc[:1268], fa.iloc[1268:]
train = train.asfreq('D')
test = test.asfreq('D')
def find_best_sarima(train, eval_metric):
    p = d = q = range(0, 2)
    pdq = list(itertools.product(p, d, q))
    seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
    counter = 0
    myDict = {}
    for param in pdq:
        for param_seasonal in seasonal_pdq:
            try:
                counter += 1
                mod = sm.tsa.statespace.SARIMAX(train,
                                                order=param,
                                                seasonal_order=param_seasonal,
                                                enforce_stationarity=False,
                                                enforce_invertibility=False)
                results = mod.fit()
                myDict[counter] = [results.aic, results.bic, param, param_seasonal]
            except:
                continue
    dict_to_df = pd.DataFrame.from_dict(myDict, orient='index')
    if eval_metric == 'aic':
        best_run = dict_to_df[dict_to_df[0] == dict_to_df[0].min()].index.values
        best_run = best_run[0]
    elif eval_metric == 'bic':
        best_run = dict_to_df[dict_to_df[1] == dict_to_df[1].min()].index.values
        best_run = best_run[0]
    model = sm.tsa.statespace.SARIMAX(train,
                                      order=myDict[best_run][2],
                                      seasonal_order=myDict[best_run][3],
                                      enforce_stationarity=False,
                                      enforce_invertibility=False).fit()
    best_model = {'model': model,
                  'aic': model.aic,
                  'bic': model.bic,
                  'order': myDict[best_run][2],
                  'seasonal_order': myDict[best_run][3]}
    return best_model
# +
#best = find_best_sarima(train, 'bic')
best = sm.tsa.statespace.SARIMAX(train, order=(1,1,1),seasonal_order=(2,0,0,365))
# -
best = best.fit()
best
test['emails_received'].index[0]
#import datetime as dt
test['emails_received'].index[-1]
#pred= best['model'].predict(start= test.index[0] ,end = test.index[-1])
pred= best.predict(end = test.index[-1])
pred
pred.plot()
# +
plt.figure(figsize=(22, 10))
plt.plot(train.index, train, label='Train')
plt.plot(pred.index, pred, label='SARIMA', color='r')
plt.plot(test.index, test, label='Test', color='k')
plt.legend(loc='best', fontsize='xx-large')
plt.show()
# -
from matplotlib import pyplot
pyplot.plot(test)
#pyplot.plot(pred.index, color='red')
pyplot.plot(pred)
pyplot.plot(test.values)
from matplotlib import pyplot
pyplot.plot(fa)
pyplot.plot(predictions, color='red')
plt.figure(figsize=(200,10))
pyplot.show()
from pmdarima.arima import auto_arima
stepwise_model = auto_arima(train, start_p=1, start_q=1,
max_p=2, max_q=2, m=12,
start_P=0, seasonal=True,
d=1, D=1, trace=True,
error_action='ignore',
suppress_warnings=True,
stepwise=True)
train = train.fillna(0)
fa.size
stepwise_model.summary()
pred= stepwise_model.predict()
plt.plot(pred)
plt.plot(test)
from fbprophet import Prophet
# 'day' was moved into the index earlier, so bring it back as a column before renaming
df_new = fa.reset_index().rename(columns={"day": "ds", "emails_received": "y"})
df_new['ds'] = pd.to_datetime(df_new['ds'])
m = Prophet(daily_seasonality=True)
m.fit(df_new) # df is a pandas.DataFrame with 'y' and 'ds' columns
future = m.make_future_dataframe(periods=365)
e = m.predict(future)
m.plot_components(e)
e
e['yhat']
**Record 11:** `/dashboard.ipynb` | content_id: 1e7374e7b23ef2d6c8314b800850b88d6ce446cf | license: no_license | repo: [kielwheat/SF_realestate_analysis_PyViz](https://github.com/kielwheat/SF_realestate_analysis_PyViz) | stars: 0 | forks: 0 | language: Jupyter Notebook | converted to: `.py` | size: 6,219,403

# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Can I make one page-sized plot that contains all the interesting detail?
# +
import numpy as np
import matplotlib.pyplot as plt
import astropy.io.fits as fits
import os
import glob
from scipy.interpolate import interp1d
from scipy.io.idl import readsav
from astropy.table import Table
from astropy.io import ascii
import astropy.units as u
import astropy.constants as const
from astropy.modeling import models, fitting
from craftroom import resample
from matplotlib import cm
from astropy.convolution import convolve, Box1DKernel
#matplotlib set up
# %matplotlib inline
from matplotlib import rcParams
rcParams["figure.figsize"] = (14, 5)
rcParams["font.size"] = 20
# -
t1data = Table.read('../../common/quicksaves/2MASS-J23062928-0502285_basic.ecsv')
w, f, e = t1data['WAVELENGTH'], t1data['FLUX'], t1data['ERROR']
m1data = Table.read('saved_models/trappist-1_model_const_res_v07.ecsv')
m1w, m1f = m1data['WAVELENGTH'], m1data['FLUX']
# +
plt.figure(figsize=(12, 16))
plt.subplot(311)
labely = 1.7e-14
labelfac = 1.05
plt.errorbar((15,50),(labely, labely), yerr= [[0.05*labely, 0.05*labely],[0,0]], c ='k')
plt.annotate('XMM', (20, labelfac*labely))
plt.errorbar((50,100),(labely, labely), yerr= [[0.05*labely, 0.05*labely],[0,0]], c ='k')
plt.annotate('APEC', (55, labelfac*labely))
plt.errorbar((100,1060),(labely, labely), yerr= [[0.05*labely, 0.05*labely],[0,0]], c ='k')
plt.annotate('DEM', (250, labelfac*labely))
plt.errorbar((1068, 5692),(labely, labely), yerr= [[0.05*labely, 0.05*labely],[0,0]], c ='k')
plt.annotate('HST', (2000, labelfac*labely))
plt.errorbar((5692, 54963),(labely, labely), yerr= [[0.05*labely, 0.05*labely],[0,0]], c ='k')
plt.annotate('PHOENIX', (11000, labelfac*labely))
#plt.step(w[f>0], f[f>0],label ='TRAPPIST-1 (M8)', where='mid')
plt.step(w, f,label ='TRAPPIST-1 (M8)', where='mid')
plt.xscale('log')
#plt.yscale('log')
#plt.xlim(100, 1000)
#plt.xlim(90, 10000)
#ax.fill_between([0,900],0,1, facecolor='#99ccff')
#plt.annotate('X-ray/EUV: Thermosphere heating/removal', (12, 1e-22))
#ax.fill_between([900,2000],0,1, facecolor='#99ff66')
#ax.fill_between([2000,4000],0,1, facecolor='#ffff99')
#plt.annotate('FUV/NUV:\n Photochemistry', (1000, 1e-22))
#ax.fill_between([4000,60000],0,1, facecolor='#ff9933')
#plt.annotate('Visible/IR: \n Atmosphere \& surface heating', (4100, 1e-22))
plt.xlim(11, 110000)
plt.ylim(-0.1e-14, 2.1e-14)
#plt.ylim(1e-18, 0.2e-11)
#plt.xlabel('Wavelength (\AA)')
plt.ylabel('Flux (erg s$^{-1}$cm$^{-2}$\AA$^{-1}$)')
#plt.tight_layout()
plt.subplot(312)
# -
# Cross sections
cs_path = '/home/david/work/muscles/cross_sections/'
cs = glob.glob('{}*_cs.txt'.format(cs_path))
cs
# +
def make_mol_label(name): # turns molecule filenames into formatted labels
    #print(name)
    label = r''
    for l in name:
        if l.isalpha():
            label += l.upper()
        elif l.isnumeric():
            label += '$_{{{}}}$'.format(l)
    return label

for section in cs:
    print(section)
    mol = np.genfromtxt(section, names=True)
    filename = os.path.split(section)[1]
    label = make_mol_label(filename[:-7])
    molcs = convolve(mol['Total'],Box1DKernel(5))
    plt.plot(mol['Lambda'], molcs, label=label) # was label=labels, an undefined name
plt.legend()
plt.xscale('log')
#plt.yscale('log')
#plt.ylim(1e-17)
plt.xlim(11, 110000)
# -
**Record 12:** `/Open CV Information.ipynb` | content_id: d5c83ddd9e6f34d1ff5e4ecebeaf82c34faf1159 | license: no_license | repo: [RoodraKanwar/OpenCV](https://github.com/RoodraKanwar/OpenCV) | stars: 0 | forks: 0 | language: Jupyter Notebook | converted to: `.py` | size: 23,310

# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cv2
#
# # READS IMAGE
# +
img = cv2.imread('puppy.jpg')
#TO PRINT THE IMAGE, USE THIS
cv2.imshow("Output",img)
# waitKey(0) waits indefinitely for a key press; a positive value waits that many milliseconds
cv2.waitKey(0)
# -
# # FOR VIDEO RUNNING
# +
import cv2
#Capture Video in variable
cap = cv2.VideoCapture("sample.mp4")
while True:
    #cap.read() returns the next frame in img and a flag telling us whether the frame was read successfully
    success, img = cap.read()
    cv2.imshow("Video",img)
    if cv2.waitKey(1) & 0xFF == ord('q'): # wait 1 ms between frames; press q to quit (waitKey(0) would pause on every frame)
        break
# -
# # FOR WEBCAM
#
# +
import cv2
cap = cv2.VideoCapture(0)
#3 sets the width
cap.set(3,640)
#4 sets the height
cap.set(4,480)
#10 sets the brightness
cap.set(10,50)
while True:
    success, img = cap.read()
    cv2.imshow("Video",img)
    k = cv2.waitKey(30) & 0xff
    if k == 27: # the Esc key exits the loop
        break
cap.release()
# -
# # Convert Color Channels
#
# +
import cv2
img = cv2.imread("puppy.jpg")
#Converts the image to grayscale
imggray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
#Blurs the image with a 7x7 Gaussian kernel. NOTE: the kernel dimensions must be odd, e.g. 7x7, 5x5, or 3x3
imgblur = cv2.GaussianBlur(imggray,(7,7),0)
#To find edges of our image
imgcanny = cv2.Canny(img,100,100)
cv2.imshow("Gray Image",imggray)
cv2.imshow("Blur Image",imgblur)
cv2.imshow("Edges of Image",imgcanny)
cv2.waitKey(0)
# +
#To increase image dilation, you need a KERNEL (a matrix), so import NumPy
import numpy as np
#np.uint8 means values can range from 0 to 255
kernel = np.ones((5,5),np.uint8)
#Dilation is used to increase the thickness of features
imgdialation = cv2.dilate(imgcanny,kernel,iterations=1)
#Erosion is used to reduce the thickness of features
imgerode = cv2.erode(imgdialation,kernel,iterations=1)
cv2.imshow("Dialation Image",imgdialation)
cv2.imshow("Eroded Image",imgerode)
# -
# # RESIZING AND CROPPING
# +
import cv2
img = cv2.imread('puppy.jpg')
print(img.shape)
imgres = cv2.resize(img,(300,200))
print(imgres.shape)
# numpy slicing takes the height (rows) range first, then the width (columns) range
imgcrop = img[0:200,150:200]
cv2.imshow("Image",imgres)
cv2.imshow("Image",img)
cv2.imshow("Cropped Image",imgcrop)
cv2.waitKey(0)
# -
# # NAMING PICTURES
#
# +
import cv2
import numpy as np
# (512, 512) alone gives a single-channel (grayscale) image; the extra 3 adds BGR color channels
img = np.zeros((512,512,3),np.uint8)
# 255 in the first channel (blue) would turn the whole image blue.
#img[:] = 255,0,0
# paints a rectangular region of the image blue
img[200:300,100:300] = 255,0,0
# (0,0) is the start point of the line, (300,300) the end point, (0,255,0) the color and 3 the thickness
cv2.line(img,(0,0),(300,300),(0,255,0),3)
# If you want a line till the end, use this......img.shape[1] represents width and img.shape[0] represents height
cv2.line(img,(0,0),(img.shape[1],img.shape[0]),(0,255,0),3)
#(0,0) represents starting, (250,350) represents end, (0,0,255) represents color and ,5 represents thickness
cv2.rectangle(img,(0,0),(250,350),(0,0,255),5)
cv2.rectangle(img,(0,0),(250,350),(0,0,255),cv2.FILLED)
#(400,50) is mid point of circle, 30 is radius, (255,255,0) is color and 5 is thickness
cv2.circle(img,(400,50),30,(255,255,0),5)
# (300,100) is the point where the text starts, followed by font, scale, color and thickness
cv2.putText(img," OPENCV ",(300,100),cv2.FONT_HERSHEY_COMPLEX,1,(255,255,0),1)
cv2.imshow("Image",img)
cv2.waitKey(0)
# -
# # Warp Perspective
# Warp perspective picks out a region of the original image by its corner coordinates and maps it onto a new rectangular view
# +
import cv2
import numpy as np
img = cv2.imread("puppy.jpg")
width,height = 250,350
pts1 = np.float32([[5,150],[10,200],[50,150],[60,250]])
pts2 = np.float32([[0,0],[width,0],[0,height],[width,height]])
matrix = cv2.getPerspectiveTransform(pts1,pts2)
imgoutput = cv2.warpPerspective(img,matrix,(width,height))
cv2.imshow("Image",img)
cv2.imshow('Output',imgoutput)
cv2.waitKey(0)
# -
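
# Note: the four source points in pts1 should be listed in the same order as the
# destination points in pts2 (top-left, top-right, bottom-left, bottom-right);
# mixing the order up mirrors or twists the warped output.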
# # Joining Images
# +
import cv2
import numpy as np
img = cv2.imread("puppy.jpg")
imgHor = np.hstack((img,img))
imgVer = np.vstack((img,img))
cv2.imshow("Stacked Horizontal Image",imgHor)
cv2.imshow("Stacked Vertical Image",imgVer)
cv2.waitKey(0)
# -
# # Color Detection
#
# +
import cv2
import numpy as np
def empty(a):
pass
def stackImages(scale,imgArray):
rows = len(imgArray)
cols = len(imgArray[0])
rowsAvailable = isinstance(imgArray[0], list)
width = imgArray[0][0].shape[1]
height = imgArray[0][0].shape[0]
if rowsAvailable:
for x in range ( 0, rows):
for y in range(0, cols):
if imgArray[x][y].shape[:2] == imgArray[0][0].shape [:2]:
imgArray[x][y] = cv2.resize(imgArray[x][y], (0, 0), None, scale, scale)
else:
imgArray[x][y] = cv2.resize(imgArray[x][y], (imgArray[0][0].shape[1], imgArray[0][0].shape[0]), None, scale, scale)
if len(imgArray[x][y].shape) == 2: imgArray[x][y]= cv2.cvtColor( imgArray[x][y], cv2.COLOR_GRAY2BGR)
imageBlank = np.zeros((height, width, 3), np.uint8)
hor = [imageBlank]*rows
hor_con = [imageBlank]*rows
for x in range(0, rows):
hor[x] = np.hstack(imgArray[x])
ver = np.vstack(hor)
else:
for x in range(0, rows):
if imgArray[x].shape[:2] == imgArray[0].shape[:2]:
imgArray[x] = cv2.resize(imgArray[x], (0, 0), None, scale, scale)
else:
imgArray[x] = cv2.resize(imgArray[x], (imgArray[0].shape[1], imgArray[0].shape[0]), None,scale, scale)
if len(imgArray[x].shape) == 2: imgArray[x] = cv2.cvtColor(imgArray[x], cv2.COLOR_GRAY2BGR)
hor= np.hstack(imgArray)
ver = hor
return ver
path = 'puppy.png'
cv2.namedWindow("TrackBars")
cv2.resizeWindow("TrackBars",640,240)
cv2.createTrackbar("Hue Min","TrackBars",0,179,empty)
cv2.createTrackbar("Hue Max","TrackBars",19,179,empty)
cv2.createTrackbar("Sat Min","TrackBars",110,255,empty)
cv2.createTrackbar("Sat Max","TrackBars",240,255,empty)
cv2.createTrackbar("Val Min","TrackBars",153,255,empty)
cv2.createTrackbar("Val Max","TrackBars",255,255,empty)
while True:
img = cv2.imread(path)
imgHSV = cv2.cvtColor(img,cv2.COLOR_BGR2HSV)
h_min = cv2.getTrackbarPos("Hue Min","TrackBars")
h_max = cv2.getTrackbarPos("Hue Max", "TrackBars")
s_min = cv2.getTrackbarPos("Sat Min", "TrackBars")
s_max = cv2.getTrackbarPos("Sat Max", "TrackBars")
v_min = cv2.getTrackbarPos("Val Min", "TrackBars")
v_max = cv2.getTrackbarPos("Val Max", "TrackBars")
print(h_min,h_max,s_min,s_max,v_min,v_max)
lower = np.array([h_min,s_min,v_min])
upper = np.array([h_max,s_max,v_max])
mask = cv2.inRange(imgHSV,lower,upper)
imgResult = cv2.bitwise_and(img,img,mask=mask)
# cv2.imshow("Original",img)
# cv2.imshow("HSV",imgHSV)
# cv2.imshow("Mask", mask)
# cv2.imshow("Result", imgResult)
imgStack = stackImages(0.6,([img,imgHSV],[mask,imgResult]))
cv2.imshow("Stacked Images", imgStack)
cv2.waitKey(1)
# -
# # Picture Stacking Function
#
def stackImages(scale,imgArray):
rows = len(imgArray)
cols = len(imgArray[0])
rowsAvailable = isinstance(imgArray[0], list)
width = imgArray[0][0].shape[1]
height = imgArray[0][0].shape[0]
if rowsAvailable:
for x in range ( 0, rows):
for y in range(0, cols):
if imgArray[x][y].shape[:2] == imgArray[0][0].shape [:2]:
imgArray[x][y] = cv2.resize(imgArray[x][y], (0, 0), None, scale, scale)
else:
imgArray[x][y] = cv2.resize(imgArray[x][y], (imgArray[0][0].shape[1], imgArray[0][0].shape[0]), None, scale, scale)
if len(imgArray[x][y].shape) == 2: imgArray[x][y]= cv2.cvtColor( imgArray[x][y], cv2.COLOR_GRAY2BGR)
imageBlank = np.zeros((height, width, 3), np.uint8)
hor = [imageBlank]*rows
hor_con = [imageBlank]*rows
for x in range(0, rows):
hor[x] = np.hstack(imgArray[x])
ver = np.vstack(hor)
else:
for x in range(0, rows):
if imgArray[x].shape[:2] == imgArray[0].shape[:2]:
imgArray[x] = cv2.resize(imgArray[x], (0, 0), None, scale, scale)
else:
imgArray[x] = cv2.resize(imgArray[x], (imgArray[0].shape[1], imgArray[0].shape[0]), None,scale, scale)
if len(imgArray[x].shape) == 2: imgArray[x] = cv2.cvtColor(imgArray[x], cv2.COLOR_GRAY2BGR)
hor= np.hstack(imgArray)
ver = hor
return ver
# # Shape Detection and Contour
# +
import cv2
import numpy as np
def stackImages(scale,imgArray):
rows = len(imgArray)
cols = len(imgArray[0])
rowsAvailable = isinstance(imgArray[0], list)
width = imgArray[0][0].shape[1]
height = imgArray[0][0].shape[0]
if rowsAvailable:
for x in range ( 0, rows):
for y in range(0, cols):
if imgArray[x][y].shape[:2] == imgArray[0][0].shape [:2]:
imgArray[x][y] = cv2.resize(imgArray[x][y], (0, 0), None, scale, scale)
else:
imgArray[x][y] = cv2.resize(imgArray[x][y], (imgArray[0][0].shape[1], imgArray[0][0].shape[0]), None, scale, scale)
if len(imgArray[x][y].shape) == 2: imgArray[x][y]= cv2.cvtColor( imgArray[x][y], cv2.COLOR_GRAY2BGR)
imageBlank = np.zeros((height, width, 3), np.uint8)
hor = [imageBlank]*rows
hor_con = [imageBlank]*rows
for x in range(0, rows):
hor[x] = np.hstack(imgArray[x])
ver = np.vstack(hor)
else:
for x in range(0, rows):
if imgArray[x].shape[:2] == imgArray[0].shape[:2]:
imgArray[x] = cv2.resize(imgArray[x], (0, 0), None, scale, scale)
else:
imgArray[x] = cv2.resize(imgArray[x], (imgArray[0].shape[1], imgArray[0].shape[0]), None,scale, scale)
if len(imgArray[x].shape) == 2: imgArray[x] = cv2.cvtColor(imgArray[x], cv2.COLOR_GRAY2BGR)
hor= np.hstack(imgArray)
ver = hor
return ver
def getContours(img):
#The 'RETR_EXTERNAL' finds the outer contours
contours,hierarchy = cv2.findContours(img,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE)
    # Iterate over every contour found in the Canny edge image
for cnt in contours:
area = cv2.contourArea(cnt)
print(area)
        # Keep only contours with area > 500; smaller ones are treated as noise and skipped
if area>500:
cv2.drawContours(imgContour, cnt, -1, (255, 0, 0), 3)
#Finds the perimeter of contours and the 'True' means the contours are closed.
peri = cv2.arcLength(cnt,True)
#print(peri)
#Gives the corner points of Contours and 'True' means that the contours are closed.
approx = cv2.approxPolyDP(cnt,0.02*peri,True)
            # len(approx) gives the number of corner points in the approximated contour
print(len(approx))
#This gives a bounding box around the contours
objCor = len(approx)
x, y, w, h = cv2.boundingRect(approx)
if objCor ==3: objectType ="Tri"
elif objCor == 4:
aspRatio = w/float(h)
if aspRatio >0.98 and aspRatio <1.03: objectType= "Square"
else:objectType="Rectangle"
elif objCor>4: objectType= "Circles"
else:objectType="None"
#This puts a rectangle contour at the particular coordinate
cv2.rectangle(imgContour,(x,y),(x+w,y+h),(0,255,0),2)
#The "(x+(w//2)-10)" and "(y+(h//2)-10)" are used to select the coordinates where text starts
cv2.putText(imgContour,objectType,
(x+(w//2)-10,y+(h//2)-10),cv2.FONT_HERSHEY_COMPLEX,0.7,
(0,0,0),2)
path = 'shapes.png'
img = cv2.imread(path)
#copies the original image to be passed to image contour
imgContour = img.copy()
imgGray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
imgBlur = cv2.GaussianBlur(imgGray,(7,7),1)
imgCanny = cv2.Canny(imgBlur,50,50)
getContours(imgCanny)
imgBlank = np.zeros_like(img)
imgStack = stackImages(0.5,([img,imgGray,imgBlur],
[imgCanny,imgContour,imgBlank]))
cv2.imshow("Stack", imgStack)
cv2.waitKey(0)
# -
# # Face Detection
# +
import cv2
faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
img = cv2.imread('1.jpg')
imgGray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(imgGray,1.1,4)
for (x,y,w,h) in faces:
cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
cv2.imshow("Result",img)
cv2.waitKey(0)
# -
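
# A webcam variant (a sketch that combines the webcam loop from earlier with the
# same Haar cascade; press q to quit):

# +
cap = cv2.VideoCapture(0)
while True:
    success, frame = cap.read()
    if not success:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in faceCascade.detectMultiScale(gray, 1.1, 4):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("Webcam Faces", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
# -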
# /Notebooks/EnisBerk/TransferLearningVggish.ipynb (https://github.com/Yunhua468/Audio-Visual-Emotion-and-Sentiment-Research)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="DYd3hHcU3HZw" colab_type="code" colab={}
# !wget https://zenodo.org/api/files/c8f9b6fe-82ac-481c-ad9c-12b5581cb4b4/Audio_Song_Actors_01-24.zip
# !wget https://zenodo.org/api/files/c8f9b6fe-82ac-481c-ad9c-12b5581cb4b4/Audio_Speech_Actors_01-24.zip
# !unzip -q -d ./song Audio_Song_Actors_01-24.zip
# !unzip -q -d ./speech Audio_Speech_Actors_01-24.zip
# + [markdown] id="GSbSeXk43OP3" colab_type="text"
# Modality (01 = full-AV, 02 = video-only, 03 = audio-only).
#
# Vocal channel (01 = speech, 02 = song).
#
# Emotion (01 = neutral, 02 = calm, 03 = happy, 04 = sad, 05 = angry, 06 = fearful, 07 = disgust, 08 = surprised).
#
# Emotional intensity (01 = normal, 02 = strong). NOTE: There is no strong intensity for the 'neutral' emotion.
#
# Statement (01 = "Kids are talking by the door", 02 = "Dogs are sitting by the door").
#
# Repetition (01 = 1st repetition, 02 = 2nd repetition).
#
# Actor (01 to 24. Odd numbered actors are male, even numbered actors are female).
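#
# A small helper (a sketch; it assumes the standard RAVDESS naming, where the seven
# fields above appear as two-digit, hyphen-separated codes, e.g. "03-01-06-01-02-01-12.wav"):

# +
EMOTIONS = {1: "neutral", 2: "calm", 3: "happy", 4: "sad",
            5: "angry", 6: "fearful", 7: "disgust", 8: "surprised"}

def parse_ravdess_name(filename):
    """Map a RAVDESS wav filename to human-readable label fields."""
    parts = [int(p) for p in filename.replace(".wav", "").split("-")]
    modality, vocal_channel, emotion, intensity, statement, repetition, actor = parts
    return {"emotion": EMOTIONS[emotion],
            "intensity": "strong" if intensity == 2 else "normal",
            "gender": "male" if actor % 2 == 1 else "female"}

parse_ravdess_name("03-01-06-01-02-01-12.wav")  # -> fearful / normal / female
# -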
# + [markdown] id="yUf4FwvSuj8Z" colab_type="text"
# ### Here you need to download the files from the GitHub repo; the step is omitted since my local copy is synced to Colab
# + id="F1zb2qOlz_f5" colab_type="code" colab={}
# !unzip -q Audio-Visual-Emotion-and-Sentiment-Research-audio.zip
# !mv Audio-Visual-Emotion-and-Sentiment-Research-audio ./Audio-Visual-Emotion-and-Sentiment-Research/
# + id="Yr8r99PzuZkY" colab_type="code" colab={}
# %% codecell
# pydub is required and not in colab already
# !pip install soundfile
# %% codecell
# get model files and place in assets folder
# !wget https://max-assets-prod.s3.us-south.cloud-object-storage.appdomain.cloud/max-audio-classifier/1.0.0/assets.tar.gz
# !tar -xzvf assets.tar.gz
# !mv classifier_model.h5 ./Audio-Visual-Emotion-and-Sentiment-Research/assets/
# !mv vggish_pca_params.npz ./Audio-Visual-Emotion-and-Sentiment-Research/assets/
# !mv vggish_model.ckpt ./Audio-Visual-Emotion-and-Sentiment-Research/assets/
# + id="mutkoRT9YrlT" colab_type="code" outputId="b0f4bcf3-efb1-4ff6-cf4e-6cfe3a0f663b" colab={"base_uri": "https://localhost:8080/", "height": 51}
# %tensorflow_version 1.x
# # %cd ./Audio-Visual-Emotion-and-Sentiment-Research/
import os
import sys
module_path = os.path.abspath(os.path.join('./Audio-Visual-Emotion-and-Sentiment-Research/Scripts'))
print(module_path)
if module_path not in sys.path:
sys.path.append(module_path)
# + id="pz68sxjjwfSK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="f9a92c1c-be87-4de5-ffab-be09652cc7c4"
# %% codecell
import tensorflow as tf
from models_api import VggishModelWrapper
import pre_process_func
Vgg=VggishModelWrapper()
# + id="SM5Jpeuslq6k" colab_type="code" colab={}
import glob
file_list=glob.glob("./speech/**/*.wav")+glob.glob("./song/**/*.wav")
raw={}
post={}
for wavFile in file_list:
sound = pre_process_func.pre_process(wavFile)
raw_embeddings,post_processed_embed =Vgg.generate_embeddings(sound)
filename=wavFile.split("/")[-1]
raw[filename]=raw_embeddings
post[filename]=post_processed_embed
# + id="SCufMjyL7_Xz" colab_type="code" colab={}
# + id="8Klqlly2r5EW" colab_type="code" colab={}
import pickle
embeddings={"raw":raw,"post":post}
with open('embeddings.dat', 'wb') as outfile:
pickle.dump(embeddings, outfile, protocol=pickle.HIGHEST_PROTOCOL)
# + id="is8UHE__y68h" colab_type="code" colab={}
import pickle
with open('embeddings.dat', 'rb') as file:
embeddings=pickle.load(file)
# + id="sPcfDAaHy_3s" colab_type="code" colab={}
# + id="qDgWA-s-zIwb" colab_type="code" colab={}
# /hw_webinar_1.ipynb (https://github.com/olhaexe/RecSys)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Homework 1
# Fill in the #your_code blanks in the functions below
# +
import numpy as np
recommended_list = [143, 156, 1134, 991, 27, 1543, 3345, 533, 11, 43]  # item ids
bought_list = [521, 32, 143, 991]
prices_recommended = [100, 90, 10, 450, 50, 37, 99, 120, 34, 100]
prices_bought = [110, 190, 100, 450]
# -
def hit_rate_at_k(recommended_list, bought_list, k=5):
flags = np.isin(bought_list, recommended_list[:k])
hit_rate = (flags.sum() > 0) * 1
return hit_rate
def money_precision_at_k(recommended_list, bought_list, prices_recommended, k=5):
bought_list = np.array(bought_list)
recommended_list = np.array(recommended_list)
prices_recommended = np.array(prices_recommended)
    bought_list = bought_list  # note: no [:k] here!
recommended_list = recommended_list[:k]
prices_recommended = prices_recommended[:k]
flags = np.isin(recommended_list, bought_list)
precision = (flags*prices_recommended).sum() / prices_recommended.sum()
return precision
# +
def recall_at_k(recommended_list, bought_list, k=5):
bought_list = np.array(bought_list)
recommended_list = np.array(recommended_list[:k])
flags = np.isin(bought_list, recommended_list)
recall = flags.sum() / len(bought_list)
return recall
def money_recall_at_k(recommended_list, bought_list, prices_recommended, prices_bought, k=5):
bought_list = np.array(bought_list)
prices_bought = np.array(prices_bought)
recommended_list = np.array(recommended_list[:k])
prices_recommended = np.array(prices_recommended[:k])
flags = np.isin(bought_list, recommended_list)
recall = (flags*prices_bought).sum() / prices_bought.sum()
return recall
# -
def reciprocal_rank(recommended_list, bought_list):
    # standard definition: 1 / rank of the first relevant item in the recommendations
    # (averaging 1/rank over every hit would be a different metric)
    flags = np.isin(recommended_list, bought_list)
    if flags.sum() == 0:
        return 0
    first_hit_rank = np.argmax(flags) + 1  # argmax returns the index of the first True
    return 1 / first_hit_rank
hit_rate_at_k(recommended_list, bought_list, k=5)
money_precision_at_k(recommended_list, bought_list, prices_recommended, k=5)
recall_at_k(recommended_list, bought_list, k=5)
money_recall_at_k(recommended_list, bought_list, prices_recommended, prices_bought, k=5)
reciprocal_rank(recommended_list, bought_list)
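
# A quick sanity check on the calls above, hand-computed from the test lists
# (note that reciprocal_rank uses the standard "1 / rank of first hit" definition):
assert hit_rate_at_k(recommended_list, bought_list, k=5) == 1
assert np.isclose(money_precision_at_k(recommended_list, bought_list, prices_recommended, k=5), 550 / 700)
assert np.isclose(recall_at_k(recommended_list, bought_list, k=5), 0.5)
assert np.isclose(money_recall_at_k(recommended_list, bought_list, prices_recommended, prices_bought, k=5), 550 / 850)
assert np.isclose(reciprocal_rank(recommended_list, bought_list), 1.0)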
# /Aula7_Analise_Exploratoria.ipynb (https://github.com/clerisson-san/Analise_de_dados_com_Python_e_Pandas_by_DIO)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/clerisson-san/Analise_de_dados_com_Python_e_Pandas_by_DIO/blob/main/Aula7_Analise_Exploratoria.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="G5HpRApza9UR"
# Importing the libraries
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use("seaborn")
# + id="3qLVp0Z_bUXq" colab={"base_uri": "https://localhost:8080/", "height": 361} outputId="02568dba-b32d-4da5-ed91-6c4c51d9e972"
# Upload the file
from google.colab import files
arq = files.upload()
# + id="Rpxs2yU0ba23"
# Creating our DataFrame
df = pd.read_excel("AdventureWorks.xlsx")
# + id="DPOEg0MikIXG" colab={"base_uri": "https://localhost:8080/", "height": 496} outputId="fcb255e3-9b0b-44a8-c2e9-5b9b54e78cc0"
# Viewing the first 5 rows
df.head()
# + id="UCJpu--kK9wo" colab={"base_uri": "https://localhost:8080/"} outputId="5cb5fda3-300f-4452-d9c6-ca040b32fa98"
# Number of rows and columns
df.shape
# + id="P9S1i8o1lUu-" colab={"base_uri": "https://localhost:8080/", "height": 306} outputId="6197dad1-cbfe-444a-a212-1bfa12541ebd"
# Checking the data types
df.dtypes
# + id="duheNX1GlhWw" colab={"base_uri": "https://localhost:8080/"} outputId="5b094161-87b4-40bf-c1d7-08674c42d770"
# What is the total revenue?
df["Valor Venda"].sum()
# + id="IHop-35BlyDO"
# What is the total cost?
df["custo"] = df["Custo Unitário"].mul(df["Quantidade"])  # creating the cost column
# + id="3fy4QmNLmMWd" colab={"base_uri": "https://localhost:8080/", "height": 496} outputId="65336711-ca9f-4d81-c30a-627dd499f4d7"
df.head(5)
# + id="Uj7LTfyumqcn" colab={"base_uri": "https://localhost:8080/"} outputId="5b655a13-006c-49d9-cd89-5419d0a31c5f"
# What is the total cost?
round(df["custo"].sum(), 2)
# + id="dcL7yq6dm6-R"
# Now that we have the total revenue and cost, we can find the total profit
# Let's create a profit column: revenue - cost
df["lucro"] = df["Valor Venda"] - df["custo"]
# + id="AESBzwFuqgy4" colab={"base_uri": "https://localhost:8080/", "height": 168} outputId="a832d8aa-bbee-41c6-e823-5844dd61890c"
df.head(1)
# + id="odfh78ayqpN4" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="8e29504f-0eb5-4bc7-8312-ae1df3d7de5f"
# Total profit
round(df["lucro"].sum(),2)
# + id="dOlaVDsFqv-t"
# Creating a column with the total number of days to ship the product
df["Tempo_envio"] = df["Data Envio"] - df["Data Venda"]
# + id="xzf6mIH5r3vy" colab={"base_uri": "https://localhost:8080/", "height": 496} outputId="a318dad1-f885-4809-fa49-a650858d8e2a"
df.head(5)
# + [markdown] id="tYKqnysZthDh"
# **Now we want to know the average shipping time for each brand, and for that we need to convert the Tempo_envio column to a numeric type**
# + id="eUAJwu45uVV-"
# Extracting just the days
df["Tempo_envio"] = (df["Data Envio"] - df["Data Venda"]).dt.days
# + id="MngNW5dZxjh_" colab={"base_uri": "https://localhost:8080/", "height": 168} outputId="84238e66-8470-4300-b651-91bbfc119829"
df.head(1)
# + id="k9le4YEvxlow" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="4fa5b1a9-e7c4-43ba-f074-7e539c3faf8e"
# Checking the type of the Tempo_envio column
df["Tempo_envio"].dtype
# + id="VtCqhtr60byy" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="8f08f2ff-50b9-40c3-b103-153a19a4e335"
# Average shipping time per brand
df.groupby("Marca")["Tempo_envio"].mean()
# + [markdown] id="I1sg7kwKjuU1"
# **Missing Values**
# + id="a26UV-kTjmog" colab={"base_uri": "https://localhost:8080/"} outputId="6a58ad81-8003-4a58-ae16-8d6cd80156e4"
# Checking for missing data
df.isnull().sum()
# + [markdown] id="Mh40m00N0lQE"
# **And what if we want to know the profit by year and by brand?**
# + id="7CPhZjrJ00a1" colab={"base_uri": "https://localhost:8080/"} outputId="252524b6-442f-4dbc-a64f-d97f93c7c07d"
# Let's group by year and brand
df.groupby([df["Data Venda"].dt.year, "Marca"])["lucro"].sum()
# + id="kZ3lxKGabXeq"
pd.options.display.float_format = '{:20,.2f}'.format
# + id="knQfX6NC3GMc" colab={"base_uri": "https://localhost:8080/", "height": 235} outputId="e93b984f-4a3b-4d96-c4be-bd32c80d32b0"
# Resetting the index
lucro_ano = df.groupby([df["Data Venda"].dt.year, "Marca"])["lucro"].sum().reset_index()
lucro_ano
# + id="0xu9qx1x4WM6" colab={"base_uri": "https://localhost:8080/"} outputId="97e9bc9e-f824-48f4-f5d4-068ffcb5be76"
# What is the total number of products sold?
df.groupby("Produto")["Quantidade"].sum().sort_values(ascending=False)
# + id="Ov8qN2bI56NI" colab={"base_uri": "https://localhost:8080/", "height": 294} outputId="ac4fb254-6e32-4acd-fead-b04082766781"
# Chart: total products sold
df.groupby("Produto")["Quantidade"].sum().sort_values(ascending=True).plot.barh(title="Total Produtos Vendidos")
plt.xlabel("Total")
plt.ylabel("Produto");
# + id="qFQBaeXNcMd4" colab={"base_uri": "https://localhost:8080/", "height": 307} outputId="9dab4b40-57a9-41b4-e08a-c01b390f6b98"
df.groupby(df["Data Venda"].dt.year)["lucro"].sum().plot.bar(title="Lucro x Ano")
plt.xlabel("Ano")
plt.ylabel("Receita");
# + id="4-FPJ5dP5saX" colab={"base_uri": "https://localhost:8080/"} outputId="47d54856-ee18-43ae-d624-1cb0e05c8bda"
df.groupby(df["Data Venda"].dt.year)["lucro"].sum()
# + id="qEjCs7y77966"
# Selecting only the 2009 sales
df_2009 = df[df["Data Venda"].dt.year == 2009]
# + id="GiL4JRnU_LSf" colab={"base_uri": "https://localhost:8080/", "height": 496} outputId="2c5fb3db-a399-4272-af7c-e0ac7a0d295e"
df_2009.head()
# + id="xaH-Ym6h_SG9" colab={"base_uri": "https://localhost:8080/", "height": 294} outputId="40a45664-3afc-4db7-9f8c-a7edf3adde38"
df_2009.groupby(df_2009["Data Venda"].dt.month)["lucro"].sum().plot(title="Lucro x Mês")
plt.xlabel("Mês")
plt.ylabel("Lucro");
# + id="8HDLr3pp_hqf" colab={"base_uri": "https://localhost:8080/", "height": 294} outputId="688d9fdf-e0cf-412c-fb07-7136bf3893d9"
df_2009.groupby("Marca")["lucro"].sum().plot.bar(title="Lucro x Marca")
plt.xlabel("Marca")
plt.ylabel("Lucro")
plt.xticks(rotation='horizontal');
# + id="xguSC8ya_mr7" colab={"base_uri": "https://localhost:8080/", "height": 294} outputId="d3d6957f-8f78-4c60-9621-c4350d58fef0"
df_2009.groupby("Classe")["lucro"].sum().plot.bar(title="Lucro x Classe")
plt.xlabel("Classe")
plt.ylabel("Lucro")
plt.xticks(rotation='horizontal');
# + id="IbO8CjekDdbk" colab={"base_uri": "https://localhost:8080/"} outputId="ee36832e-e5de-4db2-d89a-984b5da54e30"
df["Tempo_envio"].describe()
# + id="yVBuChl7D-LK" colab={"base_uri": "https://localhost:8080/", "height": 347} outputId="4a661f48-67f7-414e-9993-941548179c6d"
# Boxplot
plt.boxplot(df["Tempo_envio"]);
# + id="AAso8LU5GiFN" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="e12a92e9-9dc6-4921-9331-fde660a0b6a2"
# Histogram
plt.hist(df["Tempo_envio"]);
# + id="hkxhLlATHMN3" colab={"base_uri": "https://localhost:8080/"} outputId="5ddd41fd-17d8-44e7-bb18-466b0851a161"
# Minimum shipping time
df["Tempo_envio"].min()
# + id="qg1q3fAKIDtM" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9f4667ac-1557-4591-9fdc-5436c14099c5"
# Maximum shipping time
df['Tempo_envio'].max()
# + id="BiOyhekfIgLb" colab={"base_uri": "https://localhost:8080/", "height": 168} outputId="d7f75506-4083-4989-b510-2930cd10a432"
# Identifying the outlier
df[df["Tempo_envio"] == 20]
# + id="xL5IKMeeLI6v"
df.to_csv("df_vendas_novo.csv", index=False)
# + id="NLtTuecu62_h"
# /start.ipynb (https://github.com/BetterWill/Test_01)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# SELECT
#   aa.asset_seriesName_showTitle_channelName,
#   session_videoType,
#   COUNT(1) AS ct,
#   AVG(aa.asset_contentLengthMs / 60000) AS cl,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(80)] / 60000 AS pt80,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(81)] / 60000 AS pt81,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(82)] / 60000 AS pt82,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(83)] / 60000 AS pt83,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(84)] / 60000 AS pt84,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(85)] / 60000 AS pt85,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(86)] / 60000 AS pt86,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(87)] / 60000 AS pt87,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(88)] / 60000 AS pt88,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(89)] / 60000 AS pt89,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(90)] / 60000 AS pt90,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(91)] / 60000 AS pt91,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(92)] / 60000 AS pt92,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(93)] / 60000 AS pt93,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(94)] / 60000 AS pt94,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(95)] / 60000 AS pt95,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(96)] / 60000 AS pt96,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(97)] / 60000 AS pt97,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(98)] / 60000 AS pt98,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(99)] / 60000 AS pt99,
#   APPROX_QUANTILES(session_playTimeMs, 100)[OFFSET(100)] / 60000 AS pt100
# FROM (
#   SELECT asset_seriesName_showTitle_channelName, session_playTimeMs,
#          asset_contentLengthMs, session_videoType, session_date
#   FROM sony_sne.customer_view_by_uniqueid
# ) aa
# WHERE aa.session_date > '2019-10-01'
#   AND aa.session_videoType = 'VOD'
# GROUP BY 1, 2
# ORDER BY 3 DESC
# +
# %matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
import pylab
pylab.rcParams['figure.figsize'] = (30.0, 15.0)  # figure display size
# +
data_path = './data/cl_data.csv'
data = pd.read_csv(data_path)
data
# -
data.columns
data_vod = data[data['session_videoType'] == 'VOD']
data_live = data[data['session_videoType'] == 'LIVE']
data_vod
data_live
# ## Test Accuracy in different APPROX_QUANTILES
data.columns.values
for ix in range (4, 25):
print(ix, data.columns.values[ix], ' percentile estimate ', pd.Series(data['cl'] - data.iloc[:, ix]).sum())
# # Live
for ix in range (4, 25):
    print(ix, data_live.columns.values[ix], ' percentile estimate ', pd.Series(data_live['cl'] - data_live.iloc[:, ix]).sum())
for ix in range (4, 25):
print(ix, data_live.columns.values[ix], ' percentile estimate ', pd.Series((data_live['cl'] - data_live.iloc[:, ix]) * data_live['ct']).sum()/100000)
# # VoD
for ix in range (4, 25):
    print(ix, data_vod.columns.values[ix], ' percentile estimate ', pd.Series(data_vod['cl'] - data_vod.iloc[:, ix]).sum())
for ix in range (4, 25):
print(ix, data_vod.columns.values[ix], ' percentile estimate ', pd.Series((data_vod['cl'] - data_vod.iloc[:, ix]) * data_vod['ct']).sum()/100000)
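
# A visual summary (a sketch): plotting the count-weighted error against the
# percentile makes the best APPROX_QUANTILES cutoff easier to read off

# +
import matplotlib.pyplot as plt

percentiles = list(range(80, 101))  # columns 4..24 hold pt80..pt100
err_live = [((data_live['cl'] - data_live.iloc[:, ix]) * data_live['ct']).sum() / 100000
            for ix in range(4, 25)]
err_vod = [((data_vod['cl'] - data_vod.iloc[:, ix]) * data_vod['ct']).sum() / 100000
           for ix in range(4, 25)]
plt.plot(percentiles, err_live, label='LIVE')
plt.plot(percentiles, err_vod, label='VOD')
plt.axhline(0, color='k', linewidth=1)
plt.xlabel('percentile')
plt.ylabel('weighted error')
plt.legend()
# -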
# /Playground/MNIST_random_classification.ipynb (https://github.com/Black3rror/AI)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/content%2Fw2d5_tutorial_revs/tutorials/W2D5_ReinforcementLearning/W2D5_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="YL2_mjdRyaSg"
# # Neuromatch Academy: Week 2, Day 5, Tutorial 1
# # Learning to Predict
#
# __Content creators:__ Marcelo Mattar and Eric DeWitt with help from Matt Krause
#
# __Content reviewers:__ Byron Galbraith and Michael Waskom
#
# + [markdown] colab_type="text" id="JTnwS5ZOyaSm"
# ---
#
# # Tutorial objectives
#
# In this tutorial, we will learn how to estimate state-value functions in a classical conditioning paradigm using Temporal Difference (TD) learning and examine TD-errors at the presentation of the conditioned and unconditioned stimulus (CS and US) under different CS-US contingencies. These exercises will provide you with an understanding of both how reward prediction errors (RPEs) behave in classical conditioning and what we should expect if Dopamine represents a "canonical" model-free RPE.
#
# At the end of this tutorial:
# * You will learn to use the standard tapped delay line conditioning model
# * You will understand how RPEs move to CS
# * You will understand how variability in reward size effects RPEs
# * You will understand how differences in US-CS timing effect RPEs
# + cellView="code" colab={} colab_type="code" id="Tr8AjcsFyaSh"
# Imports
import numpy as np
import matplotlib.pyplot as plt
# + cellView="form" colab={} colab_type="code" id="OjLe4R9uumHw"
#@title Figure settings
import ipywidgets as widgets # interactive display
# %config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# + cellView="form" colab={} colab_type="code" id="UBN49ga4u7_n"
# @title Helper functions
from matplotlib import ticker
def plot_value_function(V, ax=None, show=True):
"""Plot V(s), the value function"""
if not ax:
fig, ax = plt.subplots()
ax.stem(V, use_line_collection=True)
ax.set_ylabel('Value')
ax.set_xlabel('State')
ax.set_title("Value function: $V(s)$")
if show:
plt.show()
def plot_tde_trace(TDE, ax=None, show=True, skip=400):
"""Plot the TD Error across trials"""
if not ax:
fig, ax = plt.subplots()
indx = np.arange(0, TDE.shape[1], skip)
im = ax.imshow(TDE[:,indx])
positions = ax.get_xticks()
# Avoid warning when setting string tick labels
ax.xaxis.set_major_locator(ticker.FixedLocator(positions))
ax.set_xticklabels([f"{int(skip * x)}" for x in positions])
ax.set_title('TD-error over learning')
ax.set_ylabel('State')
ax.set_xlabel('Iterations')
ax.figure.colorbar(im)
if show:
plt.show()
def learning_summary_plot(V, TDE):
"""Summary plot for Ex1"""
fig, (ax1, ax2) = plt.subplots(nrows = 2, gridspec_kw={'height_ratios': [1, 2]})
plot_value_function(V, ax=ax1, show=False)
plot_tde_trace(TDE, ax=ax2, show=False)
plt.tight_layout()
def reward_guesser_title_hint(r1, r2):
""""Provide a mildly obfuscated hint for a demo."""
if (r1==14 and r2==6) or (r1==6 and r2==14):
return "Technically correct...(the best kind of correct)"
if ~(~(r1+r2) ^ 11) - 1 == (6 | 24): # Don't spoil the fun :-)
return "Congratulations! You solved it!"
return "Keep trying...."
#@title Default title text
class ClassicalConditioning:
def __init__(self, n_steps, reward_magnitude, reward_time):
# Task variables
self.n_steps = n_steps
self.n_actions = 0
self.cs_time = int(n_steps/4) - 1
# Reward variables
self.reward_state = [0,0]
self.reward_magnitude = None
self.reward_probability = None
self.reward_time = None
self.set_reward(reward_magnitude, reward_time)
# Time step at which the conditioned stimulus is presented
# Create a state dictionary
self._create_state_dictionary()
def set_reward(self, reward_magnitude, reward_time):
"""
Determine reward state and magnitude of reward
"""
if reward_time >= self.n_steps - self.cs_time:
self.reward_magnitude = 0
else:
self.reward_magnitude = reward_magnitude
self.reward_state = [1, reward_time]
def get_outcome(self, current_state):
"""
Determine next state and reward
"""
# Update state
if current_state < self.n_steps - 1:
next_state = current_state + 1
else:
next_state = 0
# Check for reward
if self.reward_state == self.state_dict[current_state]:
reward = self.reward_magnitude
else:
reward = 0
return next_state, reward
def _create_state_dictionary(self):
"""
This dictionary maps number of time steps/ state identities
in each episode to some useful state attributes:
state - 0 1 2 3 4 5 (cs) 6 7 8 9 10 11 12 ...
is_delay - 0 0 0 0 0 0 (cs) 1 1 1 1 1 1 1 ...
t_in_delay - 0 0 0 0 0 0 (cs) 1 2 3 4 5 6 7 ...
"""
d = 0
self.state_dict = {}
for s in range(self.n_steps):
if s <= self.cs_time:
self.state_dict[s] = [0,0]
else:
d += 1 # Time in delay
self.state_dict[s] = [1,d]
class MultiRewardCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that one randomly selected reward,
magnitude, from a list, is delivered of a single fixed reward."""
def __init__(self, n_steps, reward_magnitudes, reward_time=None):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: LIST of possible reward magnitudes.
- reward_time: Single fixed reward time
Uses numpy global random state.
"""
super().__init__(n_steps, 1, reward_time)
self.reward_magnitudes = reward_magnitudes
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward=np.random.choice(self.reward_magnitudes)
return next_state, reward
class ProbabilisticCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that rewards are stochastically omitted."""
def __init__(self, n_steps, reward_magnitude, reward_time=None, p_reward=0.75):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: Reward magnitudes.
- reward_time: Single fixed reward time.
- p_reward: probability that reward is actually delivered in rewarding state
Uses numpy global random state.
"""
super().__init__(n_steps, reward_magnitude, reward_time)
self.p_reward = p_reward
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward*= int(np.random.uniform(size=1)[0] < self.p_reward)
return next_state, reward
# + [markdown] colab_type="text" id="0q72Sto0S2F5"
# ---
# # Section 1: TD-learning
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 518} colab_type="code" id="3-im6zneSRW7" outputId="d2302f76-1f1d-477e-a3f5-eafc27aab0d4"
#@title Video 1: Introduction
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="YoNbc9M92YY", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
# + [markdown] colab_type="text" id="HB2U9wCNyaSo"
# __Environment:__
#
# - The agent experiences the environment in episodes or trials.
# - Episodes terminate by transitioning to the inter-trial-interval (ITI) state and they are initiated from the ITI state as well. We clamp the value of the terminal/ITI states to zero.
# - The classical conditioning environment is composed of a sequence of states that the agent deterministically transitions through. Starting at State 0, the agent moves to State 1 in the first step, from State 1 to State 2 in the second, and so on. These states represent time in the tapped delay line representation
# - Within each episode, the agent is presented a CS and US (reward).
# - For each exercise, we use a different CS-US contingency.
# - The agent's goal is to learn to predict expected rewards from each state in the trial.
#
#
# __Definitions:__
#
# 1. Returns:
# \begin{align}
# G_{t} = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + ... = \sum \limits_{k = 1}^{\infty} \gamma^{k-1} r_{t+k}
# \end{align}
#
# 2. Value:
# \begin{align}
# V(s_{t}) = \mathbb{E} [ G_{t} | s_{t}] = \mathbb{E} [r_{t+1} + \gamma V_{t+1} | s_{t}]
# \end{align}
#
# 3. TD-error:
# \begin{align}
# \delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})
# \end{align}
#
# 4. Value updates:
# \begin{align}
# V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}
# \end{align}
#
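# A quick worked example (a sketch, not part of the original tutorial): if
# $V(s_t)=2$, $V(s_{t+1})=5$, $r_{t+1}=0$, $\gamma=0.98$ and $\alpha=0.1$, then
# $\delta_t = 0 + 0.98 \cdot 5 - 2 = 2.9$, and the update gives
# $V(s_t) \leftarrow 2 + 0.1 \cdot 2.9 = 2.29$. The same arithmetic in code:

# +
gamma_demo, alpha_demo = 0.98, 0.1
v_t, v_next, r_next = 2.0, 5.0, 0.0
delta = r_next + gamma_demo * v_next - v_t  # TD-error: 2.9
v_t += alpha_demo * delta                   # updated value: 2.29
print(delta, v_t)
# -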
# + [markdown] colab_type="text" id="jMwFbZ-GvLhc"
# ## Exercise 1: TD-learning with guaranteed rewards
#
# Implement TD-learning to estimate the state-value function in the classical-conditioning world with guaranteed rewards, with a fixed magnitude, at a fixed delay after the CS. Save TD-errors over learning so we can visualize them -- you're going to need to compute them anyway.
#
# Use the provided code to estimate the value function.
# + colab={} colab_type="code" id="mr6FG_wFu0RD"
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
########################################################################
## TODO for students: implement TD error and value function update
# Fill out function and remove
raise NotImplementedError("Student excercise: implement TD error and value function update")
#################################################################################
# Write an expression to compute the TD-error
TDE[state, n] = ...
# Write an expression to update the value function
V[state] += ...
# Update state
state = next_state
return V, TDE
# Uncomment once the td_learner function is complete
# env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
# V, TDE = td_learner(env, n_trials=20000)
# learning_summary_plot(V, TDE)
# + colab={"base_uri": "https://localhost:8080/", "height": 429} colab_type="code" id="byxMuD1LxTVc" outputId="da1f9e77-97ec-456e-b6e5-3e64166ea982"
#to_remove solution
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
# Write an expression to compute the TD-error
TDE[state, n] = (reward + gamma * V[next_state] - V[state])
# Write an expression to update the value function
V[state] += alpha * TDE[state, n] * is_delay
# Update state
state = next_state
return V, TDE
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
V, TDE = td_learner(env, n_trials=20000)
with plt.xkcd():
learning_summary_plot(V, TDE)
# + [markdown] colab_type="text" id="ROKwAo1FKAqe"
# ## Interactive Demo 1: US to CS Transfer
#
# During classical conditioning, the subject's behavioral response (e.g., salivating) transfers from the unconditioned stimulus (US; like the smell of tasty food) to the conditioned stimulus (CS; like Pavlov ringing his bell) that predicts it. Reward prediction errors play an important role in this process by adjusting the value of states according to their expected, discounted return.
#
# Use the widget below to examine how reward prediction errors change over time. Before training (orange line), only the reward state has high reward prediction error. As training progresses (blue line, slider), the reward prediction errors shift to the conditioned stimulus, where they end up when the trial is complete (green line).
#
# Dopamine neurons, which are thought to carry reward prediction errors _in vivo_, show exactly the same behavior!
#
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 461, "referenced_widgets": ["1c791e2f446e4dd285c8967347be734b", "6cad7f364ffd4927926d0f6a08d6213f", "89c0229375a84de185c4f0114578ed09", "bbdcd8d2c92f425aa8c20f7e47e5e727", "495a849ed7304404ab2ba9e8bb9cc8ad", "02fd572f843e4138aee41d6f3b058790", "884533e643b54b4f80ea5023d9477626"]} colab_type="code" id="2UwSevZG82-M" outputId="e211b531-7b61-4a79-e0c6-f60e2a5b3504"
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
@widgets.interact
def plot_tde_by_trial(trial = widgets.IntSlider(value=5000, min=0, max=n_trials-1 , step=1, description="Trial #")):
if 'TDE' not in globals():
print("Complete Exercise 1 to enable this interactive demo!")
else:
fig, ax = plt.subplots()
ax.axhline(0, color='k') # Use this + basefmt=' ' to keep the legend clean.
ax.stem(TDE[:, 0], linefmt='C1-', markerfmt='C1d', basefmt=' ',
label="Before Learning (Trial 0)",
use_line_collection=True)
ax.stem(TDE[:, -1], linefmt='C2-', markerfmt='C2s', basefmt=' ',
label="After Learning (Trial $\infty$)",
use_line_collection=True)
ax.stem(TDE[:, trial], linefmt='C0-', markerfmt='C0o', basefmt=' ',
label=f"Trial {trial}",
use_line_collection=True)
ax.set_xlabel("State in trial")
ax.set_ylabel("TD Error")
ax.set_title("Temporal Difference Error by Trial")
ax.legend()
# + [markdown] colab_type="text" id="XZd8QkhKcHBQ"
# ## Interactive Demo 2: Learning Rates and Discount Factors
#
# Our TD-learning agent has two parameters that control how it learns: $\alpha$, the learning rate, and $\gamma$, the discount factor. In Exercise 1, we set these parameters to $\alpha=0.001$ and $\gamma=0.98$ for you. Here, you'll investigate how changing these parameters alters the model that TD-learning learns.
#
# Before enabling the interactive demo below, take a moment to think about the functions of these two parameters. $\alpha$ controls the size of the Value function updates produced by each TD-error. In our simple, deterministic world, will this affect the final model we learn? Is a larger $\alpha$ necessarily better in more complex, realistic environments?
#
# The discount rate $\gamma$ applies an exponentially-decaying weight to returns occuring in the future, rather than the present timestep. How does this affect the model we learn? What happens when $\gamma=0$ or $\gamma \geq 1$?
#
# Use the widget to test your hypotheses.
#
#
#
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 494, "referenced_widgets": ["77f998f3160140bfbcaaa6b3b981acb2", "41fdf67eefee4b4e807f85d23ed6c81a", "58cdf8f357e24943b1479cce0b1eb639", "7b34445f59344edcad2c064fece1f150", "40524e90c7c74da283515bc84ce2543c", "0b0b91f8fba14251b27ee28c70133854", "96b07ee8588947e081d696fdc94bcb1b", "a50c7b2a97834fc89e738d5b1f6fce23", "f2a024230f434794a0c83818e580c68f", "219655c48ebb44fca9d24225400ae3c6"]} colab_type="code" id="C_9pB-UhkfHy" outputId="a8819e1d-d39a-48a4-8942-296d27dee278"
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_summary_alpha_gamma(alpha = widgets.FloatSlider(value=0.0001, min=0.001, max=0.1, step=0.0001, description="alpha"),
gamma = widgets.FloatSlider(value=0.980, min=0, max=1.1, step=0.010, description="gamma")):
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
try:
V_params, TDE_params = td_learner(env, n_trials=20000, gamma=gamma, alpha=alpha)
except NotImplementedError:
print("Finish Exercise 1 to enable this interactive demo")
learning_summary_plot(V_params,TDE_params)
# + colab={} colab_type="code" id="fSoRYk0DiAQA"
#to_remove explanation
"""
Alpha determines how fast the model learns. In the simple, deterministic world
we're using here, this allows the moodel to quickly converge onto the "true"
model that heavily values the conditioned stimulus. In more complex environments,
however, excessively large values of alpha can slow, or even prevent, learning,
as we'll see later.
Gamma effectively controls how much the model cares about the future: larger values of
gamma cause the model to weight future rewards nearly as much as present ones. At gamma=1,
the model weights all rewards, regardless of when they occur, equally and when greater than one, it
starts to *prefer* rewards in the future, rather than the present (this is rarely good).
When gamma=0, however, the model becomes greedy and only considers rewards that
can be obtained immediately.
""";
# + [markdown] colab_type="text" id="Eruos0VKGaG1"
# ---
# # Section 2: TD-learning with varying reward magnitudes
#
# In the previous exercise, the environment was as simple as possible. On every trial, the CS predicted the same reward, at the same time, with 100% certainty. In the next few exercises, we will make the environment progressively more complicated and examine the TD-learner's behavior.
#
# + [markdown] colab_type="text" id="d7jLqcoQtnLW"
# ## Interactive Demo 3: Match the Value Functions
#
# First, we will replace the environment with one that dispenses multiple rewards. Shown below is the final value function $V$ for a TD learner that was trained in an environment where the CS predicted rewards of 6 or 14 units (both equally likely). Can you find another pair of rewards, both equally likely, that exactly match this value function?
#
# Hints:
# * Carefully consider the definition of the value function $V$. This can be solved analytically.
# * There is no need to change $\alpha$ or $\gamma$.
# * Due to the randomness, there may be a small amount of variation.
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 493, "referenced_widgets": ["f14971dd12e84e9bb60f9f272652eb5f", "d98882811dcd4146adcdfcf09ccd467a", "5249f8b304b94b3394df785b2beca0bf", "b38b54b44e834478b4314ac53cebe443", "31617e91bd4745efb2c61d9fb6901a2c", "384645659aa04240a52857f63bec5aa1", "16d857923dc34a27878dab9d95a50033", "356b0a68c8b7496da9b1082d904d7a69", "852b766db6d34532968ccfc1ae6f2025", "db943755bdf84d8ea51ff58be7fdebc1"]} colab_type="code" id="ebkTe4kbHImq" outputId="17d844d3-a4f4-463f-ff47-be0991d03040"
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
np.random.seed(2020)
rng_state = np.random.get_state()
env = MultiRewardCC(40, [6, 14], reward_time=10)
V_multi, TDE_multi = td_learner(env, n_trials, gamma=0.98, alpha=0.001)
@widgets.interact
def reward_guesser_interaction(r1 = widgets.IntText(value=0, min=0, max=50, description="Reward 1"),
r2 = widgets.IntText(value=0, min=0, max=50, description="Reward 2")):
try:
env2 = MultiRewardCC(40, [r1, r2], reward_time=10)
V_guess, _ = td_learner(env2, n_trials, gamma=0.98, alpha=0.001)
fig, ax = plt.subplots()
m, l, _ = ax.stem(V_multi, linefmt='y-', markerfmt='yo', basefmt=' ', label="Target",
use_line_collection=True)
m.set_markersize(15)
m.set_markerfacecolor('none')
l.set_linewidth(4)
m, _, _ = ax.stem(V_guess, linefmt='r', markerfmt='rx', basefmt=' ', label="Guess",
use_line_collection=True)
m.set_markersize(15)
ax.set_xlabel("State")
ax.set_ylabel("Value")
ax.set_title("Guess V(s)\n" + reward_guesser_title_hint(r1, r2))
ax.legend()
except NotImplementedError:
print("Please finish Exercise 1 first!")
# + [markdown] colab_type="text" id="MYPXCECE2w1G"
# ## Section 2.1 Examining the TD Error
#
# Run the cell below to plot the TD errors from our multi-reward environment. A new feature appears in this plot. What is it? Why does it happen?
# + colab={"base_uri": "https://localhost:8080/", "height": 421} colab_type="code" id="TlCisR8rHK43" outputId="4055d6ba-44d4-449e-ac02-88a5a9ccc809"
plot_tde_trace(TDE_multi)
# + colab={} colab_type="code" id="cgKx5wTk3hy-"
#to_remove explanation
"""
The TD trace now takes on negative values because the reward delivered is
sometimes larger than the expected reward and sometimes smaller.
""";
# + [markdown] colab_type="text" id="4Gi7Q1AFGeTU"
# ---
# # Section 3: TD-learning with probabilistic rewards
#
# In this environment, we'll return to delivering a single reward of ten units. However, it will be delivered intermittently: on 20 percent of trials, the CS will be shown but the agent will not receive the usual reward; the remaining 80% will proceed as usual.
#
# Run the cell below to simulate. How does this compare with the previous experiment?
#
# Earlier in the notebook, we saw that changing $\alpha$ had little effect on learning in a deterministic environment. What happens if you set it to a large value, like 1, in this noisier scenario? Does it seem like it will _ever_ converge?
# + colab={"base_uri": "https://localhost:8080/", "height": 429} colab_type="code" id="LSn9SKRzLZ1t" outputId="4fa6612c-9a39-4eaa-a9ed-4189e1120425"
np.random.set_state(rng_state) # Resynchronize everyone's notebooks
n_trials = 20000
try:
env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10,
p_reward=0.8)
V_stochastic, TDE_stochastic = td_learner(env, n_trials*2, alpha=1)
learning_summary_plot(V_stochastic, TDE_stochastic)
except NotImplementedError:
print("Please finish Exercise 1 first")
# + colab={} colab_type="code" id="Q6gf1HdZ9rFw"
#to_remove explanation
"""
The multi-reward and probabilistic reward environments are the same. You
could simulate a probabilistic reward of 10 units, delivered 50% of the time,
by having a mixture of 10 and 0 unit rewards, or vice versa. The takehome message
from these last three exercises is that the *average* or expected reward is
what matters during TD learning.
Large values of alpha prevent the TD Learner from converging. As a result, the
value function seems implausible: one state may have an extremely low value while
the neighboring ones remain high. This pattern persists even if training continues
for hundreds of thousands of trials.
""";
# + [markdown] colab_type="text" id="CTABhuUPuoEW"
# ---
# # Summary
#
# In this notebook, we have developed a simple TD Learner and examined how its state representations and reward prediction errors evolve during training. By manipulating its environment and parameters ($\alpha$, $\gamma$), you developed an intuition for how it behaves. The take-home lesson is that TD learning tracks the *expected*, discounted return of each state, not any individual outcome.
#
# This simple model closely resembles the behavior of subjects undergoing classical conditioning tasks and the dopamine neurons that may underlie that behavior. You may have implemented TD-reset or used the model to recreate a common experimental error. The update rule used here has been extensively studied for [more than 70 years](https://www.pnas.org/content/108/Supplement_3/15647) as a possible explanation for artificial and biological learning.
#
# However, you may have noticed that something is missing from this notebook. We carefully calculated the value of each state, but did not use it to actually do anything. Using values to plan _**Actions**_ is coming up next!
# + [markdown] colab_type="text" id="vk8qckyuTK0O"
# # Bonus
# + [markdown] colab_type="text" id="s0IQkuPayaS5"
# ## Exercise 2: TD-reset
#
# In this exercise we will implement a heuristic commonly used when modeling the activity of dopamine neurons: TD-reset.
# Implement TD-learning as in previous exercises, but set TD-error to zero on all steps after reward (US).
#
# 1. Plot value function and TD-errors.
# 2. Can you explain how the reset is changing the TD-errors and value function?
# + colab={} colab_type="code" id="Ni9r7_csLwQr"
def td_reset_learner(env, n_trials, alpha=0.25, gamma=0.98):
""" Temporal Difference learning with the TD-reset update rule
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps)
TDE_reset = np.zeros((env.n_steps, n_trials))
for n in range(n_trials):
state = 0
reset = False
for t in range(env.n_steps):
next_state, reward = env.get_outcome(state)
is_delay = env.state_dict[state][0]
########################################################################
## TODO for students: implement TD learning with the TD-reset update rule
# Fill out function and remove
raise NotImplementedError("Student excercise: implement TD learning with the TD-reset update rule")
########################################################################
# Write an expression to compute the TD-error using the TD-reset rule
            if reset:
                TDE_reset[state, n] = ...
            else:
                TDE_reset[state, n] = ...
# Set reset flag if we receive a reward > 0
if reward > 0:
reset = True
# Write an expression to update the value function
V[state] += ...
# Update state
state = next_state
return V, TDE_reset
# Uncomment these two lines to visualize your results
# env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10,
# p_reward=0.8)
# V_reset, TDE_reset = td_reset_learner(env, n_trials=20000)
# learning_summary_plot(V_reset, TDE_reset)
# + colab={"base_uri": "https://localhost:8080/", "height": 429} colab_type="code" id="gjtFVHslLyMb" outputId="a456cef3-c2a5-4056-e9f1-e22a4a12e674"
#to_remove solution
def td_reset_learner(env, n_trials, alpha=0.25, gamma=0.98):
""" Temporal Difference learning with reset
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps)
TDE_reset = np.zeros((env.n_steps, n_trials))
for n in range(n_trials):
state = 0
reset = False
for t in range(env.n_steps):
next_state, reward = env.get_outcome(state)
is_delay = env.state_dict[state][0]
# Write an expression to compute the TD-error using the TD-reset rule
            if reset:
                TDE_reset[state, n] = 0
            else:
                TDE_reset[state, n] = reward + gamma * V[next_state] - V[state]
# Set reset flag if we receive a reward > 0
if reward > 0:
reset = True
# Write an expression to update the value function
V[state] += alpha * TDE_reset[state,n] * is_delay
# Update state
state = next_state
return V, TDE_reset
env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10,
p_reward=0.8)
V_reset, TDE_reset = td_reset_learner(env, n_trials=20000)
with plt.xkcd():
learning_summary_plot(V_reset, TDE_reset)
# + [markdown] colab_type="text" id="4KpOTPk_JT1l"
# ## Exercise 3: Removing the CS
#
# In Exercise 1, you (should have) included a term that depends on the conditioned stimulus. Remove it and see what happens. Do you understand why?
# This phenomenon often fools people attempting to train animals--beware!
# + colab={} colab_type="code" id="gFQBlEkAJzOx"
#to_remove explanation
"""
You should only be updating the V[state] once the conditioned stimulus appears.
If you remove this term the Value Function becomes periodic, dropping towards zero
right after the reward and gradually rising towards the end of the trial. This
behavior is actually correct, because the model is learning the time until the
*next* reward, and State 37 is closer to a reward than State 21 or 22.
In an actual experiment, the animal often just wants rewards; it doesn't care about
/your/ experiment or trial structure!
""";
# /book_code/ch4.ipynb (https://github.com/jackmoody11/statistical-rethinking)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: 'Python 3.7.3 64-bit (''venv'': venv)'
# name: python_defaultSpec_1597955304258
# ---
# +
# Book Code 4.5
import matplotlib.pyplot as plt
import numpy as np
log_big = np.array([np.log(np.product(1 + np.random.uniform(0, 0.5, 12))) for _ in range(1000)])
plt.hist(log_big)
plt.show()
# The code below ends up being non-normal because multiplying numbers far from 1
# does not behave like adding, while multiplying numbers close to 1 does act like adding
big = np.array([np.product(1 + np.random.uniform(0, 2, 12)) for _ in range(1000)])
plt.hist(big)
plt.show()
# -
# Book Code 4.14
sample_mu = np.random.normal(178, 20, 1000)
sample_sigma = np.random.uniform(0, 50, 1000)
prior_h = np.random.normal(sample_mu, sample_sigma, 1000)
plt.hist(prior_h)
# Book Code 4.15
sample_mu = np.random.normal(178, 100, 1000)
prior_h = np.random.normal(sample_mu, sample_sigma, 1000)
plt.hist(prior_h)
# +
import matplotlib.pyplot as plt
# 4M1
n_simulations = 1000
mu = np.random.normal(0, 10, n_simulations)
sigma = np.random.exponential(1, n_simulations)  # one sigma draw per simulation, matching mu
y = np.random.normal(mu, sigma)
plt.hist(y)
# -
# # 4M3
#
#
# $y_i \sim \text{Normal}(\mu_i, \sigma)$
#
# $\mu_i = a + b \cdot x_i$
#
# $a \sim \text{Normal}(0, 10)$
#
# $b \sim \text{Uniform}(0, 1)$
#
# $\sigma \sim \text{Exponential}(1)$
# # 4M4
#
# $y_i \sim \text{Normal}(\mu_i, \sigma)$: Will model using linear regression
#
# $\mu_i = a + b \cdot x_i$: Expect average height to depend on year
#
# $a \sim \text{Normal}(0, 1)$: Coefficient for average height may vary depending on data and can be negative
#
# $b \sim \text{Log-Normal}(0, 1)$: Expect height of all students to increase over the course of the 3 years
#
# $\sigma \sim \text{Uniform}(0, 20)$: Variance must be positive. Uniform distribution helps us keep variance positive.
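# A minimal prior-predictive sketch for the 4M4 model above; the year values
# (1, 2, 3) and the number of draws are illustrative assumptions, not from the text.
# +
n_sims = 1000
a = np.random.normal(0, 1, n_sims)        # intercept prior
b = np.random.lognormal(0, 1, n_sims)     # slope prior (positive growth)
sigma = np.random.uniform(0, 20, n_sims)  # sd prior
for year in (1, 2, 3):                    # hypothetical measurement years
    plt.hist(np.random.normal(a + b * year, sigma), alpha=0.5, label=f'year {year}')
plt.legend()
plt.show()
# -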
# +
# 4H1
import pandas as pd
kung = pd.read_csv("../data/Howell1.csv", sep=";")
kung = kung[kung.age >= 18]
kung
# -
kung.loc[:, ['weight', 'height']].plot.scatter(x='weight', y='height')
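# Following the scatter above, a quick least-squares sketch with numpy's polyfit
# (an illustrative method choice; the book fits this model differently):
# +
b, a = np.polyfit(kung.weight, kung.height, 1)  # slope, intercept
xs = np.linspace(kung.weight.min(), kung.weight.max(), 50)
plt.scatter(kung.weight, kung.height, alpha=0.4)
plt.plot(xs, a + b * xs, color='C1')
plt.xlabel('weight')
plt.ylabel('height')
plt.show()
# -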
| 2,203 |
/chapter3/ch3.ipynb | d3e2500390fdacd6b72768475141899c6ac631ad | [] | no_license | h-asano/nlp100knock | https://github.com/h-asano/nlp100knock | 1 | 0 | null | 2017-01-29T06:35:14 | 2017-01-29T06:28:02 | null | Jupyter Notebook | false | false | .py | 33,645 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Chapter 3: Regular Expressions
# ---
# There is a file [jawiki-country.json.gz](http://www.cl.ecei.tohoku.ac.jp/nlp100/data/jawiki-country.json.gz) containing Wikipedia articles in the following format:
# - Each line stores the information of one article in JSON format
# - Each line holds the article title under the "title" key and the article body under the "text" key of a dictionary object, serialized as JSON
# - The whole file is compressed with gzip
#
# Write programs that perform the following processing.
# ## 20. Reading JSON data
# Read the JSON file of Wikipedia articles and display the body of the article about 'イギリス' (the United Kingdom). For problems 21-29, run them against the article body extracted here.
# !wget 'http://www.cl.ecei.tohoku.ac.jp/nlp100/data/jawiki-country.json.gz'
# +
# %%file q20.py
import json
import sys
for line in sys.stdin:
wiki_dict = json.loads(line)
if wiki_dict['title'] == 'イギリス':
print(wiki_dict.get('text'))
# -
# !gzcat jawiki-country.json.gz | python q20.py > uk.txt
# !head uk.txt
# ### The gzip command
# !gzip -dc jawiki-country.json.gz | python q20.py | head
# ### python
# +
# %%file q20_2.py
import gzip
import json
import sys
with gzip.open('jawiki-country.json.gz') as fi:
for line in fi:
wiki_dict = json.loads(line.decode('utf-8'))
if wiki_dict['title'] == 'イギリス':
print(wiki_dict.get('text'))
# -
# !python q20_2.py | head
# ## 21. Extract lines containing category names
# Extract the lines that declare category names in the article.
# +
# %%file q21.py
import sys
for line in sys.stdin:
lowered = line.lower()
if lowered.startswith('[[category'):
print(line.rstrip())
# -
# !python q21.py < uk.txt
# ## 22. Extract category names
# Extract the article's category names (as names, not whole lines).
# +
# %%file q22.py
import sys

PREFIX = "[[category:"
for line in sys.stdin:
    # str.lstrip strips a character *set*, not a prefix, so slice the prefix off instead
    line = line.lower().rstrip("]\n")
    print(line[len(PREFIX):] if line.startswith(PREFIX) else line)
# -
# !python q21.py < uk.txt | python q22.py
# ## 23. Section structure
# Display the section names in the article together with their levels (e.g., "== Section name ==" is level 1).
# +
# %%file q23.py
# e.g. "==国名==" → section name 国名, level 1
import sys
for line in sys.stdin:
if line.startswith('=='):
sec_name = line.strip('= \n')
level = int(line.count('=')/2 - 1)
print(sec_name, 'レベル'+str(level))
# -
# !python q23.py < uk.txt | head
# ## 24. Extracting file references
# Extract all media files referenced in the article.
# +
# %%file q24.py
# References look like [[File:Battle of Waterloo 1815.PNG|
import re
import sys
pat = re.compile(r'([fF]ile:|ファイル:)(?P<filename>.+?)\|')
for line in sys.stdin:
for m in pat.finditer(line):
print(m.group('filename'))
# -
# !python q24.py < uk.txt | head
# ## 25. Template extraction
# Extract the field names and values of the 基礎情報 (basic information) template in the article and store them in a dictionary object.
# +
# %%file q25.py
"""
{{基礎情報 国
|略名 = イギリス
|日本語国名 = グレートブリテン及び北アイルランド連合王国
|公式国名 = {{lang|en|United Kingdom of Great Britain and Northern Ireland}}<ref>英語以外での正式国名:<br/>
*{{lang|gd|An Rìoghachd Aonaichte na Breatainn Mhòr agus Eirinn mu Thuath}}([[スコットランド・ゲール語]])<br/>
*{{lang|cy|Teyrnas Gyfunol Prydain Fawr a Gogledd Iwerddon}}([[ウェールズ語]])<br/>
"""
import sys
import json
def main():
dic = extract_baseinf(sys.stdin)
sys.stdout.write(json.dumps(dic, ensure_ascii=False))
for k, v in dic.items():
print(k, v, file=sys.stderr)
def extract_baseinf(fi):
baseinf = {}
isbaseinf = False
for line in fi:
if isbaseinf:
if line.startswith('}}'):
return baseinf
elif line[0] == '|':
templis = line.strip('|\n').split('=')
key = templis[0].strip()
value = "=".join(templis[1:])
baseinf[key] = value
else:
baseinf[key] += '\n{}'.format(line.rstrip('\n'))
elif line.startswith('{{基礎情報'):
isbaseinf = True
if __name__ == '__main__':
main()
# -
# !python q25.py < uk.txt > uk_baseinf.dict.json
# ## 26. Removing emphasis markup
# In addition to the processing of 25, remove MediaWiki emphasis markup (weak emphasis, emphasis, and strong emphasis) from the template values and convert them to plain text (ref: [markup quick reference](https://ja.wikipedia.org/wiki/Help:%E6%97%A9%E8%A6%8B%E8%A1%A8)).
# +
# %%file q26.py
"""
Remove runs of 2, 3, or 5 consecutive ' characters
"""
import json
import re
import sys
from pprint import pprint
def main():
dic = json.loads(sys.stdin.read())
dic = no_emphasis(dic)
pprint(dic)
# sys.stdout.write(json.dumps(dic))
def no_emphasis(dic):
for key, value in dic.items():
for n in (5, 3, 2):
eliminated = value.split("'" * n)
div, mod = divmod(len(eliminated), 2)
if div > 0 and mod != 0:
value = ''.join(eliminated)
dic[key] = value
break
# print(key, value,file=sys.stderr)
return dic
if __name__ == '__main__':
main()
# -
# !python q26.py < uk_baseinf.dict.json
# ## 27. Removing internal links
# In addition to the processing of 26, remove MediaWiki internal-link markup from the template values and convert them to text (ref: [markup quick reference](https://ja.wikipedia.org/wiki/Help:%E6%97%A9%E8%A6%8B%E8%A1%A8)).
# +
# %%file q27.py
"""
[[記事名]]
[[記事名|表示文字]]
[[記事名#節名|表示文字]]
"""
import json
import re
import sys
from pprint import pprint
from q26 import no_emphasis
def main():
dic = json.loads(sys.stdin.read())
dic = no_emphasis(dic)
dic = eliminate_link(dic)
pprint(dic)
# sys.stdout.write(json.dumps(dic))
def eliminate_link(dic):
pat = re.compile(r"""
\[\[ # [[
    ([^|]+\|)* # article name| -- this part may be absent or repeated
    ([^]]+)    # display text -- the matched span is replaced by this group
\]\] # ]]
""", re.VERBOSE)
for key, value in dic.items():
value = pat.sub(r'\2', value)
dic[key] = value
return dic
if __name__ == '__main__':
main()
# -
# !python q27.py < uk_baseinf.dict.json
# ## 28. Removing MediaWiki markup
# In addition to the processing of 27, remove MediaWiki markup from the template values as much as possible and format the country's basic information.
# +
# %%file q28.py
import json
import sys
from pprint import pprint
from pypandoc import convert
def main():
dic = json.loads(sys.stdin.read())
for key, value in dic.items():
dic[key] = convert(value, 'plain', format='mediawiki').rstrip()
pprint(dic)
if __name__ == '__main__':
main()
# -
# !python q28.py < uk_baseinf.dict.json
# ## 29. Getting the URL of the national flag image
# Use the contents of the template to obtain the URL of the national flag image. (Hint: call [imageinfo](https://www.mediawiki.org/wiki/API:Properties/ja#imageinfo_.2F_ii) of the [MediaWiki API](https://www.mediawiki.org/wiki/API:Main_page/ja) to convert the file reference into a URL.)
# +
# %%file q29.py
import sys
import json
import requests
baseinf = json.loads(sys.stdin.read())
r = requests.get('https://commons.wikimedia.org/w/api.php',
{'action': 'query', 'prop': 'imageinfo', 'iiprop': 'url',
'format': 'json', 'titles': 'File:{}'.format(baseinf['国旗画像'])})
data = r.json()
print(data['query']['pages']['347935']['imageinfo'][0]['url'])
# -
# !python q29.py < uk_baseinf.dict.json
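# The hard-coded page id '347935' above is brittle; a variant (hypothetical file
# name q29_2.py) that iterates over whatever page ids the API returns:
# +
# %%file q29_2.py
import sys
import json
import requests

baseinf = json.loads(sys.stdin.read())
r = requests.get('https://commons.wikimedia.org/w/api.php',
                 {'action': 'query', 'prop': 'imageinfo', 'iiprop': 'url',
                  'format': 'json', 'titles': 'File:{}'.format(baseinf['国旗画像'])})
# Iterate over the pages dict instead of assuming a fixed page id
for page in r.json()['query']['pages'].values():
    print(page['imageinfo'][0]['url'])
# -
# !python q29_2.py < uk_baseinf.dict.json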
df_filtered = df_filtered[df_filtered.dictamen != 1338]
df_filtered = df_filtered[df_filtered.dictamen != 133]
### duplicates of resolution 20
df_filtered = df_filtered[df_filtered.dictamen != 42594380]
df_filtered = df_filtered[df_filtered.dictamen != 526]
df_filtered = df_filtered[df_filtered.dictamen != 436]
df_filtered = df_filtered[df_filtered.dictamen != 406]
### duplicates of resolution 19
df_filtered = df_filtered[df_filtered.dictamen != 699]
df_filtered = df_filtered[df_filtered.dictamen != 419]
### duplicates of resolution 149
df_filtered = df_filtered[df_filtered.dictamen != 439]
### duplicates of resolution 218
df_filtered = df_filtered[df_filtered.dictamen != 1060]
### duplicates of resolution 778
df_filtered = df_filtered[df_filtered.dictamen != 57603276]
### duplicates of resolution 655
df_filtered = df_filtered[df_filtered.dictamen != 57603276]
### duplicates of resolution 45
df_filtered = df_filtered[df_filtered.dictamen != 734]
df_filtered = df_filtered[df_filtered.dictamen != 445]
df_filtered = df_filtered[df_filtered.dictamen != 72]
### duplicates of resolution 53
df_filtered = df_filtered[df_filtered.dictamen != 688]
### duplicates of resolution 361
df_filtered = df_filtered[df_filtered.dictamen != 102]
### duplicates of resolution 124
df_filtered = df_filtered[df_filtered.dictamen != 513]
df_filtered = df_filtered[df_filtered.dictamen != 818]
df_filtered = df_filtered[df_filtered.dictamen != 556]
df_filtered = df_filtered[df_filtered.dictamen != 172]
df_filtered = df_filtered[df_filtered.dictamen != 14]
# +
# to clean up
# 38784640: this one has to be requested by email.
##df_filtered.loc[df_filtered['Nro. de Resolucion'] == 595]
# +
##df_tv.loc[df_tv['Resol. No.'] == '595']
# -
df_tv.shape
df_filtered.shape
df_filtered.Industria.unique()
plt.figure(figsize=(8,10))
df_filtered.groupby(df_filtered.Fecha.dt.year).size().plot(kind='barh')
plt.ylabel('Years')
plt.xlabel('Count')
plt.title('Database analysis')
plt.show()
## by type of conduct
df_filtered.groupby(['Tipo de Conducta/ Concentracion 1']).size().plot(kind='barh')
plt.ylabel('Type of conduct')
plt.xlabel('Count')
plt.title('Conducts according to the CNDC database')
plt.show()
# +
texto = df_filtered[df_filtered ['Tipo de Conducta/ Concentracion 1'].notnull()]['Tipo de Conducta/ Concentracion 1'].to_string().lower()
a,b = 'áéíóúü','aeiouu'
trans = str.maketrans(a,b)
texto = texto.translate(trans)
texto = re.sub('_', ' ', texto)
from wordcloud import WordCloud
# Generate a word cloud image
wordcloud = WordCloud(stopwords=stopwords, background_color='#faf5f2', width=650, height=350, max_font_size=100, max_words=220, colormap="inferno", collocations=False, normalize_plurals=False).generate(texto)
# Display the generated image:
# the matplotlib way:
plt.figure( figsize=(10,5))
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.show()
# -
| 9,592 |
/08-homework/03 - School Board Minutes.ipynb | cb165838eea25989f82ff5a76d288a89a45a848b | [] | no_license | AmyOKruk/foundations_homework | https://github.com/AmyOKruk/foundations_homework | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 12,518 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # School Board Minutes
#
# Scrape all of the school board minutes from http://www.mineral.k12.nv.us/pages/School_Board_Minutes
#
# Save a CSV called `minutes.csv` with the date and the URL to the file. The date should be formatted as YYYY-MM-DD.
#
# **Bonus:** Download the PDF files
#
# **Bonus 2:** Use [PDF OCR X](https://solutions.weblite.ca/pdfocrx/index.php) on one of the PDF files and see if it can be converted into text successfully.
#
# * **Hint:** If you're just looking for links, there are a lot of other links on that page! Can you look at the link to know whether it links or minutes or not? You'll want to use an "if" statement.
# * **Hint:** You could also filter out bad links later on using pandas instead of when scraping
# * **Hint:** If you get a weird error that you can't really figure out, you can always tell Python to just ignore it using `try` and `except`, like below. Python will try to do the stuff inside of 'try', but if it hits an error it will skip right out.
# * **Hint:** Remember the codes at http://strftime.org
# * **Hint:** If you have a date that you've parsed, you can use `.dt.strftime` to turn it into a specially-formatted string. You use the same codes (like %B etc) that you use for converting strings into dates.
#
# ```python
# try:
# blah blah your code
# your code
# your code
# except:
# pass
# ```
# +
import requests
from bs4 import BeautifulSoup
import pandas as pd
# -
response = requests.get('http://www.mineral.k12.nv.us/pages/School_Board_Minutes')
doc = BeautifulSoup(response.text)
# +
import datetime
import re

minutes = doc.find_all('a', text=re.compile(r"\d, \d{4}"))

data = []
for link in minutes:  # use a new name so the BeautifulSoup `doc` is not shadowed
    row = {}
    row['File'] = 'http://www.mineral.k12.nv.us' + link.get('href')
    row['Date'] = link.string
    data.append(row)
# -
df = pd.DataFrame(data=data)
df['Date'] = pd.to_datetime(df['Date'])
df
# +
# df.to_csv(r'minutes.csv')
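# The assignment asks for dates formatted as YYYY-MM-DD; a minimal sketch using
# dt.strftime (per the hint above) before saving to the requested minutes.csv:
df['Date'] = df['Date'].dt.strftime('%Y-%m-%d')
df.to_csv('minutes.csv', index=False)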
| 2,195 |
/Old_Photo_restore.ipynb | 8a8f6b411b45c27ec28e54b893b786c372072c6d | [
"Apache-2.0"
] | permissive | abheeshtroy/old-photo-restore-simplified | https://github.com/abheeshtroy/old-photo-restore-simplified | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 578,068 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/abheeshtroy/old-photo-restore-simplified/blob/main/Old_Photo_restore.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="Vkkr1Sq6t2lM"
# #◢ Bringing Old Photos Back to Life
# + [markdown] id="ypb6kal06Tb1"
# This is a reference implementation of paper Bringing Old Photos Back to Life, CVPR2020 (Oral) by Ziyu Wan1, Bo Zhang2, Dongdong Chen3, Pan Zhang4, Dong Chen2, Jing Liao1, Fang Wen2 City University of Hong Kong, Microsoft Research Asia, Microsoft Cloud AI, 4 USTC
#
# Through this colab notebook we have tried to modified the original implementation so that this can be easily used by any person with limited technical knowledge. And this appears like a web page for restoring their personal photos. Click on the play button at the corner of each code block. There are two code blocks. The first one is to download pre trained models, hence it takes some time. This first block needs to be executed only once for a session. The second block has to be played for each image.
#
# This notebook is created by shtadal ghosh and abheesht Roy
# + [markdown] id="Ubc05fcKzk90"
# #◢ Block One: run only once per session. Click the play button at the bottom-left corner of the cell to start.
#
# + id="32jCofdSr8AW" colab={"base_uri": "https://localhost:8080/"} outputId="f32c62a8-62f6-44ff-df45-a0f3d6bc3786"
#Clone the repo
# !git clone https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life.git photo_restoration
# pull the syncBN repo
# %cd photo_restoration/Face_Enhancement/models/networks
# !git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
# !cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
# %cd ../../../
# %cd Global/detection_models
# !git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
# !cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
# %cd ../../
# download the landmark detection model
# %cd Face_Detection/
# !wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
# !bzip2 -d shape_predictor_68_face_landmarks.dat.bz2
# %cd ../
# download the pretrained model
# %cd Face_Enhancement/
# !wget https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Face_Enhancement/checkpoints.zip
# !unzip checkpoints.zip
# %cd ../
# %cd Global/
# !wget https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Global/checkpoints.zip
# !unzip checkpoints.zip
# %cd ../
# ! pip install -r requirements.txt
# %cd /content/photo_restoration/
input_folder = "test_images/old"
output_folder = "output"
import os
basepath = os.getcwd()
input_path = os.path.join(basepath, input_folder)
output_path = os.path.join(basepath, output_folder)
if not os.path.exists(output_path):
os.mkdir(output_path)
# + [markdown] id="BvC6STmMVWPV"
# #◢ Block Two: run every time you select a new image. Click the play button at the bottom-left corner of the cell to start.
# + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "Ly8gQ29weXJpZ2h0IDIwMTcgR29vZ2xlIExMQwovLwovLyBMaWNlbnNlZCB1bmRlciB0aGUgQXBhY2hlIExpY2Vuc2UsIFZlcnNpb24gMi4wICh0aGUgIkxpY2Vuc2UiKTsKLy8geW91IG1heSBub3QgdXNlIHRoaXMgZmlsZSBleGNlcHQgaW4gY29tcGxpYW5jZSB3aXRoIHRoZSBMaWNlbnNlLgovLyBZb3UgbWF5IG9idGFpbiBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKLy8KLy8gICAgICBodHRwOi8vd3d3LmFwYWNoZS5vcmcvbGljZW5zZXMvTElDRU5TRS0yLjAKLy8KLy8gVW5sZXNzIHJlcXVpcmVkIGJ5IGFwcGxpY2FibGUgbGF3IG9yIGFncmVlZCB0byBpbiB3cml0aW5nLCBzb2Z0d2FyZQovLyBkaXN0cmlidXRlZCB1bmRlciB0aGUgTGljZW5zZSBpcyBkaXN0cmlidXRlZCBvbiBhbiAiQVMgSVMiIEJBU0lTLAovLyBXSVRIT1VUIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4KLy8gU2VlIHRoZSBMaWNlbnNlIGZvciB0aGUgc3BlY2lmaWMgbGFuZ3VhZ2UgZ292ZXJuaW5nIHBlcm1pc3Npb25zIGFuZAovLyBsaW1pdGF0aW9ucyB1bmRlciB0aGUgTGljZW5zZS4KCi8qKgogKiBAZmlsZW92ZXJ2aWV3IEhlbHBlcnMgZm9yIGdvb2dsZS5jb2xhYiBQeXRob24gbW9kdWxlLgogKi8KKGZ1bmN0aW9uKHNjb3BlKSB7CmZ1bmN0aW9uIHNwYW4odGV4dCwgc3R5bGVBdHRyaWJ1dGVzID0ge30pIHsKICBjb25zdCBlbGVtZW50ID0gZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnc3BhbicpOwogIGVsZW1lbnQudGV4dENvbnRlbnQgPSB0ZXh0OwogIGZvciAoY29uc3Qga2V5IG9mIE9iamVjdC5rZXlzKHN0eWxlQXR0cmlidXRlcykpIHsKICAgIGVsZW1lbnQuc3R5bGVba2V5XSA9IHN0eWxlQXR0cmlidXRlc1trZXldOwogIH0KICByZXR1cm4gZWxlbWVudDsKfQoKLy8gTWF4IG51bWJlciBvZiBieXRlcyB3aGljaCB3aWxsIGJlIHVwbG9hZGVkIGF0IGEgdGltZS4KY29uc3QgTUFYX1BBWUxPQURfU0laRSA9IDEwMCAqIDEwMjQ7CgpmdW5jdGlvbiBfdXBsb2FkRmlsZXMoaW5wdXRJZCwgb3V0cHV0SWQpIHsKICBjb25zdCBzdGVwcyA9IHVwbG9hZEZpbGVzU3RlcChpbnB1dElkLCBvdXRwdXRJZCk7CiAgY29uc3Qgb3V0cHV0RWxlbWVudCA9IGRvY3VtZW50LmdldEVsZW1lbnRCeUlkKG91dHB1dElkKTsKICAvLyBDYWNoZSBzdGVwcyBvbiB0aGUgb3V0cHV0RWxlbWVudCB0byBtYWtlIGl0IGF2YWlsYWJsZSBmb3IgdGhlIG5leHQgY2FsbAogIC8vIHRvIHVwbG9hZEZpbGVzQ29udGludWUgZnJvbSBQeXRob24uCiAgb3V0cHV0RWxlbWVudC5zdGVwcyA9IHN0ZXBzOwoKICByZXR1cm4gX3VwbG9hZEZpbGVzQ29udGludWUob3V0cHV0SWQpOwp9CgovLyBUaGlzIGlzIHJvdWdobHkgYW4gYXN5bmMgZ2VuZXJhdG9yIChub3Qgc3VwcG9ydGVkIGluIHRoZSBicm93c2VyIHlldCksCi8vIHdoZXJlIHRoZXJlIGFyZSBtdWx0aXBsZSBhc3luY2hyb25vdXMgc3RlcHMgYW5kIHRoZSBQeXRob24gc2lkZSBpcyBnb2luZwovLyB0byBwb2xsIGZvciBjb21wbGV0aW9uIG9mIGVhY2ggc3RlcC4KLy8gVGhpcyB1c2VzIGEgUHJvbWlzZSB0byBibG9jayB0aGUgcHl0aG9uIHNpZGUgb24gY29tcGxldGlvbiBvZiBlYWNoIHN0ZXAsCi8vIHRoZW4gcGFzc2VzIHRoZSByZXN1bHQgb2YgdGhlIHByZXZpb3VzIHN0ZXAgYXMgdGhlIGlucHV0IHRvIHRoZSBuZXh0IHN0ZXAuCmZ1bmN0aW9uIF91cGxvYWRGaWxlc0NvbnRpbnVlKG91dHB1dElkKSB7CiAgY29uc3Qgb3V0cHV0RWxlbWVudCA9IGRvY3VtZW50LmdldEVsZW1lbnRCeUlkKG91dHB1dElkKTsKICBjb25zdCBzdGVwcyA9IG91dHB1dEVsZW1lbnQuc3RlcHM7CgogIGNvbnN0IG5leHQgPSBzdGVwcy5uZXh0KG91dHB1dEVsZW1lbnQubGFzdFByb21pc2VWYWx1ZSk7CiAgcmV0dXJuIFByb21pc2UucmVzb2x2ZShuZXh0LnZhbHVlLnByb21pc2UpLnRoZW4oKHZhbHVlKSA9PiB7CiAgICAvLyBDYWNoZSB0aGUgbGFzdCBwcm9taXNlIHZhbHVlIHRvIG1ha2UgaXQgYXZhaWxhYmxlIHRvIHRoZSBuZXh0CiAgICAvLyBzdGVwIG9mIHRoZSBnZW5lcmF0b3IuCiAgICBvdXRwdXRFbGVtZW50Lmxhc3RQcm9taXNlVmFsdWUgPSB2YWx1ZTsKICAgIHJldHVybiBuZXh0LnZhbHVlLnJlc3BvbnNlOwogIH0pOwp9CgovKioKICogR2VuZXJhdG9yIGZ1bmN0aW9uIHdoaWNoIGlzIGNhbGxlZCBiZXR3ZWVuIGVhY2ggYXN5bmMgc3RlcCBvZiB0aGUgdXBsb2FkCiAqIHByb2Nlc3MuCiAqIEBwYXJhbSB7c3RyaW5nfSBpbnB1dElkIEVsZW1lbnQgSUQgb2YgdGhlIGlucHV0IGZpbGUgcGlja2VyIGVsZW1lbnQuCiAqIEBwYXJhbSB7c3RyaW5nfSBvdXRwdXRJZCBFbGVtZW50IElEIG9mIHRoZSBvdXRwdXQgZGlzcGxheS4KICogQHJldHVybiB7IUl0ZXJhYmxlPCFPYmplY3Q+fSBJdGVyYWJsZSBvZiBuZXh0IHN0ZXBzLgogKi8KZnVuY3Rpb24qIHVwbG9hZEZpbGVzU3RlcChpbnB1dElkLCBvdXRwdXRJZCkgewogIGNvbnN0IGlucHV0RWxlbWVudCA9IGRvY3VtZW50LmdldEVsZW1lbnRCeUlkKGlucHV0SWQpOwogIGlucHV0RWxlbWVudC5kaXNhYmxlZCA9IGZhbHNlOwoKICBjb25zdCBvdXRwdXRFbGVt
ZW50ID0gZG9jdW1lbnQuZ2V0RWxlbWVudEJ5SWQob3V0cHV0SWQpOwogIG91dHB1dEVsZW1lbnQuaW5uZXJIVE1MID0gJyc7CgogIGNvbnN0IHBpY2tlZFByb21pc2UgPSBuZXcgUHJvbWlzZSgocmVzb2x2ZSkgPT4gewogICAgaW5wdXRFbGVtZW50LmFkZEV2ZW50TGlzdGVuZXIoJ2NoYW5nZScsIChlKSA9PiB7CiAgICAgIHJlc29sdmUoZS50YXJnZXQuZmlsZXMpOwogICAgfSk7CiAgfSk7CgogIGNvbnN0IGNhbmNlbCA9IGRvY3VtZW50LmNyZWF0ZUVsZW1lbnQoJ2J1dHRvbicpOwogIGlucHV0RWxlbWVudC5wYXJlbnRFbGVtZW50LmFwcGVuZENoaWxkKGNhbmNlbCk7CiAgY2FuY2VsLnRleHRDb250ZW50ID0gJ0NhbmNlbCB1cGxvYWQnOwogIGNvbnN0IGNhbmNlbFByb21pc2UgPSBuZXcgUHJvbWlzZSgocmVzb2x2ZSkgPT4gewogICAgY2FuY2VsLm9uY2xpY2sgPSAoKSA9PiB7CiAgICAgIHJlc29sdmUobnVsbCk7CiAgICB9OwogIH0pOwoKICAvLyBXYWl0IGZvciB0aGUgdXNlciB0byBwaWNrIHRoZSBmaWxlcy4KICBjb25zdCBmaWxlcyA9IHlpZWxkIHsKICAgIHByb21pc2U6IFByb21pc2UucmFjZShbcGlja2VkUHJvbWlzZSwgY2FuY2VsUHJvbWlzZV0pLAogICAgcmVzcG9uc2U6IHsKICAgICAgYWN0aW9uOiAnc3RhcnRpbmcnLAogICAgfQogIH07CgogIGNhbmNlbC5yZW1vdmUoKTsKCiAgLy8gRGlzYWJsZSB0aGUgaW5wdXQgZWxlbWVudCBzaW5jZSBmdXJ0aGVyIHBpY2tzIGFyZSBub3QgYWxsb3dlZC4KICBpbnB1dEVsZW1lbnQuZGlzYWJsZWQgPSB0cnVlOwoKICBpZiAoIWZpbGVzKSB7CiAgICByZXR1cm4gewogICAgICByZXNwb25zZTogewogICAgICAgIGFjdGlvbjogJ2NvbXBsZXRlJywKICAgICAgfQogICAgfTsKICB9CgogIGZvciAoY29uc3QgZmlsZSBvZiBmaWxlcykgewogICAgY29uc3QgbGkgPSBkb2N1bWVudC5jcmVhdGVFbGVtZW50KCdsaScpOwogICAgbGkuYXBwZW5kKHNwYW4oZmlsZS5uYW1lLCB7Zm9udFdlaWdodDogJ2JvbGQnfSkpOwogICAgbGkuYXBwZW5kKHNwYW4oCiAgICAgICAgYCgke2ZpbGUudHlwZSB8fCAnbi9hJ30pIC0gJHtmaWxlLnNpemV9IGJ5dGVzLCBgICsKICAgICAgICBgbGFzdCBtb2RpZmllZDogJHsKICAgICAgICAgICAgZmlsZS5sYXN0TW9kaWZpZWREYXRlID8gZmlsZS5sYXN0TW9kaWZpZWREYXRlLnRvTG9jYWxlRGF0ZVN0cmluZygpIDoKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgJ24vYSd9IC0gYCkpOwogICAgY29uc3QgcGVyY2VudCA9IHNwYW4oJzAlIGRvbmUnKTsKICAgIGxpLmFwcGVuZENoaWxkKHBlcmNlbnQpOwoKICAgIG91dHB1dEVsZW1lbnQuYXBwZW5kQ2hpbGQobGkpOwoKICAgIGNvbnN0IGZpbGVEYXRhUHJvbWlzZSA9IG5ldyBQcm9taXNlKChyZXNvbHZlKSA9PiB7CiAgICAgIGNvbnN0IHJlYWRlciA9IG5ldyBGaWxlUmVhZGVyKCk7CiAgICAgIHJlYWRlci5vbmxvYWQgPSAoZSkgPT4gewogICAgICAgIHJlc29sdmUoZS50YXJnZXQucmVzdWx0KTsKICAgICAgfTsKICAgICAgcmVhZGVyLnJlYWRBc0FycmF5QnVmZmVyKGZpbGUpOwogICAgfSk7CiAgICAvLyBXYWl0IGZvciB0aGUgZGF0YSB0byBiZSByZWFkeS4KICAgIGxldCBmaWxlRGF0YSA9IHlpZWxkIHsKICAgICAgcHJvbWlzZTogZmlsZURhdGFQcm9taXNlLAogICAgICByZXNwb25zZTogewogICAgICAgIGFjdGlvbjogJ2NvbnRpbnVlJywKICAgICAgfQogICAgfTsKCiAgICAvLyBVc2UgYSBjaHVua2VkIHNlbmRpbmcgdG8gYXZvaWQgbWVzc2FnZSBzaXplIGxpbWl0cy4gU2VlIGIvNjIxMTU2NjAuCiAgICBsZXQgcG9zaXRpb24gPSAwOwogICAgZG8gewogICAgICBjb25zdCBsZW5ndGggPSBNYXRoLm1pbihmaWxlRGF0YS5ieXRlTGVuZ3RoIC0gcG9zaXRpb24sIE1BWF9QQVlMT0FEX1NJWkUpOwogICAgICBjb25zdCBjaHVuayA9IG5ldyBVaW50OEFycmF5KGZpbGVEYXRhLCBwb3NpdGlvbiwgbGVuZ3RoKTsKICAgICAgcG9zaXRpb24gKz0gbGVuZ3RoOwoKICAgICAgY29uc3QgYmFzZTY0ID0gYnRvYShTdHJpbmcuZnJvbUNoYXJDb2RlLmFwcGx5KG51bGwsIGNodW5rKSk7CiAgICAgIHlpZWxkIHsKICAgICAgICByZXNwb25zZTogewogICAgICAgICAgYWN0aW9uOiAnYXBwZW5kJywKICAgICAgICAgIGZpbGU6IGZpbGUubmFtZSwKICAgICAgICAgIGRhdGE6IGJhc2U2NCwKICAgICAgICB9LAogICAgICB9OwoKICAgICAgbGV0IHBlcmNlbnREb25lID0gZmlsZURhdGEuYnl0ZUxlbmd0aCA9PT0gMCA/CiAgICAgICAgICAxMDAgOgogICAgICAgICAgTWF0aC5yb3VuZCgocG9zaXRpb24gLyBmaWxlRGF0YS5ieXRlTGVuZ3RoKSAqIDEwMCk7CiAgICAgIHBlcmNlbnQudGV4dENvbnRlbnQgPSBgJHtwZXJjZW50RG9uZX0lIGRvbmVgOwoKICAgIH0gd2hpbGUgKHBvc2l0aW9uIDwgZmlsZURhdGEuYnl0ZUxlbmd0aCk7CiAgfQoKICAvLyBBbGwgZG9uZS4KICB5aWVsZCB7CiAgICByZXNwb25zZTogewogICAgICBhY3Rpb246ICdjb21wbGV0ZScsCiAgICB9CiAgfTsKfQoKc2NvcGUuZ29vZ2xlID0gc2NvcGUuZ29vZ2xlIHx8IHt9OwpzY29wZS5nb29nbGUuY29sYWIgPSBzY29wZS5nb29nbGUuY29sYWIgfHwge307CnNjb3BlLmdvb2dsZS5jb2xhYi5fZmlsZXMgPSB7CiAgX3VwbG9hZEZpbGVzLAogIF91cGxvYWRGaWxlc0NvbnRpbnVlLAp9Owp
9KShzZWxmKTsK", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 1000} id="l-gQku6A5jmu" outputId="ebb1a691-eb2f-4908-83b0-376f5137de37"
# !rm -rf /content/photo_restoration/test_images/old/*
# !rm -rf /content/photo_restoration/output/*
# %cd /content/photo_restoration/test_images/old/
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
review_text=(uploaded[fn])
# %cd /content/photo_restoration
image_name=str(fn)
parts = image_name.split(".")
print(parts[0])
print(image_name)
# !python run.py --input_folder /content/photo_restoration/test_images/old/ --output_folder /content/photo_restoration/output/ --GPU 0 --with_scratch
out_path='/content/photo_restoration/output/final_output/'+parts[0]+'.png'
in_path='/content/photo_restoration/test_images/old/'+image_name
import numpy as np
import PIL
#imi=PIL.Image.open(in_path)
#print("Original Image")
#imi
#download the enhanced image
files.download(out_path)
imo=PIL.Image.open(out_path)
print("Enhanced Image")
imo
| 11,606 |
/Learn_Python/Udemy_Curso_Basico/dev_09_zejercicio_01.ipynb | bf3c23c377fb665d9fdb126e6766930fd1188656 | [] | no_license | apizarroe/Personal_Code | https://github.com/apizarroe/Personal_Code | 0 | 1 | null | null | null | null | Jupyter Notebook | false | false | .py | 1,132 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# ---
# +
################## Exercise
# Create a module "modulo1.py"
# Add the class "Coche" created in a previous exercise to "modulo1"
# Add the lambda function "media" created in a previous exercise to "modulo1"
#
# Create a Python program "programa1.py"
# Import from the "modulo1" module created above
# Create an object "coche1" by instantiating the "Coche" class
# Show the car's characteristics with print
# Compute the average of 3 grades and show the result with print
#
# Run the program "programa.py" and see the result
# ## Linear
# For the output layer, when we want real values in $[-\infty, +\infty]$
# $ \large A(x) = x $
A = lambda x : x
plt.plot(X, A(X))
# ## Sigmoid
# Same as in logistic regression. Good for probabilities and scores in the output layer. Differentiable. Slow convergence. Vanishing-gradient problem:
# being in the interval $[0,1]$, gradients become smaller in the initial layers due to the chain rule applied when computing gradients.
# $ \large A(x) = \frac{1}{1 + e^{-x}} $
A = lambda x : 1/(1+ math.e**(-x))
plt.plot(X, A(X));
# ## Hyperbolic tangent
# Range -1 to 1. Sometimes better than sigmoid (Take care of Garson/Olden algorithms for importance extraction)
# $ \large A(x) = \frac{2}{1 + e^{-2x}} - 1 $
A = lambda x : 2/(1+ math.e**(-2*x)) - 1
plt.plot(X, A(X));
# ## Relu
# Interesting idea. Most popular in deep learning. Avoids activating all neurons at the same time. Dying-ReLU problem. Not differentiable at 0.
# $ \large A(x) = \begin{cases}
# y=0 & x < 0 \\
# y=x & otherwise \\
# \end{cases} $
A = lambda x : np.select([x<0, x>=0], [0 , x])
plt.plot(X, A(X));
# ## Leaky Relu
# Attempt to mitigate the dying-ReLU problem.
# $ \large A(x) = \begin{cases}
# y= \alpha x & x < 0 \\
# y=x & otherwise \\
# \end{cases} $
a = 0.1
A = lambda x : np.select([x<0, x>=0], [a*x , x])
plt.plot(X, A(X));
# ## Elu
# Same idea as ReLU, but still piecewise, so derivatives have to be calculated separately.
# $ \large A(x) = \begin{cases}
# y= \alpha (e^x-1) & x < 0 \\
# y=x & otherwise \\
# \end{cases} $
a = 1
A = lambda x : np.select([x<0, x>=0], [a*(math.e**(x)-1) , x])
plt.plot(X, A(X));
# ## Softplus
# The idea of ReLU, but smoothing the discontinuity makes it differentiable.
# $ \large A(x) = \log(1 + e^x) $
A = lambda x : math.log(1+ math.e**(x))
y = [A(x) for x in X]
plt.plot(X, y);
# ## Softmax
# Normalized exponential function. Takes a vector of real inputs and converts into probabilities
# Good for classification purposes.
# $ \large A(x)_i = \frac{e^{x_i}}{\sum_{j=1}^{J} e^{x_j}} $
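# Unlike the pointwise activations above, softmax maps a whole vector to a
# probability distribution, so it does not fit the same 1-D plot. A minimal
# numpy sketch (the max subtraction is a standard numerical-stability trick,
# not from the text):
def softmax(x):
    e = np.exp(x - np.max(x))  # shift by max to avoid overflow
    return e / e.sum()

print(softmax(np.array([1.0, 2.0, 3.0])))  # components sum to 1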
# ## Credits & Links
# https://towardsdatascience.com/complete-guide-of-activation-functions-34076e95d044
| 2,868 |
/Ekstraksi Fitur 2/.ipynb_checkpoints/Extraction Feature WIth Morphological-checkpoint.ipynb | 65eb3dc46aaa1d095b0ee10e67ab827b63e2a6c9 | [] | no_license | ragilhadi/pkl-textdetection-bpnn | https://github.com/ragilhadi/pkl-textdetection-bpnn | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 1,528,443 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt
def show_image(img):
fig = plt.figure(figsize=(10,10))
axes = fig.add_subplot(111)
axes.imshow(img, cmap='gray')
kernel = np.ones((5,5), dtype=np.uint8)
kernel
image = cv.imread('sudoku.jpg', 0)
show_image(image)
sobelx = cv.Sobel(image, cv.CV_64F, 0, 1)
show_image(sobelx)
sobely = cv.Sobel(image, cv.CV_64F, 1, 0)
show_image(sobely)
sobel_final = cv.addWeighted(sobelx, 0.5, sobely, 0.5, 0)
show_image(sobel_final)
sobel_morph = cv.morphologyEx(sobel_final, cv.MORPH_GRADIENT, kernel)
show_image(sobel_morph)
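# For reference, the morphological gradient is just dilation minus erosion;
# a small sketch verifying that with the same 5x5 kernel, applied to the
# grayscale image loaded above (an illustrative check, not from the original):
dilated = cv.dilate(image, kernel)
eroded = cv.erode(image, kernel)
show_image(cv.subtract(dilated, eroded))  # matches cv.morphologyEx(image, cv.MORPH_GRADIENT, kernel)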
toUrl": "", "userId": "09581223866808255815"}, "user_tz": -330} id="IWhJoYg0mBfX" outputId="9fdc5ae2-57e3-47ff-982b-5423dc49b544"
# %cd /content/drive/My Drive/sih
# + colab={} colab_type="code" id="VRzBAoQjmHP7"
# !python bin.py
# + colab={} colab_type="code" id="bcqU3qgxmJSy"
from keras.models import model_from_json
import joblib  # sklearn.externals.joblib was removed in modern scikit-learn
from collections import deque
class Trajectory:
def __init__(self,modelPath='architecture.json',weightsPath='weights.h5',scalerPath = "scaler.pkl"):
'''
modelPath to be like = 'architecture.json'
weightsPath to be like = 'weights.h5'
'''
json_file = open(modelPath, 'r')
loaded_model_json = json_file.read()
json_file.close()
self.model = model_from_json(loaded_model_json)
self.model.load_weights(weightsPath)
self._look_back = 5
self.scaler = joblib.load(scalerPath)
def predictValues(self,x,lenPred=1):
'''
x is the small sequence
shape should be (no. of rows,5,3)
-1,3
Look back is fixed to be 5 and
3 is the params in each sequence
lenPred Should be how many further points you want
'''
pred = []
x = np.array(x)
x = self.scaler.transform(x)
x.reshape(-1,5,3)
x = deque(x)
assert(len(x) == self._look_back)
for i in range(lenPred):
# print("x before",np.array(x).shape)
x_prime = self.model.predict(np.array(x).reshape((-1,self._look_back,3)))
x.popleft()
x.extend([x_prime])
# print("X_prime",x_prime.shape)
# print(x)
# print("x after",np.array(x).shape)
pred.append(x_prime)
pred = np.array(pred)
pred = pred.reshape((-1,3))
pred = self.scaler.inverse_transform(pred)
return pred
def predict(self,x,lenPred):
x = deque(x)
pred = []
for i in range(lenPred):
xTemp = self.predictValues(x)
x.popleft()
x.extend(xTemp)
pred.append(xTemp)
return pred
# + colab={} colab_type="code" id="ddBDJ76UnIfU"
import numpy as np
traj = Trajectory(modelPath="architecture.json",weightsPath="weights.h5",scalerPath = "scaler.pkl")
x = [[[-13954, -9455, 27],[-13925, -9255, 71],[-14265, -9055, 117],[-14615, -8853, 163],[-14974, -8651, 207]]]
x = np.array(x).reshape(-1,3)
# print(x)
val = traj.predict(x,20)
# + colab={"base_uri": "https://localhost:8080/", "height": 369} colab_type="code" executionInfo={"elapsed": 2114, "status": "ok", "timestamp": 1579957373548, "user": {"displayName": "Shivam Aggarwal", "photoUrl": "", "userId": "09581223866808255815"}, "user_tz": -330} id="9SQ1xhtMnYCl" outputId="2a9ec975-cf5b-4bf6-c7d6-64ba8e2a7978"
val
# + colab={"base_uri": "https://localhost:8080/", "height": 369} colab_type="code" executionInfo={"elapsed": 1320, "status": "ok", "timestamp": 1579957374933, "user": {"displayName": "Shivam Aggarwal", "photoUrl": "", "userId": "09581223866808255815"}, "user_tz": -330} id="dBbKdjSInfd3" outputId="4178e449-5e0b-43f9-b94f-efde1d8165a7"
val
# + colab={} colab_type="code" id="Wk7VA4ctsPpj"
from math import *
import math
# + colab={"base_uri": "https://localhost:8080/", "height": 70} colab_type="code" executionInfo={"elapsed": 1433, "status": "ok", "timestamp": 1579955779730, "user": {"displayName": "Shivam Aggarwal", "photoUrl": "", "userId": "09581223866808255815"}, "user_tz": -330} id="gjsY7XhZn0qR" outputId="4ea0d81b-c33f-44f6-dfa3-141e29a78219"
#Let Missile Location Coordinates
traj_x = -16494
traj_y = -7800
traj_z = 200
#Let prediction Location Coordinates
traj1_x =-16387.025
traj1_y = -9707.095
traj1_z = 80.613
#let Actual Cordinates of Air Object
traj_currx = -14974
traj_curry = -8651
traj_currz = 207
#Let Velocity of Target Shooting Missile
initial_velocity = 410
distance_x = traj_x - traj1_x
distance_y = traj_y - traj1_y
distance_z = traj_z - traj1_z
theta = math.atan(distance_x / distance_y)
phi = math.atan(distance_z / distance_x)
#Let Acceleration Due To Gravity is
g = 9.8
#Time needed by Object to move at predicted Position
Time_needed = 5
# If the time needed to reach the predicted point equals the time needed by the missile to hit the object, we can shoot at that moment.
Distance = sqrt((traj1_x - traj_x)**2 + (traj1_y - traj_y)**2 + (traj1_z - traj_z)**2)
print(Distance)
print(initial_velocity * math.cos(theta)*math.sin(phi)*Time_needed - (0.5*g*((Time_needed)**2)))
if abs(Distance - (initial_velocity * math.cos(theta)*math.sin(phi)*Time_needed - (0.5*g*((Time_needed)**2)))) <= 15:
print("We can shoot")
else:
print("Wait For the Next Interception")
print(initial_velocity * math.cos(theta)*math.sin(phi)*Time_needed - (0.5*g*((Time_needed)**2)))
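# A hedged refactor of the timing check above into a reusable helper
# (hypothetical function name; same formulas as the cell above):
def can_shoot(distance, v0, theta, phi, t, g=9.8, tol=15):
    """True when the missile's reach at time t is within tol of the target distance."""
    reach = v0 * math.cos(theta) * math.sin(phi) * t - 0.5 * g * t**2
    return abs(distance - reach) <= tol

print(can_shoot(Distance, initial_velocity, theta, phi, Time_needed))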
# + colab={} colab_type="code" id="_VEa0SrnsVPg"
| 5,751 |
/homework/Day_002_HW.ipynb | ce66bd7d434cadb246b5a9f30c04ddc7a3dea89a | [] | no_license | wtchenzf/2nd-ML100Days | https://github.com/wtchenzf/2nd-ML100Days | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 7,894 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Practice Time
# There are many ways to manipulate data; the upcoming marathon sessions will introduce the most commonly used operations. Before that, try to imagine: when seeing a dataset for the first time, what would you generally want to know?
#
# #### Ex: How to find the number of rows and columns, what the columns are, how many columns there are, how to slice out part of the data, and so on
#
# Once you are curious about the data, how do we achieve these goals through code?
#
# #### You can refer to this [introductory material](https://bookdata.readthedocs.io/en/latest/base/01_pandas.html#DataFrame-%E5%85%A5%E9%97%A8) or google it yourself
import os
import numpy as np
import pandas as pd
# set data_path
dir_data = './data/'
f_app = os.path.join('./data/','application_train.csv')
print('Path of read in data: %s' % (f_app))
app_train = pd.read_csv(f_app)
# ### If you have no ideas yet, start by answering the questions mentioned in the example above
# #### Number of rows and columns in the data
app_train.shape
# #### List all columns
app_train.columns
# #### Slice out part of the data
idx=app_train.iloc[0:10,0:5]
idx
# #### There are countless other data operations; what matters is the practical situation you face and the questions you want to ask. We will cover more examples throughout the marathon; a few more first-look operations are sketched below.
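# A hedged sketch of a few more common first-look operations, assuming the
# Home Credit application_train.csv schema (where a TARGET label column exists):
print(app_train.dtypes.value_counts())     # how many columns of each dtype
print(app_train.describe())                # summary statistics for numeric columns
print(app_train['TARGET'].value_counts())  # distribution of the label
print(app_train.isnull().sum().head(10))   # missing values per column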
| 1,010 |
/code/jl-files/STARAS-master/Networks/Sensibility_NetDensity/runs/20171112_215319_Sim_Watts_Density/Hex_Stack.ipynb | 5a0f640f52b50df14f7528c8211131c18ccdfdbf | [] | no_license | ccg-esb/MSW3D | https://github.com/ccg-esb/MSW3D | 0 | 0 | null | null | null | null | Jupyter Notebook | false | false | .py | 434,745 | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.15.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Hexbin and Stack Plotter
CC = 0.2
# We load the libraries and the data from the experiment
# +
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import statsmodels.api as sm
import os
if not os.path.exists("Figures"):
os.makedirs("Figures")
# -
mydata = np.genfromtxt("out_watts",delimiter="\t")
TotalDensity = [i[0] for i in mydata] #Total bacterial density
Suc = [i[1] for i in mydata] #Susceptible strain total density
Res = [i[2] for i in mydata] #Resistant strain total density
Rel_Freq = [i[2]/(i[1]+i[2]) for i in mydata] #Resistant Relative Frequency
Suc_win = [i[3] for i in mydata] #Number of wins of Susceptible strain
Res_win = [i[4] for i in mydata] #Number of wins of Resistant strain
R_S_winratio = [i[4]/(i[3]+i[4]) for i in mydata]#node-to-node Resistant WinRatio
Netdensity = [i[6] for i in mydata] #Network density
clustc = [i[7] for i in mydata] #Network clustering coefficient
clo_centr = [i[8] for i in mydata] #Closeness centrality
ant_deg = [i[9] for i in mydata] #Antibiotic source connectivity
bet_centr = [i[10] for i in mydata] #Betweeness centrality
NamesDict = {'TotDen':'Total Bacterial Density','SucDen': 'Susceptible Bacteria Density',
'ResDen': 'Resistant Bacteria Density','Rel_Freq': 'Resistant Relative Frequency',
'SucWin': 'Susceptible Node Winners', 'ResWin':'Resistant Node Winners', 'R_S_winratio':'Ratio of winners R/S',
'NetDen':'Network Density','ClustC':'Global Clustering Coefficient','CloCentr':'Closeness Centrality',
'ant_deg':'Connectivity of Antibiotic Source',
'BetCentr':'Betweenness Centrality'}
d = {'TotDen':TotalDensity,'SucDen':Suc,'ResDen':Res,'Rel_Freq':Rel_Freq,
'SucWin':Suc_win, 'ResWin':Res_win, 'R_S_winratio':R_S_winratio,
'NetDen':Netdensity,'ClustC':clustc,'CloCentr':clo_centr,'AntDeg':ant_deg,
'BetCentr':bet_centr}
df = pd.DataFrame(d)
# We generate the function for plotting many comparisons
def plot_comparison(colx,coly):
x = np.array(df[colx])
y = np.array(df[coly])
xmin = x.min()
xmax = x.max()
ymin = y.min()
ymax = y.max()
hb = plt.hexbin(df[colx], df[coly], gridsize=40,bins='log', cmap='inferno')
plt.axis([xmin, xmax, ymin, ymax])
plt.title("Effect of the "+NamesDict[colx]+" on the "+NamesDict[coly]+ " (NS_CC"+str(CC)+")",y=1.08)
plt.xlabel(NamesDict[colx])
plt.ylabel(NamesDict[coly])
cb = plt.colorbar(hb)
cb.set_label('log(counts)')
plt.savefig("Figures/"+"SenND_"+NamesDict[colx]+"_on_"+NamesDict[coly]+"_CC"+str(CC)+".png", bbox_inches='tight')
plt.show()
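# Since statsmodels is already imported as `sm`, a hedged variant (hypothetical
# function name) overlaying a lowess trend on the same hexbin; frac is an
# arbitrary smoothing choice:
def plot_comparison_with_trend(colx, coly, frac=0.3):
    x = np.array(df[colx])
    y = np.array(df[coly])
    hb = plt.hexbin(x, y, gridsize=40, bins='log', cmap='inferno')
    smooth = sm.nonparametric.lowess(y, x, frac=frac)  # returns (x, fitted) pairs sorted by x
    plt.plot(smooth[:, 0], smooth[:, 1], color='cyan', lw=2)
    plt.xlabel(NamesDict[colx])
    plt.ylabel(NamesDict[coly])
    plt.colorbar(hb, label='log(counts)')
    plt.show()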
# ### Network Density vs Total Bacterial Density
plot_comparison('NetDen','TotDen')
# ### Density vs R-Relative Frequency
plot_comparison('NetDen','Rel_Freq')
# ### Clustering Coefficent vs R-Relative Frequency
plot_comparison('ClustC','Rel_Freq')
# ### Relationship between Node Winners and S/R Ratio
plot_comparison('ResWin','Rel_Freq')
# ### Centrality measures
plot_comparison('CloCentr','TotDen')
plot_comparison('CloCentr','Rel_Freq')
# ## Analysis of Relative Frequency AND Density
def stackplot_column(column):
sorted_df = df.sort(column)
plt.stackplot(sorted_df[column],sorted_df.ResDen,sorted_df.SucDen,colors=['red','blue'])
plt.title("Effect of "+NamesDict[column]+" on the metapopulation",y=1.08)
plt.ylabel("Bacterial Density")
plt.xlabel(NamesDict[column])
plt.legend(['Resistant','Susceptible'])
plt.savefig("Figures/"+"SenND_StackPlot_"+NamesDict[column]+"_on_Populations_CC0.2.png")
plt.show()
# ### Effect of Net Density on Population
stackplot_column('NetDen')
# ### Effect of Clustering Coefficient on Populations
stackplot_column('ClustC')
# ### Effect of Closeness Centrality on populations
stackplot_column('CloCentr')
# ### Effect of Betweenness Centrality on populations
stackplot_column('BetCentr')
model = keras.models.Sequential([  # reconstruction of a truncated line: the model presumably starts with a Flatten input layer
    keras.layers.Flatten(input_shape=[28, 28], name='Input'),   # Input layer (flattens the 28x28 image into a feature vector)
    keras.layers.Dense(300, activation="relu", name='Hidden1'), # Hidden layer 1
    keras.layers.Dense(100, activation="relu", name='Hidden2'), # Hidden layer 2
    keras.layers.Dense(10, activation="softmax", name='Output') # Output layer (task-dependent; 10 here because there are 10 labels)
])
# + id="7-1yEWjkcRl-"
# Build the network's internal structure
keras.backend.clear_session()  # reset everything used so far
#np.random.seed(42)            # fixed seeds
tf.random.set_seed(42)         # seed for initializing the network's internal weights
# + id="pZQECFS6cX-D" colab={"base_uri": "https://localhost:8080/"} outputId="7a5fd7c6-bb6c-4816-e2b0-d23f7356d06f"
# Print the neural network's structure
model.summary()
# + id="gzgFYVPEcdLG" colab={"base_uri": "https://localhost:8080/", "height": 465} outputId="056f4c40-a958-4717-999f-f5e257f983e8"
#keras.utils.plot_model(model, "my_mnist_model.png", show_shapes=True)
tf.keras.utils.plot_model(model)
# + id="7_bSy92dchvz" colab={"base_uri": "https://localhost:8080/"} outputId="4990e1f4-eb03-4445-c1f8-8c16497e9544"
model.compile(loss="sparse_categorical_crossentropy", # loss function to be differentiated
              optimizer="sgd",                        # optimizer: stochastic gradient descent
              metrics=["accuracy"])                   # performance metric to monitor (not differentiated; only for tracking the task)

history = model.fit(X_train,                            # training set
                    y_train,                            # labels
                    epochs=30,                          # epochs: number of backpropagation passes
                    validation_data=(X_valid, y_valid)) # validation set (to evaluate performance)
# + id="IDn2PtLacnzm" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="3c23e65e-8529-499d-bf9c-916df8b16eaa"
hpd = pd.DataFrame(history.history)
hpd.plot()
plt.title('All performance metrics', c='r')
plt.grid(True)
plt.show()
# + id="kzLqRDXccp1J" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="9eb815fa-3b3a-43ea-c203-ba162ee41bf5"
def label_(numero_label):
    if numero_label == 0:
        label = 'Odd'
    if numero_label == 1:
        label = 'Even'
    return label

ii = 1000
pe = model.predict(X_test[ii][np.newaxis, :, :])
probabilidad = 'Classification: ' + str(pe.argmax())
plt.title(probabilidad, c='r')
plt.imshow(X_test[ii])
plt.show()
# + [markdown] id="bVAw0yYc1yc1"
# #**Deep Learning (functional model)**
# + id="Es7Ye4g_4MC2" colab={"base_uri": "https://localhost:8080/"} outputId="2e93f7f7-827b-4b04-c821-6d8ff9ed1a6e"
Q1 = 300
Q2 = 100
l1 = 1e-3  # regularization strength for L1
l2 = 1e-3  # regularization strength for L2

input = tf.keras.layers.Input(shape=(X_train.shape[1], X_train.shape[2]), name='Entrada')   # input of raw images
flatten = tf.keras.layers.Flatten(input_shape=(X_train.shape[1], X_train.shape[2]))(input)  # preprocessing layer for the input

# First layer of the neural network
h1 = tf.keras.layers.Dense(Q1,                 # number of neurons
                           activation='relu',  # activation function
                           name='h1',          # layer name
                           kernel_regularizer=tf.keras.regularizers.l1_l2(l1=l1, l2=l2)
                           )(flatten)

# Second layer of the neural network
h2 = tf.keras.layers.Dense(Q2,                 # number of neurons
                           activation='relu',  # activation function
                           name='h2',          # layer name
                           kernel_regularizer=tf.keras.regularizers.l1_l2(l1=l1, l2=l2)
                           )(h1)

output = tf.keras.layers.Dense(10, activation="softmax", name='outAMC')(h2)  # multiclass output (softmax for multiclass)
model_fun = tf.keras.Model(inputs=input, outputs=output)
model_fun.summary()
# + id="24Y9DTes5P_n"
# Compile the neural network
model_fun.compile(loss="sparse_categorical_crossentropy",                 # loss function (a list of losses only applies when there are multiple outputs)
                  optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3),  # optimizer configuration
                  metrics=["accuracy"]                                    # tracking metric: f1, precision, recall, crossentropy
                  )
# + id="d_FGddw95U3z" colab={"base_uri": "https://localhost:8080/"} outputId="c736586e-dab2-41ea-a0af-6876ccd87924"
history = model_fun.fit(X_train, y_train,                  # training set
                        epochs=30,                         # number of epochs
                        batch_size=32,                     # batch size: 32, 64, 128, 256
                        validation_data=(X_valid, y_valid) # validation set
                        )
# + id="Nz47yfFPRCCm" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="a2b4deac-834a-4630-c618-d5bcea9ccbf9"
hpd = pd.DataFrame(history.history)
hpd.plot()
plt.title('All performance metrics', c='r')
plt.grid(True)
plt.show()
# + id="XAVO6eOZ7Xlc"
model_PCA = tf.keras.Model(inputs=input, outputs=model_fun.get_layer('h2').output)
tf.keras.utils.plot_model(model_PCA)
ReductionDimension_PCA = model_PCA.predict(X_train)
# + id="aSckNGes8ZxK" colab={"base_uri": "https://localhost:8080/", "height": 268} outputId="da16532b-722d-44f7-a5c4-977e70a06e47"
pca = PCA(n_components = 2)
tranformed_pca = pca.fit_transform(ReductionDimension_PCA)
plt.figure()
plt.scatter(tranformed_pca[:,0], tranformed_pca[:,1], s=1, c=y_train)
plt.colorbar()
plt.xticks(c='r')
plt.yticks(c='r')
plt.show()
# + id="qxVNtrLi8tu4"
pred_ = np.round(model_fun.predict(X_test))
pred_NN = np.array([x.argmax() for x in pred_])
# + id="kd4ghhBl8x3r" colab={"base_uri": "https://localhost:8080/"} outputId="a5cc7e71-8484-49ef-9665-5c3304144c56"
print("Accuracy :", accuracy_score(y_test, pred_NN))
print(confusion_matrix(list(y_test), list(pred_NN)))
print(classification_report(y_test, pred_NN, target_names=class_names))
| 10,955 |