Columns: markdown (string, 0–1.02M chars), code (string, 0–832k chars), output (string, 0–1.02M chars), license (string, 3–36 chars), path (string, 6–265 chars), repo_name (string, 6–127 chars)
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv)

Step 3. Assign it to a variable called apple
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
Step 4. Check out the type of the columns
apple.dtypes
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
Step 5. Transform the Date column into a datetime type
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
Step 6. Set the date as the index
apple = apple.set_index('Date')
apple.head()
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
Step 7. Are there any duplicate dates?
# NO! All are unique
apple.index.is_unique
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
Step 8. Oops... it seems the index is sorted with the most recent date first. Make the first entry the oldest date.
apple = apple.sort_index(ascending=True)
apple.head()
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
Step 9. Get the last business day of each month
apple_month = apple.resample('BM').mean()
apple_month.head()
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
Step 10. What is the difference in days between the most recent date and the oldest date?
(apple.index.max() - apple.index.min()).days
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
Step 11. How many months of data do we have?
apple_months = apple.resample('BM').mean()
len(apple_months.index)
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
# make the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title="Apple Stock")

# change the size of the figure
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise
**Colab RDP**: Remote Desktop to a Colab instance, using Google Remote Desktop and an Ngrok tunnel.

> **Warning: not for cryptocurrency mining.**
> **Why are hardware resources such as T4 GPUs not available to me?** The best available hardware is prioritized for users who use Colaboratory interactively rather than for long-running computations. Users who run long-running computations may be temporarily restricted in the type of hardware made available to them and/or the duration for which it can be used. We encourage users with high computational needs to use Colaboratory's UI with a local runtime. Note that using Colaboratory for cryptocurrency mining is disallowed entirely and may result in being banned from Colab altogether.

Google Colab can give free users an instance with 12 GB of RAM and a GPU for up to 12 hours, which anyone can use to perform heavy tasks. For other similar notebooks, see the repository **[Colab Hacks](https://github.com/PradyumnaKrishna/Colab-Hacks)**.
#@title **Create User** #@markdown Enter Username and Password username = "user" #@param {type:"string"} password = "root" #@param {type:"string"} print("Creating User and Setting it up") # Creation of user ! sudo useradd -m $username &> /dev/null # Add user to sudo group ! sudo adduser $username sudo &> /dev/null # Set password of user to 'root' ! echo '$username:$password' | sudo chpasswd # Change default shell from sh to bash ! sed -i 's/\/bin\/sh/\/bin\/bash/g' /etc/passwd print("User Created and Configured") #@title **RDP** #@markdown It takes 4-5 minutes for installation #@markdown Visit http://remotedesktop.google.com/headless and Copy the command after authentication CRP = "" #@param {type:"string"} def CRD(): with open('install.sh', 'w') as script: script.write("""#! /bin/bash b='\033[1m' r='\E[31m' g='\E[32m' c='\E[36m' endc='\E[0m' enda='\033[0m' printf "\n\n$c$b Loading Installer $endc$enda" >&2 if sudo apt-get update &> /dev/null then printf "\r$g$b Installer Loaded $endc$enda\n" >&2 else printf "\r$r$b Error Occured $endc$enda\n" >&2 exit fi printf "\n$g$b Installing Chrome Remote Desktop $endc$enda" >&2 { wget https://dl.google.com/linux/direct/chrome-remote-desktop_current_amd64.deb sudo dpkg --install chrome-remote-desktop_current_amd64.deb sudo apt install --assume-yes --fix-broken } &> /dev/null && printf "\r$c$b Chrome Remote Desktop Installed $endc$enda\n" >&2 || { printf "\r$r$b Error Occured $endc$enda\n" >&2; exit; } sleep 3 printf "$g$b Installing Desktop Environment $endc$enda" >&2 { sudo DEBIAN_FRONTEND=noninteractive \ apt install --assume-yes xfce4 desktop-base sudo bash -c 'echo "exec /etc/X11/Xsession /usr/bin/xfce4-session" > /etc/chrome-remote-desktop-session' sudo apt install --assume-yes xscreensaver sudo systemctl disable lightdm.service } &> /dev/null && printf "\r$c$b Desktop Environment Installed $endc$enda\n" >&2 || { printf "\r$r$b Error Occured $endc$enda\n" >&2; exit; } sleep 3 printf "$g$b Installing Google Chrome $endc$enda" >&2 { wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb sudo dpkg --install google-chrome-stable_current_amd64.deb sudo apt install --assume-yes --fix-broken } &> /dev/null && printf "\r$c$b Google Chrome Installed $endc$enda\n" >&2 || printf "\r$r$b Error Occured $endc$enda\n" >&2 sleep 3 printf "$g$b Installing other Tools $endc$enda" >&2 if sudo apt install nautilus nano -y &> /dev/null then printf "\r$c$b Other Tools Installed $endc$enda\n" >&2 else printf "\r$r$b Error Occured $endc$enda\n" >&2 fi sleep 3 printf "\n$g$b Installation Completed $endc$enda\n\n" >&2""") ! chmod +x install.sh ! ./install.sh # Adding user to CRP group ! sudo adduser $username chrome-remote-desktop &> /dev/null # Finishing Work ! su - $username -c """$CRP""" print("Finished Succesfully") try: if username: if CRP == "" : print("Please enter authcode from the given link") else: CRD() except NameError: print("username variable not found") print("Create a User First") #@title **Google Drive Mount** #@markdown Google Drive used as Persistance HDD for files.<br> #@markdown Mounted at `user` Home directory inside drive folder #@markdown (If `username` variable not defined then use root as default). def MountGDrive(): from google.colab import drive ! 
runuser -l $user -c "yes | python3 -m pip install --user google-colab" > /dev/null 2>&1 mount = """from os import environ as env from google.colab import drive env['CLOUDSDK_CONFIG'] = '/content/.config' drive.mount('{}')""".format(mountpoint) with open('/content/mount.py', 'w') as script: script.write(mount) ! runuser -l $user -c "python3 /content/mount.py" try: if username: mountpoint = "/home/"+username+"/drive" user = username except NameError: print("username variable not found, mounting at `/content/drive' using `root'") mountpoint = '/content/drive' user = 'root' MountGDrive() #@title **SSH** (Using NGROK) ! pip install colab_ssh --upgrade &> /dev/null from colab_ssh import launch_ssh, init_git from IPython.display import clear_output #@markdown Copy authtoken from https://dashboard.ngrok.com/auth ngrokToken = "" #@param {type:'string'} def runNGROK(): launch_ssh(ngrokToken, password) clear_output() print("ssh", username, end='@') ! curl -s http://localhost:4040/api/tunnels | python3 -c \ "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'][6:].replace(':', ' -p '))" try: if username: pass elif password: pass except NameError: print("No user found using username and password as 'root'") username='root' password='root' if ngrokToken == "": print("No ngrokToken Found, Please enter it") else: runNGROK() #@title Package Installer { vertical-output: true } run = False #@param {type:"boolean"} #@markdown *Package management actions (gasp)* action = "Install" #@param ["Install", "Check Installed", "Remove"] {allow-input: true} package = "wget" #@param {type:"string"} system = "apt" #@param ["apt", ""] def install(package=package, system=system): if system == "apt": !apt --fix-broken install > /dev/null 2>&1 !killall apt > /dev/null 2>&1 !rm /var/lib/dpkg/lock-frontend !dpkg --configure -a > /dev/null 2>&1 !apt-get install -o Dpkg::Options::="--force-confold" --no-install-recommends -y $package !dpkg --configure -a > /dev/null 2>&1 !apt update > /dev/null 2>&1 !apt install $package > /dev/null 2>&1 def check_installed(package=package, system=system): if system == "apt": !apt list --installed | grep $package def remove(package=package, system=system): if system == "apt": !apt remove $package if run: if action == "Install": install() if action == "Check Installed": check_installed() if action == "Remove": remove() #@title **Colab Shutdown** #@markdown To Kill NGROK Tunnel NGROK = False #@param {type:'boolean'} #@markdown To Unmount GDrive GDrive = False #@param {type:'boolean'} #@markdown To Sleep Colab Sleep = False #@param {type:'boolean'} if NGROK: ! killall ngrok if GDrive: with open('/content/unmount.py', 'w') as unmount: unmount.write("""from google.colab import drive drive.flush_and_unmount()""") try: if user: ! runuser $user -c 'python3 /content/unmount.py' except NameError: print("Google Drive not Mounted") if Sleep: ! sleep 43200
_____no_output_____
MIT
Colab RDP/Colab RDP.ipynb
Apon77/Colab-Hacks
Dev: original method
# you need to download these from the cellphonedb website / github and replace the path accordingly
dat = 'C:/Users/Stefan/Downloads/cellphonedb_example_data/example_data/'
metafile = dat + 'test_meta.txt'
countfile = dat + 'test_counts.txt'

statistical_analysis(meta_filename=metafile, counts_filename=countfile)
pd.read_csv('./out/pvalues.csv')
_____no_output_____
MIT
scanpy_cellphonedb.ipynb
stefanpeidli/cellphonedb
Scanpy API test on the official cellphonedb example data
# you need to download these from the cellphonedb website / github and replace the path accordingly
dat = 'C:/Users/Stefan/Downloads/cellphonedb_example_data/example_data/'
metafile = dat + 'test_meta.txt'
countfile = dat + 'test_counts.txt'

bdata = sc.AnnData(pd.read_csv(countfile, sep='\t', index_col=0).values.T,
                   obs=pd.read_csv(metafile, sep='\t', index_col=0),
                   var=pd.DataFrame(index=pd.read_csv(countfile, sep='\t', index_col=0).index.values))

outs = cphdb.statistical_analysis_scanpy(bdata, bdata.var_names, bdata.obs_names, 'cell_type')
outs['pvalues']

# the output is also saved to bdata.uns['cellphonedb_output']
bdata
_____no_output_____
MIT
scanpy_cellphonedb.ipynb
stefanpeidli/cellphonedb
Crop XML: manually change the line and sample values in the XML to match (n_lines, n_samples)
eis_xml = expatbuilder.parse(eis_xml_filename, False)
eis_dom = eis_xml.getElementsByTagName("File_Area_Observational").item(0)
dom_lines = eis_dom.getElementsByTagName("Axis_Array").item(0)
dom_samples = eis_dom.getElementsByTagName("Axis_Array").item(1)
dom_lines = dom_lines.getElementsByTagName("elements")[0]
dom_samples = dom_samples.getElementsByTagName("elements")[0]
total_lines = int(dom_lines.childNodes[0].data)
total_samples = int(dom_samples.childNodes[0].data)
total_lines, total_samples
_____no_output_____
CC0-1.0
isis/notebooks/crop_eis.ipynb
gknorman/ISIS3
crop image
dn_size_bytes = 4 # number of bytes per DN n_lines = 60 # how many to crop to n_samples = 3 start_line = 1200 # point to start crop from start_sample = 1200 image_offset = (start_line*total_samples + start_sample) * dn_size_bytes line_length = n_samples * dn_size_bytes buffer_size = n_lines * total_samples * dn_size_bytes with open(eis_filename, 'rb') as f: f.seek(image_offset) b_image_data = f.read() b_image_data = np.frombuffer(b_image_data[:buffer_size], dtype=np.uint8) b_image_data.shape b_image_data = np.reshape(b_image_data, (n_lines, total_samples, dn_size_bytes) ) b_image_data.shape b_image_data = b_image_data[:,:n_samples,:] b_image_data.shape image_data = [] for i in range(n_lines): image_sample = [] for j in range(n_samples): dn_bytes = bytearray(b_image_data[i,j,:]) dn = struct.unpack( "<f", dn_bytes ) image_sample.append(dn) image_data.append(image_sample) image_data = np.array(image_data) image_data.shape plt.figure(figsize=(10,10)) plt.imshow(image_data, vmin=0, vmax=1) crop = "_cropped" image_fn, image_ext = os.path.splitext(eis_filename) mini_image_fn = image_fn + crop + image_ext mini_image_bn = os.path.basename(mini_image_fn) if os.path.exists(mini_image_fn): os.remove(mini_image_fn) with open(mini_image_fn, 'ab+') as f: b_reduced_image_data = image_data.tobytes() f.write(b_reduced_image_data)
_____no_output_____
CC0-1.0
isis/notebooks/crop_eis.ipynb
gknorman/ISIS3
crop times csv table
import pandas as pd # assumes csv file has the same filename with _times appended eis_csv_fn = image_fn + "_times.csv" df1 = pd.read_csv(eis_csv_fn) df1 x = np.array(df1) y = x[:n_lines, :] df = pd.DataFrame(y) df crop = "_cropped" csv_fn, csv_ext = os.path.splitext(eis_csv_fn) crop_csv_fn = csv_fn + crop + csv_ext crop_csv_bn = os.path.basename(crop_csv_fn) crop_csv_bn # write to file if os.path.exists(crop_csv_fn): os.remove(crop_csv_fn) df.to_csv( crop_csv_fn, header=False, index=False )
_____no_output_____
CC0-1.0
isis/notebooks/crop_eis.ipynb
gknorman/ISIS3
Cryptocurrency Clusters
%matplotlib inline

# import dependencies
from pathlib import Path
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
_____no_output_____
ADSL
unsupervised ML crypto.ipynb
dmtiblin/UR-Unsupervised-Machine-Learning-Challenge
Data Preparation
#read data in using pandas df = pd.read_csv('Resources/crypto_data.csv') df.head(10) df.dtypes #Discard all cryptocurrencies that are not being traded.In other words, filter for currencies that are currently being traded. myfilter = (df['IsTrading'] == True) trading_df = df.loc[myfilter] trading_df = trading_df.drop('IsTrading', axis = 1) trading_df #Once you have done this, drop the IsTrading column from the dataframe. #Remove all rows that have at least one null value. trading_df.dropna(how = 'any', inplace = True) trading_df #Filter for cryptocurrencies that have been mined. That is, the total coins mined should be greater than zero. myfilter2 = (trading_df['TotalCoinsMined'] >0) final_df = trading_df.loc[myfilter2] final_df #In order for your dataset to be comprehensible to a machine learning algorithm, its data should be numeric. #Since the coin names do not contribute to the analysis of the data, delete the CoinName from the original dataframe. CoinName = final_df['CoinName'] Ticker = final_df['Unnamed: 0'] final_df = final_df.drop(['Unnamed: 0','CoinName'], axis = 1) final_df # convert the remaining features with text values, Algorithm and ProofType, into numerical data. #To accomplish this task, use Pandas to create dummy variables. final_df['TotalCoinSupply'] = final_df['TotalCoinSupply'].astype(float) final_df = pd.get_dummies(final_df) final_df
_____no_output_____
ADSL
unsupervised ML crypto.ipynb
dmtiblin/UR-Unsupervised-Machine-Learning-Challenge
Examine the number of rows and columns of your dataset now. How did they change? There were 71 unique algorithms and 25 unique proof types, so the dataset now has 98 features, which is quite large.
# Standardize the dataset so that columns that contain larger values do not unduly influence the outcome
scaled_data = StandardScaler().fit_transform(final_df)
scaled_data
_____no_output_____
ADSL
unsupervised ML crypto.ipynb
dmtiblin/UR-Unsupervised-Machine-Learning-Challenge
Dimensionality Reduction Creating dummy variables above dramatically increased the number of features in your dataset. Perform dimensionality reduction with PCA. Rather than specify the number of principal components when you instantiate the PCA model, it is possible to state the desired explained variance. For this project, preserve 90% of the explained variance in dimensionality reduction. How did the number of the features change?
# Applying PCA to reduce dimensions
# Initialize the PCA model, keeping enough components to preserve 90% of the explained variance
pca = PCA(.90)
data_pca = pca.fit_transform(scaled_data)
pca.explained_variance_ratio_

# DataFrame with the principal components (columns)
pd.DataFrame(data_pca)
_____no_output_____
ADSL
unsupervised ML crypto.ipynb
dmtiblin/UR-Unsupervised-Machine-Learning-Challenge
Next, further reduce the dataset dimensions with t-SNE and visually inspect the results. In order to accomplish this task, run t-SNE on the principal components: the output of the PCA transformation. Then create a scatter plot of the t-SNE output. Observe whether there are distinct clusters or not.
# Initialize t-SNE model tsne = TSNE(learning_rate=35) # Reduce dimensions tsne_features = tsne.fit_transform(data_pca) # The dataset has 2 columns tsne_features.shape # Prepare to plot the dataset # Visualize the clusters plt.scatter(tsne_features[:,0], tsne_features[:,1]) plt.show()
_____no_output_____
ADSL
unsupervised ML crypto.ipynb
dmtiblin/UR-Unsupervised-Machine-Learning-Challenge
Cluster Analysis with k-Means Create an elbow plot to identify the best number of clusters.
#Use a for-loop to determine the inertia for each k between 1 through 10. #Determine, if possible, where the elbow of the plot is, and at which value of k it appears. inertia = [] k = list(range(1, 11)) for i in k: km = KMeans(n_clusters=i) km.fit(data_pca) inertia.append(km.inertia_) # Define a DataFrame to plot the Elbow Curve elbow_data = {"k": k, "inertia": inertia} df_elbow = pd.DataFrame(elbow_data) plt.plot(df_elbow['k'], df_elbow['inertia']) plt.xticks(range(1,11)) plt.xlabel('Number of clusters') plt.ylabel('Inertia') plt.show() # Initialize the K-Means model model = KMeans(n_clusters=10, random_state=0) # Train the model model.fit(scaled_data) # Predict clusters predictions = model.predict(scaled_data) # Create return DataFrame with predicted clusters final_df["cluster"] = pd.Series(model.labels_) plt.figure(figsize = (18,12)) plt.scatter(final_df['TotalCoinsMined'], final_df['TotalCoinSupply'], c=final_df['cluster']) plt.xlabel('TotalCoinsMined') plt.ylabel('TotalCoinSupply') plt.show() plt.figure(figsize = (18,12)) plt.scatter(final_df['TotalCoinsMined'], final_df['TotalCoinSupply'], c=final_df['cluster']) plt.xlabel('TotalCoinsMined') plt.ylabel('TotalCoinSupply') plt.xlim([0, 250000000]) plt.ylim([0, 250000000]) plt.show()
_____no_output_____
ADSL
unsupervised ML crypto.ipynb
dmtiblin/UR-Unsupervised-Machine-Learning-Challenge
Our best model: CatBoost with a learning rate of 0.7 and 180 iterations. It was trained on 10 files of the data with a similar distribution of the feature user_target_recs (among the number of rows of each feature value). We received an AUC of 0.845 on the Kaggle leaderboard.

Mount Drive
from google.colab import drive
drive.mount("/content/drive")
Mounted at /content/drive
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Installations and Imports
# !pip install scikit-surprise !pip install catboost # !pip install xgboost import os import pandas as pd # import xgboost # from xgboost import XGBClassifier # import pickle import catboost from catboost import CatBoostClassifier
_____no_output_____
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Global Parameters and Methods
home_path = "/content/drive/MyDrive/RS_Kaggle_Competition" def get_train_files_paths(path): dir_paths = [ os.path.join(path, dir_name) for dir_name in os.listdir(path) if dir_name.startswith("train")] file_paths = [] for dir_path in dir_paths: curr_dir_file_paths = [ os.path.join(dir_path, file_name) for file_name in os.listdir(dir_path) ] file_paths.extend(curr_dir_file_paths) return file_paths train_file_paths = get_train_files_paths(home_path)
_____no_output_____
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Get Data
def get_df_of_many_files(paths_list, number_of_files): for i in range(number_of_files): path = paths_list[i] curr_df = pd.read_csv(path) if i == 0: df = curr_df else: df = pd.concat([df, curr_df]) return df sample_train_data = get_df_of_many_files(train_file_paths[-21:], 10) # sample_train_data = pd.read_csv(home_path + "/10_files_train_data") sample_val_data = get_df_of_many_files(train_file_paths[-10:], 3) # sample_val_data = pd.read_csv(home_path+"/3_files_val_data") # sample_val_data.to_csv(home_path+"/3_files_val_data")
_____no_output_____
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Preprocess data
train_data = sample_train_data.fillna("Unknown") val_data = sample_val_data.fillna("Unknown") train_data import gc del sample_val_data del sample_train_data gc.collect()
_____no_output_____
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Scale columns
# scale columns from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import MinMaxScaler scaling_cols= ["empiric_calibrated_recs", "empiric_clicks", "empiric_calibrated_recs", "user_recs", "user_clicks", "user_target_recs"] scaler = StandardScaler() train_data[scaling_cols] = scaler.fit_transform(train_data[scaling_cols]) val_data[scaling_cols] = scaler.transform(val_data[scaling_cols]) train_data val_data = val_data.drop(columns=["Unnamed: 0.1"]) val_data
_____no_output_____
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Explore Data
sample_train_data test_data from collections import Counter user_recs_dist = test_data["user_recs"].value_counts(normalize=True) top_user_recs_count = user_recs_dist.nlargest(200) print(top_user_recs_count) percent = sum(top_user_recs_count.values) percent print(sample_train_data["user_recs"].value_counts(normalize=False)) print(test_data["user_recs"].value_counts()) positions = top_user_recs_count def sample(obj, replace=False, total=1500000): return obj.sample(n=int(positions[obj.name] * total), replace=replace) sample_train_data_filtered = sample_train_data[sample_train_data["user_recs"].isin(positions.keys())] result = sample_train_data_filtered.groupby("user_recs").apply(sample).reset_index(drop=True) result["user_recs"].value_counts(normalize=True) top_user_recs_train_data = result top_user_recs_train_data not_top_user_recs_train_data = sample_train_data[~sample_train_data["user_recs"].isin(top_user_recs_train_data["user_recs"].unique())] not_top_user_recs_train_data["user_recs"].value_counts() train_data = pd.concat([top_user_recs_train_data, not_top_user_recs_train_data]) train_data["user_recs"].value_counts(normalize=False) train_data = train_data.drop(columns = ["user_id_hash"]) train_data = train_data.fillna("Unknown") train_data
_____no_output_____
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Train the model
from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score from sklearn import metrics X_train = train_data.drop(columns=["is_click"], inplace=False) y_train = train_data["is_click"] X_val = val_data.drop(columns=["is_click"], inplace=False) y_val = val_data["is_click"] from catboost import CatBoostClassifier # cat_features_inds = [1,2,3,4,7,8,12,13,14,15,17,18] encode_cols = [ "user_id_hash", "target_id_hash", "syndicator_id_hash", "campaign_id_hash", "target_item_taxonomy", "placement_id_hash", "publisher_id_hash", "source_id_hash", "source_item_type", "browser_platform", "country_code", "region", "gmt_offset"] # model = CatBoostClassifier(iterations = 50, learning_rate=0.5, task_type='CPU', loss_function='Logloss', cat_features = encode_cols) model = CatBoostClassifier(iterations = 180, learning_rate=0.7, task_type='CPU', loss_function='Logloss', cat_features = encode_cols, eval_metric='AUC')#, depth=6, l2_leaf_reg= 10) """ All of our tries with catboost (only the best of them were uploaded to kaggle): results: all features, all rows of train fillna = Unknown logloss 100 iterations learning rate 0.5 10 files: 0.857136889762303 | bestTest = 0.4671640673 0.857136889762303 logloss 100 iterations learning rate 0.4 10 files: bestTest = 0.4676805926 0.856750110976787 logloss 100 iterations learning rate 0.55 10 files: bestTest = 0.4669830858 0.8572464626142212 logloss 120 iterations learning rate 0.6 10 files: bestTest = 0.4662084678 0.8577564702279399 logloss 150 iterations learning rate 0.7 10 files: bestTest = 0.4655981391 0.8581645278496352 logloss 180 iterations learning rate 0.7 10 files: bestTest = 0.4653168207 0.8583423138228865 !!!!!!!!!! logloss 180 iterations learning rate 0.7 10 files day extracted from date (not as categorical): 0.8583034988 logloss 180 iterations learning rate 0.7 10 files day extracted from date (as categorical): 0.8583014151 logloss 180 iterations learning rate 0.75 10 files day extracted from date (as categorical): 0.8582889749 logloss 180 iterations learning rate 0.65 10 files day extracted from date (as categorical): 0.8582334254 logloss 180 iterations learning rate 0.65 10 files day extracted from date (as categorical) StandardScaler: 0.8582101013 logloss 180 iterations learning rate 0.7 10 files day extracted from date (as categorical) MinMaxScaler dropna: ~0.8582 logloss 180 iterations learning rate 0.7 distributed data train and val, day extracted as categorical MinMaxScaler: 0.8561707 logloss 180 iterations learning rate 0.7 distributed data train and val, day extracted as not categorical no scale: 0.8561707195 logloss 180 iterations learning rate 0.7 distributed data train and val, no scale no date: 0.8559952294 logloss 180 iterations learning rate 0.7 distributed data train and val, day extracted as not categorical no scale with date: 0.8560461554 logloss 180 iterations learning rate 0.7, 9 times distributed data train and val, no user no date: 0.8545560094 logloss 180 iterations learning rate 0.7, 9 times distributed data train and val, with user and numeric day: 0.8561601034 logloss 180 iterations learning rate 0.7, 9 times distributed data train and val, with user with numeric date: 0.8568834122 logloss 180 iterations learning rate 0.7, 10 different files, scaled, all features: 0.8584829166 !!!!!!! 
logloss 180 iterations learning rate 0.7, new data, scaled, all features: 0.8915972905 test: 0.84108 logloss 180 iterations learning rate 0.9 10 files: bestTest = 0.4656462845 logloss 100 iterations learning rate 0.5 8 files: 0.8568031111965864 logloss 300 iterations learning rate 0.5: crossentropy 50 iterations learning rate 0.5: 0.8556282878645277 """ model.fit(X_train, y_train, eval_set=(X_val, y_val), verbose=10)
0: test: 0.8149026 best: 0.8149026 (0) total: 6.36s remaining: 18m 57s 10: test: 0.8461028 best: 0.8461028 (10) total: 53.6s remaining: 13m 44s 20: test: 0.8490288 best: 0.8490288 (20) total: 1m 38s remaining: 12m 26s 30: test: 0.8505695 best: 0.8505695 (30) total: 2m 23s remaining: 11m 29s 40: test: 0.8514950 best: 0.8514950 (40) total: 3m 8s remaining: 10m 38s 50: test: 0.8522340 best: 0.8522340 (50) total: 3m 53s remaining: 9m 50s 60: test: 0.8526374 best: 0.8526374 (60) total: 4m 37s remaining: 9m 70: test: 0.8531463 best: 0.8531463 (70) total: 5m 22s remaining: 8m 14s 80: test: 0.8534035 best: 0.8534035 (80) total: 6m 6s remaining: 7m 27s 90: test: 0.8536159 best: 0.8536567 (89) total: 6m 51s remaining: 6m 42s 100: test: 0.8537674 best: 0.8537674 (100) total: 7m 35s remaining: 5m 56s 110: test: 0.8539636 best: 0.8539636 (110) total: 8m 19s remaining: 5m 10s 120: test: 0.8541628 best: 0.8541628 (120) total: 9m 3s remaining: 4m 25s 130: test: 0.8542642 best: 0.8542642 (130) total: 9m 48s remaining: 3m 39s 140: test: 0.8543702 best: 0.8543800 (137) total: 10m 31s remaining: 2m 54s 150: test: 0.8544469 best: 0.8544550 (149) total: 11m 15s remaining: 2m 9s 160: test: 0.8543904 best: 0.8545011 (158) total: 11m 59s remaining: 1m 24s 170: test: 0.8543992 best: 0.8545011 (158) total: 12m 43s remaining: 40.2s 179: test: 0.8544623 best: 0.8545011 (158) total: 13m 23s remaining: 0us bestTest = 0.8545011269 bestIteration = 158 Shrink model to first 159 iterations.
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Submission File
test_data = pd.read_csv("/content/drive/MyDrive/RS_Kaggle_Competition/test/test_file.csv") test_data = test_data.iloc[:,:-1] test_data[scaling_cols] = scaler.transform(test_data[scaling_cols]) X_test = test_data.fillna("Unknown") X_test pred_proba = model.predict_proba(X_test) submission_dir_path = "/content/drive/MyDrive/RS_Kaggle_Competition/submissions" pred = pred_proba[:,1] pred_df = pd.DataFrame(pred) pred_df.reset_index(inplace=True) pred_df.columns = ['Id', 'Predicted'] pred_df.to_csv(submission_dir_path + '/catboost_submission_datafrom1704_data_lr_0.7_with_scale_with_num_startdate_with_user_iters_159.csv', index=False)
_____no_output_____
MIT
CTR Prediction/RS_Kaggle_Catboost.ipynb
amitdamri/Recommendation-Systems-Course
Random Search Algorithms

Importing Necessary Libraries
import six
import sys
sys.modules['sklearn.externals.six'] = six
import mlrose
import numpy as np
import pandas as pd
import seaborn as sns
import mlrose_hiive
import matplotlib.pyplot as plt

np.random.seed(44)
sns.set_style("darkgrid")
_____no_output_____
Apache-2.0
Randomized Optimization/NQueens.ipynb
cindynyoumsigit/MachineLearning
Defining a Fitness Function Object
# Define alternative N-Queens fitness function for maximization problem
def queens_max(state):
    # Initialize counter
    fitness = 0

    # For all pairs of queens
    for i in range(len(state) - 1):
        for j in range(i + 1, len(state)):
            # Check for horizontal, diagonal-up and diagonal-down attacks
            if (state[j] != state[i]) \
                and (state[j] != state[i] + (j - i)) \
                and (state[j] != state[i] - (j - i)):
                # If no attacks, then increment counter
                fitness += 1

    return fitness

# Initialize custom fitness function object
fitness_cust = mlrose.CustomFitness(queens_max)
_____no_output_____
Apache-2.0
Randomized Optimization/NQueens.ipynb
cindynyoumsigit/MachineLearning
Defining an Optimization Problem Object
%%time
# DiscreteOpt() takes integers in range 0 to max_val - 1, defined at initialization
number_of_queens = 16
problem = mlrose_hiive.DiscreteOpt(length=number_of_queens, fitness_fn=fitness_cust,
                                   maximize=True, max_val=number_of_queens)
CPU times: user 138 µs, sys: 79 µs, total: 217 µs Wall time: 209 µs
Apache-2.0
Randomized Optimization/NQueens.ipynb
cindynyoumsigit/MachineLearning
Optimization 1 Simulated Annealing
%%time sa = mlrose_hiive.SARunner(problem, experiment_name="SA_Exp", iteration_list=[10000], temperature_list=[10, 50, 100, 250, 500], decay_list=[mlrose_hiive.ExpDecay, mlrose_hiive.GeomDecay], seed=44, max_attempts=100) sa_run_stats, sa_run_curves = sa.run() last_iters = sa_run_stats[sa_run_stats.Iteration != 0].reset_index() print('Mean:', last_iters.Fitness.mean(), '\nMin:',last_iters.Fitness.max(),'\nMax:',last_iters.Fitness.max()) print('Mean Time;',last_iters.Time.mean()) best_index_in_curve = sa_run_curves.Fitness.idxmax() best_decay = sa_run_curves.iloc[best_index_in_curve].Temperature best_curve = sa_run_curves.loc[sa_run_curves.Temperature == best_decay, :] best_curve.reset_index(inplace=True) best_decay best_index_in_curve = sa_run_curves.Fitness.idxmax() best_decay = sa_run_curves.iloc[best_index_in_curve].Temperature best_sa_curve = sa_run_curves.loc[sa_run_curves.Temperature == best_decay, :] best_sa_curve.reset_index(inplace=True) # draw lineplot sa_run_curves['Temperature'] = sa_run_curves['Temperature'].astype(str).astype(float) sa_run_curves_t1 = sa_run_curves[sa_run_curves['Temperature'] == 10] sa_run_curves_t2 = sa_run_curves[sa_run_curves['Temperature'] == 50] sa_run_curves_t3 = sa_run_curves[sa_run_curves['Temperature'] == 100] sa_run_curves_t4 = sa_run_curves[sa_run_curves['Temperature'] == 250] sa_run_curves_t5 = sa_run_curves[sa_run_curves['Temperature'] == 500] sns.lineplot(x="Iteration", y="Fitness", data=sa_run_curves_t1, label = "t = 10") sns.lineplot(x="Iteration", y="Fitness", data=sa_run_curves_t2, label = "t = 50") sns.lineplot(x="Iteration", y="Fitness", data=sa_run_curves_t3, label = "t = 100") sns.lineplot(x="Iteration", y="Fitness", data=sa_run_curves_t4, label = "t = 250") sns.lineplot(x="Iteration", y="Fitness", data=sa_run_curves_t5, label = "t = 500") plt.title('16-Queens SA Fitness Vs Iterations') plt.show() sa_run_curves
_____no_output_____
Apache-2.0
Randomized Optimization/NQueens.ipynb
cindynyoumsigit/MachineLearning
Optimization 2 Genetic Algorithm
%%time ga = mlrose_hiive.GARunner(problem=problem, experiment_name="GA_Exp", seed=44, iteration_list = [10000], max_attempts = 100, population_sizes = [100, 500], mutation_rates = [0.1, 0.25, 0.5]) ga_run_stats, ga_run_curves = ga.run() last_iters = ga_run_stats[ga_run_stats.Iteration != 0].reset_index() print("Max and mean") print(last_iters.Fitness.max(), last_iters.Fitness.mean(), last_iters.Time.mean()) print(last_iters.groupby("Mutation Rate").Fitness.mean()) print(last_iters.groupby("Population Size").Fitness.mean()) print(last_iters.groupby("Population Size").Time.mean()) # draw lineplot ga_run_curves_mu1 = ga_run_curves[ga_run_curves['Mutation Rate'] == 0.1] ga_run_curves_mu2 = ga_run_curves[ga_run_curves['Mutation Rate'] == 0.25] ga_run_curves_mu3 = ga_run_curves[ga_run_curves['Mutation Rate'] == 0.5] sns.lineplot(x="Iteration", y="Fitness", data=ga_run_curves_mu1, label = "mr = 0.1") sns.lineplot(x="Iteration", y="Fitness", data=ga_run_curves_mu2, label = "mr = 0.25") sns.lineplot(x="Iteration", y="Fitness", data=ga_run_curves_mu3, label = "mr = 0.5") plt.title('16-Queens GA Fitness Vs Iterations') plt.show()
_____no_output_____
Apache-2.0
Randomized Optimization/NQueens.ipynb
cindynyoumsigit/MachineLearning
Optimization 3 MIMIC
%%time mmc = mlrose_hiive.MIMICRunner(problem=problem, experiment_name="MMC_Exp", seed=44, iteration_list=[10000], max_attempts=100, population_sizes=[100,500], keep_percent_list=[0.1, 0.25, 0.5], use_fast_mimic=True) # the two data frames will contain the results mmc_run_stats, mmc_run_curves = mmc.run() last_iters = mmc_run_stats[mmc_run_stats.Iteration != 0].reset_index() print("Max and mean") print(last_iters.Fitness.max(), last_iters.Fitness.mean(), last_iters.Time.mean()) print(last_iters.groupby("Keep Percent").Fitness.mean()) print(last_iters.groupby("Population Size").Fitness.mean()) print(last_iters.groupby("Population Size").Time.mean()) mmc_run_curves # draw lineplot mmc_run_curves_kp1 = mmc_run_curves[mmc_run_curves['Keep Percent'] == 0.1] mmc_run_curves_kp2 = mmc_run_curves[mmc_run_curves['Keep Percent'] == 0.25] mmc_run_curves_kp3 = mmc_run_curves[mmc_run_curves['Keep Percent'] == 0.5] sns.lineplot(x="Iteration", y="Fitness", data=mmc_run_curves_kp1, label = "kp = 0.1") sns.lineplot(x="Iteration", y="Fitness", data=mmc_run_curves_kp2, label = "kp = 0.25") sns.lineplot(x="Iteration", y="Fitness", data=mmc_run_curves_kp3, label = "kp = 0.5") plt.title('16-Queens MIMIC Fitness Vs Iterations') plt.show()
_____no_output_____
Apache-2.0
Randomized Optimization/NQueens.ipynb
cindynyoumsigit/MachineLearning
Optimization 4 Randomized Hill Climbing
%%time runner_return = mlrose_hiive.RHCRunner(problem, experiment_name="first_try", iteration_list=[10000], seed=44, max_attempts=100, restart_list=[100]) rhc_run_stats, rhc_run_curves = runner_return.run() last_iters = rhc_run_stats[rhc_run_stats.Iteration != 0].reset_index() print(last_iters.Fitness.mean(), last_iters.Fitness.max()) print(last_iters.Time.max()) best_index_in_curve = rhc_run_curves.Fitness.idxmax() best_decay = rhc_run_curves.iloc[best_index_in_curve].current_restart best_RHC_curve = rhc_run_curves.loc[rhc_run_curves.current_restart == best_decay, :] best_RHC_curve.reset_index(inplace=True) best_RHC_curve # draw lineplot sns.lineplot(x="Iteration", y="Fitness", data=best_RHC_curve) plt.title('16-Queens RHC Fitness Vs Iterations') plt.show() sns.lineplot(x="Iteration", y="Fitness", data=ga_run_curves_mu3, label = "GA") sns.lineplot(x="Iteration", y="Fitness", data=best_sa_curve, label = "SA") sns.lineplot(x="Iteration", y="Fitness", data=mmc_run_curves, label = "MIMIC") sns.lineplot(x="Iteration", y="Fitness", data=best_RHC_curve, label = "RHC") plt.show()
_____no_output_____
Apache-2.0
Randomized Optimization/NQueens.ipynb
cindynyoumsigit/MachineLearning
Performance Tuning Guide

**Author**: `Szymon Migacz`

Performance Tuning Guide is a set of optimizations and best practices which can accelerate training and inference of deep learning models in PyTorch. Presented techniques often can be implemented by changing only a few lines of code and can be applied to a wide range of deep learning models across all domains.

General optimizations

Enable async data loading and augmentation

``torch.utils.data.DataLoader`` supports asynchronous data loading and data augmentation in separate worker subprocesses. The default setting for ``DataLoader`` is ``num_workers=0``, which means that the data loading is synchronous and done in the main process. As a result the main training process has to wait for the data to be available to continue the execution.

Setting ``num_workers > 0`` enables asynchronous data loading and overlap between the training and data loading. ``num_workers`` should be tuned depending on the workload, CPU, GPU, and location of training data.

``DataLoader`` accepts a ``pin_memory`` argument, which defaults to ``False``. When using a GPU it's better to set ``pin_memory=True``; this instructs ``DataLoader`` to use pinned memory and enables faster and asynchronous memory copy from the host to the GPU.

Disable gradient calculation for validation or inference

PyTorch saves intermediate buffers from all operations which involve tensors that require gradients. Typically gradients aren't needed for validation or inference. The ``torch.no_grad()`` context manager can be applied to disable gradient calculation within a specified block of code; this accelerates execution and reduces the amount of required memory. ``torch.no_grad()`` can also be used as a function decorator.

Disable bias for convolutions directly followed by a batch norm

``torch.nn.Conv2d()`` has a ``bias`` parameter which defaults to ``True`` (the same is true for ``Conv1d`` and ``Conv3d``). If a ``nn.Conv2d`` layer is directly followed by a ``nn.BatchNorm2d`` layer, then the bias in the convolution is not needed; instead use ``nn.Conv2d(..., bias=False, ....)``. Bias is not needed because in the first step ``BatchNorm`` subtracts the mean, which effectively cancels out the effect of bias.

This is also applicable to 1d and 3d convolutions as long as ``BatchNorm`` (or another normalization layer) normalizes on the same dimension as the convolution's bias. Models available from ``torchvision`` already implement this optimization.

Use parameter.grad = None instead of model.zero_grad() or optimizer.zero_grad()

Instead of calling:
model.zero_grad() # or optimizer.zero_grad()
_____no_output_____
BSD-3-Clause
docs/_downloads/a584d8a4ce8e691ca795984e7a5eedbd/tuning_guide.ipynb
junhyung9985/PyTorch-tutorials-kr
to zero out gradients, use the following method instead:
for param in model.parameters():
    param.grad = None
_____no_output_____
BSD-3-Clause
docs/_downloads/a584d8a4ce8e691ca795984e7a5eedbd/tuning_guide.ipynb
junhyung9985/PyTorch-tutorials-kr
The second code snippet does not zero the memory of each individual parameter; also, the subsequent backward pass uses assignment instead of addition to store gradients, which reduces the number of memory operations.

Setting the gradient to ``None`` has a slightly different numerical behavior than setting it to zero; for more details refer to the documentation. Alternatively, starting from PyTorch 1.7, call ``model.zero_grad(set_to_none=True)`` or ``optimizer.zero_grad(set_to_none=True)``.

Fuse pointwise operations

Pointwise operations (elementwise addition, multiplication, math functions such as ``sin()``, ``cos()``, ``sigmoid()``, etc.) can be fused into a single kernel to amortize memory access time and kernel launch time. PyTorch JIT can fuse kernels automatically, although there could be additional fusion opportunities not yet implemented in the compiler, and not all device types are supported equally.

Pointwise operations are memory-bound; for each operation PyTorch launches a separate kernel. Each kernel loads data from memory, performs computation (this step is usually inexpensive) and stores results back into memory. A fused operator launches only one kernel for multiple fused pointwise ops and loads/stores data only once. This makes JIT very useful for activation functions, optimizers, custom RNN cells etc.

In the simplest case fusion can be enabled by applying the ``torch.jit.script`` decorator to the function definition, for example:
@torch.jit.script
def fused_gelu(x):
    return x * 0.5 * (1.0 + torch.erf(x / 1.41421))
_____no_output_____
BSD-3-Clause
docs/_downloads/a584d8a4ce8e691ca795984e7a5eedbd/tuning_guide.ipynb
junhyung9985/PyTorch-tutorials-kr
Refer to the TorchScript documentation for more advanced use cases.

Enable channels_last memory format for computer vision models

PyTorch 1.5 introduced support for the ``channels_last`` memory format for convolutional networks. This format is meant to be used in conjunction with AMP to further accelerate convolutional neural networks with Tensor Cores. Support for ``channels_last`` is experimental, but it's expected to work for standard computer vision models (e.g. ResNet-50, SSD). To convert models to the ``channels_last`` format follow the Channels Last Memory Format Tutorial, which includes a section on converting existing models.

Checkpoint intermediate buffers

Buffer checkpointing is a technique to mitigate the memory capacity burden of model training. Instead of storing inputs of all layers to compute upstream gradients in backward propagation, it stores the inputs of a few layers and the others are recomputed during the backward pass. The reduced memory requirements enable increasing the batch size, which can improve utilization.

Checkpointing targets should be selected carefully. It is best not to store large layer outputs that have a small re-computation cost. Example target layers are activation functions (e.g. ``ReLU``, ``Sigmoid``, ``Tanh``), up/down sampling and matrix-vector operations with small accumulation depth. PyTorch supports a native ``torch.utils.checkpoint`` API to automatically perform checkpointing and recomputation.

Disable debugging APIs

Many PyTorch APIs are intended for debugging and should be disabled for regular training runs:

* anomaly detection: ``torch.autograd.detect_anomaly`` or ``torch.autograd.set_detect_anomaly(True)``
* profiler related: ``torch.autograd.profiler.emit_nvtx``, ``torch.autograd.profiler.profile``
* autograd gradcheck: ``torch.autograd.gradcheck`` or ``torch.autograd.gradgradcheck``

GPU specific optimizations

Enable cuDNN auto-tuner

NVIDIA cuDNN supports many algorithms to compute a convolution. The autotuner runs a short benchmark and selects the kernel with the best performance on the given hardware for a given input size. For convolutional networks (other types are currently not supported), enable the cuDNN autotuner before launching the training loop by setting:
torch.backends.cudnn.benchmark = True
_____no_output_____
BSD-3-Clause
docs/_downloads/a584d8a4ce8e691ca795984e7a5eedbd/tuning_guide.ipynb
junhyung9985/PyTorch-tutorials-kr
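As a small illustration of two of the recommendations above (asynchronous data loading with pinned memory, and dropping the convolution bias before a batch norm), a minimal sketch; the toy dataset and layer shapes are placeholders, not code from the guide:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# toy dataset standing in for a real training set
dataset = TensorDataset(torch.randn(1024, 3, 32, 32), torch.randint(0, 10, (1024,)))

# num_workers > 0 loads batches in worker subprocesses; pin_memory=True enables
# faster, asynchronous host-to-GPU copies when a GPU is used
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4, pin_memory=True)

# Conv2d directly followed by BatchNorm2d: bias=False, since BatchNorm's mean
# subtraction cancels out the bias anyway
block = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm2d(16),
    nn.ReLU(),
)

with torch.no_grad():  # gradients are not needed for inference
    for images, _ in loader:
        out = block(images)
        break
```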
78. Subsets

__Difficulty__: Medium
[Link](https://leetcode.com/problems/subsets/)

Given an integer array `nums` of unique elements, return all possible subsets (the power set). The solution set must not contain duplicate subsets. Return the solution in any order.

__Example 1__:
Input: `nums = [1,2,3]`
Output: `[[],[1],[2],[1,2],[3],[1,3],[2,3],[1,2,3]]`
from typing import List
_____no_output_____
Apache-2.0
leetcode/78_subsets.ipynb
Kaushalya/algo_journal
DFS Approach
class SolutionDFS: def dfs(self, res, nums): if len(nums)==0: return [res] ans = [] for i, num in enumerate(nums): # print(res+[num]) ans.extend(self.dfs(res+[num], nums[i+1:])) ans.append(res) # print(ans) return ans def subsets(self, nums: List[int]) -> List[List[int]]: return self.dfs([], nums)
_____no_output_____
Apache-2.0
leetcode/78_subsets.ipynb
Kaushalya/algo_journal
Using a bit-mask to indicate selected items from the list of numbers
class SolutionMask: def subsets(self, nums: List[int]) -> List[List[int]]: combs = [] n = len(nums) for mask in range(0, 2**n): i = 0 rem = mask current_set = [] while rem: if rem%2: current_set.append(nums[i]) rem = rem//2 i += 1 combs.append(current_set) return combs
_____no_output_____
Apache-2.0
leetcode/78_subsets.ipynb
Kaushalya/algo_journal
A cleaner and more efficient implementation using a bit mask.
class SolutionMask2: def subsets(self, nums: List[int]) -> List[List[int]]: res = [] n = len(nums) nth_bit = 1<<n for i in range(2**n): # To create a bit-mask with length n bit_mask = bin(i | nth_bit)[3:] res.append([nums[j] for j in range(n) if bit_mask[j]=='1']) return res
_____no_output_____
Apache-2.0
leetcode/78_subsets.ipynb
Kaushalya/algo_journal
Test cases
# Example 1 nums1 = [1,2,3] res1 = [[],[1],[2],[1,2],[3],[1,3],[2,3],[1,2,3]] # Example 2 nums2 = [0] res2 = [[],[0]] # Example 3 nums3 = [0, -2, 5, -7, 9] res3 = [[0,-2,5,-7,9],[0,-2,5,-7],[0,-2,5,9],[0,-2,5],[0,-2,-7,9],[0,-2,-7],[0,-2,9],[0,-2],[0,5,-7,9],[0,5,-7],[0,5,9],[0,5],[0,-7,9],[0,-7],[0,9],[0],[-2,5,-7,9],[-2,5,-7],[-2,5,9],[-2,5],[-2,-7,9],[-2,-7],[-2,9],[-2],[5,-7,9],[5,-7],[5,9],[5],[-7,9],[-7],[9],[]] def test_function(inp, result): assert len(inp)==len(result) inp_set = [set(x) for x in inp] res_set = [set(x) for x in result] for i in inp_set: assert i in res_set # Test DFS method dfs_sol = SolutionDFS() test_function(dfs_sol.subsets(nums1), res1) test_function(dfs_sol.subsets(nums2), res2) test_function(dfs_sol.subsets(nums3), res3) # Test bit-mask method mask_sol = SolutionMask() test_function(mask_sol.subsets(nums1), res1) test_function(mask_sol.subsets(nums2), res2) test_function(mask_sol.subsets(nums3), res3) # Test bit-mask method mask_sol = SolutionMask2() test_function(mask_sol.subsets(nums1), res1) test_function(mask_sol.subsets(nums2), res2) test_function(mask_sol.subsets(nums3), res3)
_____no_output_____
Apache-2.0
leetcode/78_subsets.ipynb
Kaushalya/algo_journal
**Matrix factorization** is a class of collaborative filtering algorithms used in recommender systems. **Matrix factorization** approximates a given rating matrix as a product of two lower-rank matrices: it decomposes a rating matrix R (n × m) into a product of two matrices W (n × d) and U (m × d).

\begin{equation*}
\mathbf{R}_{n \times m} \approx \mathbf{\hat{R}} = \mathbf{W}_{n \times d} \times \mathbf{U}_{m \times d}^T
\end{equation*}
# install pyspark
!pip install pyspark
_____no_output_____
MIT
Matrix Factorization_PySpark/Matrix Factorization Recommendation_PySpark_solution.ipynb
abhisngh/Data-Science
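As a quick numerical illustration of the factorization idea (not part of the original notebook), a tiny NumPy sketch that rebuilds an approximate rating matrix from two low-rank factors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 4, 5, 2              # users, items, latent dimensions

W = rng.normal(size=(n, d))    # user factors, shape (n, d)
U = rng.normal(size=(m, d))    # item factors, shape (m, d)

R_hat = W @ U.T                # approximate n x m rating matrix
print(R_hat.shape)             # (4, 5)
```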
Importing the necessary libraries
# Import the necessary libraries
from pyspark import SparkContext, SQLContext  # required for dealing with dataframes
import numpy as np
from pyspark.ml.recommendation import ALS  # for Matrix Factorization using ALS

# instantiating spark context and SQL context (needed by the read calls below)
sc = SparkContext.getOrCreate()
sqlContext = SQLContext(sc)
_____no_output_____
MIT
Matrix Factorization_PySpark/Matrix Factorization Recommendation_PySpark_solution.ipynb
abhisngh/Data-Science
Step 1. Loading the data into a PySpark dataframe
# Read the dataset into a dataframe
jester_ratings_df = sqlContext.read.csv("/kaggle/input/jester-17m-jokes-ratings-dataset/jester_ratings.csv", header=True, inferSchema=True)

# Show the ratings
jester_ratings_df.show(5)

# Print the total number of ratings, unique users and unique jokes.
_____no_output_____
MIT
Matrix Factorization_PySpark/Matrix Factorization Recommendation_PySpark_solution.ipynb
abhisngh/Data-Science
Step 2. Splitting into train and test parts
# Split the dataset using randomSplit in a 90:10 ratio

# Print the training data size and the test data size

# Show the train set
X_train.show(5)

# Show the test set
_____no_output_____
MIT
Matrix Factorization_PySpark/Matrix Factorization Recommendation_PySpark_solution.ipynb
abhisngh/Data-Science
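The cell above is only scaffolding; a minimal sketch of one way to fill it in, assuming the ratings dataframe loaded in Step 1:

```python
# 90:10 random split of the ratings
X_train, X_test = jester_ratings_df.randomSplit([0.9, 0.1], seed=0)

print("Training data size:", X_train.count())
print("Test data size:", X_test.count())

X_train.show(5)
X_test.show(5)
```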
Step 3. Fitting an ALS model
# Fit an ALS model with rank=5, maxIter=10 and seed=0

# Display the latent features for five users
_____no_output_____
MIT
Matrix Factorization_PySpark/Matrix Factorization Recommendation_PySpark_solution.ipynb
abhisngh/Data-Science
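A hedged sketch of the ALS fit described above; the column names userId, jokeId and rating are assumptions about the Jester ratings file, not confirmed by the notebook:

```python
# Fit an ALS model with rank=5, maxIter=10 and seed=0
als = ALS(rank=5, maxIter=10, seed=0,
          userCol="userId", itemCol="jokeId", ratingCol="rating",
          coldStartStrategy="drop")
model = als.fit(X_train)

# latent features for five users
model.userFactors.show(5, truncate=False)
```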
Step 4. Making predictions
# Pass userId and jokeId from the test dataset as arguments

# Join X_test and the prediction dataframe, and drop the records for which no predictions were made
_____no_output_____
MIT
Matrix Factorization_PySpark/Matrix Factorization Recommendation_PySpark_solution.ipynb
abhisngh/Data-Science
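A possible completion of the prediction step, building on the Step 2 and Step 3 sketches and again assuming the userId/jokeId/rating column names:

```python
# Predict ratings for the (userId, jokeId) pairs in the test set
predictions = model.transform(X_test.select("userId", "jokeId"))

# Join the predictions back onto X_test and drop rows with no prediction
test_with_preds = X_test.join(predictions, on=["userId", "jokeId"], how="left").na.drop()
test_with_preds.show(5)
```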
Step 5. Evaluating the model
# Convert the columns into numpy arrays for direct and easy calculations

# Also print the RMSE
_____no_output_____
MIT
Matrix Factorization_PySpark/Matrix Factorization Recommendation_PySpark_solution.ipynb
abhisngh/Data-Science
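One way to compute the RMSE with NumPy, as the comments suggest; a sketch assuming the joined dataframe from the Step 4 sketch:

```python
# Pull the actual and predicted ratings into NumPy arrays
actual = np.array(test_with_preds.select("rating").collect()).ravel()
predicted = np.array(test_with_preds.select("prediction").collect()).ravel()

rmse = np.sqrt(np.mean((actual - predicted) ** 2))
print("RMSE:", rmse)
```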
Step 6. Recommending jokes
# Recommend top 3 jokes for all the users with highest predicted rating
_____no_output_____
MIT
Matrix Factorization_PySpark/Matrix Factorization Recommendation_PySpark_solution.ipynb
abhisngh/Data-Science
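A minimal sketch of the recommendation step using the built-in ALS helper, which returns, per user, the jokes with the highest predicted ratings:

```python
# Top 3 joke recommendations per user, ranked by predicted rating
user_recs = model.recommendForAllUsers(3)
user_recs.show(5, truncate=False)
```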
Simple Flavor Mixing

Illustrate very basic neutrino flavor mixing in supernova neutrinos using the `SimpleMixing` class in ASTERIA.
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import astropy.units as u

from asteria import config, source
from asteria.neutrino import Flavor
from asteria.oscillation import SimpleMixing

mpl.rc('font', size=16)
_____no_output_____
BSD-3-Clause
docs/nb/simplemixing_class.ipynb
IceCubeOpenSource/USSR
Load CCSN Neutrino Model

Load a neutrino luminosity model (see the YAML documentation).
conf = config.load_config('../../data/config/test.yaml')
ccsn = source.initialize(conf)
_____no_output_____
BSD-3-Clause
docs/nb/simplemixing_class.ipynb
IceCubeOpenSource/USSR
Basic Mixing

Set up the mixing class, which only depends on $\theta_{12}$. See [nu-fit.org](http://www.nu-fit.org/) for current values of the PMNS mixing angles.
# Use theta_12 in degrees.
# To do: explicitly use astropy units for input.
mix = SimpleMixing(33.8)
_____no_output_____
BSD-3-Clause
docs/nb/simplemixing_class.ipynb
IceCubeOpenSource/USSR
Mix the Flavors

Apply the mixing and plot the resulting flux curves for the unmixed case and assuming the normal and inverted neutrino mass hierarchies.
fig, axes = plt.subplots(1, 3, figsize=(13,3.5), sharex=True, sharey=True) ax1, ax2, ax3 = axes t = np.linspace(-0.1, 1, 201) * u.s # UNMIXED for ls, flavor in zip(["-", "--", "-.", ":"], Flavor): flux = ccsn.get_flux(t, flavor) ax1.plot(t, flux, ls, lw=2, label=flavor.to_tex(), alpha=0.7) ax1.set_title("Unmixed") # plt.yscale('log') # plt.ylim(3e51, 5e53) ax1.set(xlabel='time - $t_{bounce}$ [s]', ylabel='flux') ax1.legend() # NORMAL MIXING nu_list1 = [] i = 0 for flavor in Flavor: flux = ccsn.get_flux(t, flavor) nu_list1.append(flux) nu_new1 = mix.normal_mixing(nu_list1) for ls, i, flavor in zip(["-", "--", "-.", ":"], range(len(nu_new1)), Flavor): new_flux1 = nu_new1[i] ax2.plot(t, new_flux1, ls, lw=2, alpha=0.7, label=flavor.to_tex()) ax2.set_title(label="Normal Mixing") ax2.set(xlabel='time - $t_{bounce}$ [s]', ylabel='flux') ax2.legend() # INVERTED MIXING nu_list2 = [] i = 0 for flavor in Flavor: flux = ccsn.get_flux(t, flavor) nu_list2.append(flux) nu_new2 = mix.inverted_mixing(nu_list1) for ls, i, flavor in zip(["-", "--", "-.", ":"], range(len(nu_new2)), Flavor): new_flux2 = nu_new2[i] ax3.plot(t, new_flux2, ls, lw=2, alpha=0.7, label=flavor.to_tex()) ax3.set_title(label="Inverted Mixing") ax3.set(xlabel='time - $t_{bounce}$ [s]', ylabel='flux') ax3.legend() fig.tight_layout();
_____no_output_____
BSD-3-Clause
docs/nb/simplemixing_class.ipynb
IceCubeOpenSource/USSR
Imputation

Imputation means replacing missing data (missing value; NaN; blank) with a substitute value.

Mean
import pandas as pd
import numpy as np

kolom = {'col1': [2, 9, 19],
         'col2': [5, np.nan, 17],
         'col3': [3, 9, np.nan],
         'col4': [6, 0, 9],
         'col5': [np.nan, 7, np.nan]}
data = pd.DataFrame(kolom)
data

# fill missing values with the column mean
data.fillna(data.mean())
_____no_output_____
MIT
Pertemuan 8/Imputasi.ipynb
grudasakti/metdat-science
Arbitrary (any chosen value)
umur = {'umur': [29, 43, np.nan, 25, 34, np.nan, 50]}
data = pd.DataFrame(umur)
data

# replace missing values with an arbitrary value, e.g. 99
data.fillna(99)
_____no_output_____
MIT
Pertemuan 8/Imputasi.ipynb
grudasakti/metdat-science
End of Tail
umur = {'umur': [29, 43, np.nan, 25, 34, np.nan, 50]}
data = pd.DataFrame(umur)
data

# install the feature-engine library
pip install feature-engine

# import EndTailImputer
from feature_engine.imputation import EndTailImputer

# create the imputer
imputer = EndTailImputer(imputation_method='gaussian', tail='right')

# fit the imputer to the data
imputer.fit(data)

# transform the data
test_data = imputer.transform(data)

# show the data
test_data
_____no_output_____
MIT
Pertemuan 8/Imputasi.ipynb
grudasakti/metdat-science
Categorical Data

Mode
from sklearn.impute import SimpleImputer

mobil = {'mobil': ['Ford', 'Ford', 'Toyota', 'Honda', np.nan, 'Toyota', 'Honda', 'Toyota', np.nan, np.nan]}
data = pd.DataFrame(mobil)
data

# impute with the most frequent value (mode)
imp = SimpleImputer(strategy='most_frequent')
imp.fit_transform(data)
_____no_output_____
MIT
Pertemuan 8/Imputasi.ipynb
grudasakti/metdat-science
Random Sample
# import RandomSampleImputer
from feature_engine.imputation import RandomSampleImputer

# create data with missing values
data = {'Jenis Kelamin': ['Laki-laki', 'Perempuan', 'Laki-laki', np.nan, 'Laki-laki', 'Perempuan', 'Perempuan', np.nan, 'Laki-laki', np.nan],
        'Umur': [29, np.nan, 32, 43, 50, 22, np.nan, 52, np.nan, 17]}
df = pd.DataFrame(data)
df

# create the imputer
imputer = RandomSampleImputer(random_state=29)

# fit it
imputer.fit(df)

# transform the data
testing_df = imputer.transform(df)
testing_df
_____no_output_____
MIT
Pertemuan 8/Imputasi.ipynb
grudasakti/metdat-science
Chapter 8 - Applying Machine Learning To Sentiment Analysis

Overview
- [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset)
- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model)
  - [Transforming words into feature vectors](Transforming-words-into-feature-vectors)
  - [Assessing word relevancy via term frequency-inverse document frequency](Assessing-word-relevancy-via-term-frequency-inverse-document-frequency)
  - [Cleaning text data](Cleaning-text-data)
  - [Processing documents into tokens](Processing-documents-into-tokens)
- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)
- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)
- [Summary](Summary)

NLP: Natural Language Processing

Sentiment Analysis (Opinion Mining)
Analyzes the polarity of documents
- Expressed opinions or emotions of the authors with regard to a particular topic

Obtaining the IMDb movie review dataset
- IMDb: the Internet Movie Database
- IMDb dataset
  - A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning Word Vectors for Sentiment Analysis. In the proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics
- 50,000 movie reviews labeled either *positive* or *negative*

The IMDb movie review set can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/). After downloading the dataset, decompress the files: `aclImdb_v1.tar.gz`
import pyprind import pandas as pd import os # change the `basepath` to the directory of the # unzipped movie dataset basepath = '/Users/sklee/datasets/aclImdb/' labels = {'pos': 1, 'neg': 0} pbar = pyprind.ProgBar(50000) df = pd.DataFrame() for s in ('test', 'train'): for l in ('pos', 'neg'): path = os.path.join(basepath, s, l) for file in os.listdir(path): with open(os.path.join(path, file), 'r', encoding='utf-8') as infile: txt = infile.read() df = df.append([[txt, labels[l]]], ignore_index=True) pbar.update() df.columns = ['review', 'sentiment'] df.head(5)
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
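The cell above assumes that `aclImdb_v1.tar.gz` has already been downloaded and extracted into `basepath`. A minimal sketch of that preparation step (the direct download URL is inferred from the dataset page and archive name mentioned above):

import tarfile
import urllib.request

# assumed direct link, based on the dataset page and archive name given above
url = 'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'
target = 'aclImdb_v1.tar.gz'

urllib.request.urlretrieve(url, target)
with tarfile.open(target, 'r:gz') as tar:
    tar.extractall()  # creates the aclImdb/ directory with train/ and test/ subfolders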
Shuffling the DataFrame:
import numpy as np np.random.seed(0) df = df.reindex(np.random.permutation(df.index)) df.head(5) df.to_csv('./movie_data.csv', index=False)
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Introducing the bag-of-words model - **Vocabulary** : the collection of unique tokens (e.g. words) from the entire set of documents- Construct a feature vector from each document - Vector length = length of the vocabulary - Contains the counts of how often each token occurs in the particular document - Sparse vectors Transforming documents into feature vectors By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors:1. The sun is shining2. The weather is sweet3. The sun is shining, the weather is sweet, and one and one is two
import numpy as np from sklearn.feature_extraction.text import CountVectorizer count = CountVectorizer() docs = np.array([ 'The sun is shining', 'The weather is sweet', 'The sun is shining, the weather is sweet, and one and one is two']) bag = count.fit_transform(docs)
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
print(count.vocabulary_)
{'the': 6, 'sun': 4, 'is': 1, 'shining': 3, 'weather': 8, 'sweet': 5, 'and': 0, 'one': 2, 'two': 7}
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary that maps the unique words to integer indices. Next let us print the feature vectors that we just created: Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary. For example, the first feature at index position 0 corresponds to the count of the word "and", which only occurs in the last document, and the word "is" at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: *tf(t, d)*, the number of times a term *t* occurs in a document *d*.
print(bag.toarray())
[[0 1 0 1 1 0 1 0 0] [0 1 0 0 0 1 1 0 1] [2 3 2 1 1 1 2 1 1]]
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Those count values are called the **raw term frequency tf(t,d)** - t: term - d: document The **n-gram** Models - 1-gram: "the", "sun", "is", "shining" - 2-gram: "the sun", "sun is", "is shining" - CountVectorizer(ngram_range=(2,2)); a short 2-gram sketch is shown after the next code cell Assessing word relevancy via term frequency-inverse document frequency
np.set_printoptions(precision=2)
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
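As a quick illustration of the n-gram option mentioned above, here is a small sketch that builds 2-gram features for the same three example sentences:

from sklearn.feature_extraction.text import CountVectorizer

docs = ['The sun is shining',
        'The weather is sweet',
        'The sun is shining, the weather is sweet, and one and one is two']

# ngram_range=(2, 2) builds a vocabulary of 2-grams such as 'the sun' and 'sun is'
count_2gram = CountVectorizer(ngram_range=(2, 2))
bag_2gram = count_2gram.fit_transform(docs)
print(count_2gram.vocabulary_)
print(bag_2gram.toarray())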
- Frequently occurring words across multiple documents from both classes typically don't contain useful or discriminatory information. - ** Term frequency-inverse document frequency (tf-idf)** can be used to downweight those frequently occurring words in the feature vectors.$$\text{tf-idf}(t,d)=\text{tf (t,d)}\times \text{idf}(t,d)$$ - **tf(t, d) the term frequency** - **idf(t, d) the inverse document frequency**:$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$ - $n_d$ is the total number of documents - **df(d, t) document frequency**: the number of documents *d* that contain the term *t*. - Note that adding the constant 1 to the denominator is optional and serves the purpose of assigning a non-zero value to terms that occur in all training samples; the log is used to ensure that low document frequencies are not given too much weight.Scikit-learn implements yet another transformer, the `TfidfTransformer`, that takes the raw term frequencies from `CountVectorizer` as input and transforms them into tf-idfs:
from sklearn.feature_extraction.text import TfidfTransformer tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True) print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
[[ 0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ] [ 0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56] [ 0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
As we saw in the previous subsection, the word "is" had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word "is" is now associated with a relatively small tf-idf (0.45) in document 3, since it is also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information. However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the `TfidfTransformer` calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The equations for the idf and tf-idf that were implemented in scikit-learn are: $$\text{idf}(t,d) = \log\frac{1 + n_d}{1 + \text{df}(d, t)}$$ The tf-idf equation that was implemented in scikit-learn is as follows: $$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$ While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the `TfidfTransformer` normalizes the tf-idfs directly. By default (`norm='l2'`), scikit-learn's TfidfTransformer applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector *v* by its L2-norm: $$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big(\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$ To make sure that we understand how TfidfTransformer works, let us walk through an example and calculate the tf-idf of the word "is" in the 3rd document. The word "is" has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term "is" occurs in all three documents (df = 3). Thus, we can calculate the idf as follows: $$\text{idf}("is", d3) = \log \frac{1+3}{1+3} = 0$$ Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency: $$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
tf_is = 3 n_docs = 3 idf_is = np.log((n_docs+1) / (3+1)) tfidf_is = tf_is * (idf_is + 1) print('tf-idf of term "is" = %.2f' % tfidf_is)
tf-idf of term "is" = 3.00
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vector: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the TfidfTransformer that we used previously. The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows: $$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]}{\sqrt{3.39^2 + 3.0^2 + 3.39^2 + 1.29^2 + 1.29^2 + 1.29^2 + 2.0^2 + 1.69^2 + 1.29^2}}$$ $$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$ $$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$ As we can see, the results match the results returned by scikit-learn's `TfidfTransformer` (below). Since we now understand how tf-idfs are calculated, let us proceed to the next sections and apply those concepts to the movie review dataset.
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True) raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1] raw_tfidf l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2)) l2_tfidf
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Cleaning text data It is important to clean the text data (e.g., stripping HTML markup and punctuation while keeping emoticons) **before** we build the bag-of-words model.
df.loc[112, 'review'][-1000:]
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Python regular expression library
import re def preprocessor(text): text = re.sub('<[^>]*>', '', text) emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text) text = re.sub('[\W]+', ' ', text.lower()) +\ ' '.join(emoticons).replace('-', '') return text preprocessor(df.loc[112, 'review'][-1000:]) preprocessor("</a>This :) is :( a test :-)!") df['review'] = df['review'].apply(preprocessor)
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Processing documents into tokens Word Stemming Transforming a word into its root form - Original stemming algorithm: Martin F. Porter. An algorithm for suffix stripping. (Program: electronic library and information systems, 14(3):130–137, 1980) - Snowball stemmer (Porter2 or "English" stemmer) - Lancaster stemmer (Paice-Husk stemmer) Python NLP toolkit: NLTK (the Natural Language ToolKit) - Free online book http://www.nltk.org/book/
from nltk.stem.porter import PorterStemmer porter = PorterStemmer() def tokenizer(text): return text.split() def tokenizer_porter(text): return [porter.stem(word) for word in text.split()] tokenizer('runners like running and thus they run') tokenizer_porter('runners like running and thus they run')
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Lemmatization - Stemming can produce non-words (e.g., the Porter stemmer turns "thus" into "thu"); lemmatization instead tries to find the canonical (dictionary) forms of words - Computationally more expensive, with little impact on text classification performance - A short lemmatizer sketch follows the next code cell Stop-words Removal - Stop-words: extremely common words, e.g., is, and, has, like... - Removal is useful when we use raw or normalized tf, rather than tf-idf
import nltk nltk.download('stopwords') from nltk.corpus import stopwords stop = stopwords.words('english') [w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:] if w not in stop] stop[-10:]
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
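For comparison with the Porter stemmer, here is a minimal lemmatization sketch using NLTK's WordNetLemmatizer (downloading the 'wordnet' corpus is required; depending on the NLTK version, 'omw-1.4' may be needed as well):

import nltk
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

# without a part-of-speech tag the lemmatizer treats words as nouns,
# so 'running' stays 'running'; with pos='v' it becomes 'run'
print([lemmatizer.lemmatize(w) for w in 'runners like running and thus they run'.split()])
print(lemmatizer.lemmatize('running', pos='v'))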
Training a logistic regression model for document classification
X_train = df.loc[:25000, 'review'].values y_train = df.loc[:25000, 'sentiment'].values X_test = df.loc[25000:, 'review'].values y_test = df.loc[25000:, 'sentiment'].values from sklearn.pipeline import Pipeline from sklearn.linear_model import LogisticRegression from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.model_selection import GridSearchCV tfidf = TfidfVectorizer(strip_accents=None, lowercase=False, preprocessor=None) param_grid = [{'vect__ngram_range': [(1, 1)], 'vect__stop_words': [stop, None], 'vect__tokenizer': [tokenizer, tokenizer_porter], 'clf__penalty': ['l1', 'l2'], 'clf__C': [1.0, 10.0, 100.0]}, {'vect__ngram_range': [(1, 1)], 'vect__stop_words': [stop, None], 'vect__tokenizer': [tokenizer, tokenizer_porter], 'vect__use_idf':[False], 'vect__norm':[None], 'clf__penalty': ['l1', 'l2'], 'clf__C': [1.0, 10.0, 100.0]}, ] lr_tfidf = Pipeline([('vect', tfidf), ('clf', LogisticRegression(random_state=0))]) gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid, scoring='accuracy', cv=5, verbose=1, n_jobs=-1) gs_lr_tfidf.fit(X_train, y_train) print('Best parameter set: %s ' % gs_lr_tfidf.best_params_) print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_) # Best parameter set: {'vect__tokenizer': <function tokenizer at 0x11851c6a8>, 'clf__C': 10.0, 'vect__stop_words': None, 'clf__penalty': 'l2', 'vect__ngram_range': (1, 1)} # CV Accuracy: 0.897 clf = gs_lr_tfidf.best_estimator_ print('Test Accuracy: %.3f' % clf.score(X_test, y_test)) # Test Accuracy: 0.899
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
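Since the fitted pipeline contains the TfidfVectorizer as its first step, the best estimator found by the grid search can classify raw review text directly; a short usage sketch (the example sentences are made up):

# clf is the best pipeline found by the grid search above
new_reviews = ['This movie was a wonderful surprise, I loved every minute of it',
               'A dull, boring film that I could not finish']

# predict() accepts raw strings because the vectorizer is part of the pipeline
print(clf.predict(new_reviews))        # e.g. [1 0] -> positive, negative
print(clf.predict_proba(new_reviews))  # class probabilities from the logistic regression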
Working with bigger data - online algorithms and out-of-core learning
import numpy as np import re from nltk.corpus import stopwords def tokenizer(text): text = re.sub('<[^>]*>', '', text) emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower()) text = re.sub('[\W]+', ' ', text.lower()) +\ ' '.join(emoticons).replace('-', '') tokenized = [w for w in text.split() if w not in stop] return tokenized # reads in and returns one document at a time def stream_docs(path): with open(path, 'r', encoding='utf-8') as csv: next(csv) # skip header for line in csv: text, label = line[:-3], int(line[-2]) yield text, label doc_stream = stream_docs(path='./movie_data.csv') next(doc_stream)
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
Minibatch
def get_minibatch(doc_stream, size): docs, y = [], [] try: for _ in range(size): text, label = next(doc_stream) docs.append(text) y.append(label) except StopIteration: return None, None return docs, y
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
- We cannot use `CountVectorizer` since it requires holding the complete vocabulary in memory. Likewise, `TfidfVectorizer` needs to keep all feature vectors in memory. - We can use `HashingVectorizer` instead for online training (it uses the 32-bit MurmurHash3 algorithm by Austin Appleby, https://sites.google.com/site/murmurhash/)
from sklearn.feature_extraction.text import HashingVectorizer from sklearn.linear_model import SGDClassifier vect = HashingVectorizer(decode_error='ignore', n_features=2**21, preprocessor=None, tokenizer=tokenizer) clf = SGDClassifier(loss='log', random_state=1, max_iter=1) doc_stream = stream_docs(path='./movie_data.csv') import pyprind pbar = pyprind.ProgBar(45) classes = np.array([0, 1]) for _ in range(45): X_train, y_train = get_minibatch(doc_stream, size=1000) if not X_train: break X_train = vect.transform(X_train) clf.partial_fit(X_train, y_train, classes=classes) pbar.update() X_test, y_test = get_minibatch(doc_stream, size=5000) X_test = vect.transform(X_test) print('Accuracy: %.3f' % clf.score(X_test, y_test)) clf = clf.partial_fit(X_test, y_test)
_____no_output_____
MIT
chap11.ipynb
dev-strender/python-machine-learning-with-scikit-learn
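Because the HashingVectorizer used above is stateless, the incrementally trained SGD classifier can score new raw documents at any time; a short sketch (the example review is made up):

example = ['I really enjoyed this movie, the acting was excellent']

# transform raw text with the stateless hashing vectorizer, then predict
X_example = vect.transform(example)
print(clf.predict(X_example))        # e.g. [1] -> positive
print(clf.predict_proba(X_example))  # available because loss='log'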
! pip3 install git+https://github.com/extensive-nlp/ttc_nlp --quiet ! pip3 install torchmetrics --quiet from ttctext.datamodules.sst import SSTDataModule from ttctext.datasets.sst import StanfordSentimentTreeBank sst_dataset = SSTDataModule(batch_size=128) sst_dataset.setup() import pytorch_lightning as pl import torch import torch.nn as nn import torch.nn.functional as F from torchmetrics.functional import accuracy, precision, recall, confusion_matrix from sklearn.metrics import classification_report import matplotlib.pyplot as plt import seaborn as sns import pandas as pd sns.set() class SSTModel(pl.LightningModule): def __init__(self, hparams, *args, **kwargs): super().__init__() self.save_hyperparameters(hparams) self.num_classes = self.hparams.output_dim self.embedding = nn.Embedding(self.hparams.input_dim, self.hparams.embedding_dim) self.lstm = nn.LSTM( self.hparams.embedding_dim, self.hparams.hidden_dim, num_layers=self.hparams.num_layers, dropout=self.hparams.dropout, batch_first=True ) self.proj_layer = nn.Sequential( nn.Linear(self.hparams.hidden_dim, self.hparams.hidden_dim), nn.BatchNorm1d(self.hparams.hidden_dim), nn.ReLU(), nn.Dropout(self.hparams.dropout), ) self.fc = nn.Linear(self.hparams.hidden_dim, self.num_classes) self.loss = nn.CrossEntropyLoss() def init_state(self, sequence_length): return (torch.zeros(self.hparams.num_layers, sequence_length, self.hparams.hidden_dim).to(self.device), torch.zeros(self.hparams.num_layers, sequence_length, self.hparams.hidden_dim).to(self.device)) def forward(self, text, text_length, prev_state=None): # [batch size, sentence length] => [batch size, sentence len, embedding size] embedded = self.embedding(text) # packs the input for faster forward pass in RNN packed = torch.nn.utils.rnn.pack_padded_sequence( embedded, text_length.to('cpu'), enforce_sorted=False, batch_first=True ) # [batch size sentence len, embedding size] => # output: [batch size, sentence len, hidden size] # hidden: [batch size, 1, hidden size] packed_output, curr_state = self.lstm(packed, prev_state) hidden_state, cell_state = curr_state # print('hidden state shape: ', hidden_state.shape) # print('cell') # unpack packed sequence # unpacked, unpacked_len = torch.nn.utils.rnn.pad_packed_sequence(packed_output, batch_first=True) # print('unpacked: ', unpacked.shape) # [batch size, sentence len, hidden size] => [batch size, num classes] # output = self.proj_layer(unpacked[:, -1]) output = self.proj_layer(hidden_state[-1]) # print('output shape: ', output.shape) output = self.fc(output) return output, curr_state def shared_step(self, batch, batch_idx): label, text, text_length = batch logits, in_state = self(text, text_length) loss = self.loss(logits, label) pred = torch.argmax(F.log_softmax(logits, dim=1), dim=1) acc = accuracy(pred, label) metric = {'loss': loss, 'acc': acc, 'pred': pred, 'label': label} return metric def training_step(self, batch, batch_idx): metrics = self.shared_step(batch, batch_idx) log_metrics = {'train_loss': metrics['loss'], 'train_acc': metrics['acc']} self.log_dict(log_metrics, prog_bar=True) return metrics def validation_step(self, batch, batch_idx): metrics = self.shared_step(batch, batch_idx) return metrics def validation_epoch_end(self, outputs): acc = torch.stack([x['acc'] for x in outputs]).mean() loss = torch.stack([x['loss'] for x in outputs]).mean() log_metrics = {'val_loss': loss, 'val_acc': acc} self.log_dict(log_metrics, prog_bar=True) if self.trainer.sanity_checking: return log_metrics preds = torch.cat([x['pred'] for x in 
outputs]).view(-1) labels = torch.cat([x['label'] for x in outputs]).view(-1) accuracy_ = accuracy(preds, labels) precision_ = precision(preds, labels, average='macro', num_classes=self.num_classes) recall_ = recall(preds, labels, average='macro', num_classes=self.num_classes) classification_report_ = classification_report(labels.cpu().numpy(), preds.cpu().numpy(), target_names=self.hparams.class_labels) confusion_matrix_ = confusion_matrix(preds, labels, num_classes=self.num_classes) cm_df = pd.DataFrame(confusion_matrix_.cpu().numpy(), index=self.hparams.class_labels, columns=self.hparams.class_labels) print(f'Test Epoch {self.current_epoch}/{self.hparams.epochs-1}: F1 Score: {accuracy_:.5f}, Precision: {precision_:.5f}, Recall: {recall_:.5f}\n') print(f'Classification Report\n{classification_report_}') fig, ax = plt.subplots(figsize=(10, 8)) heatmap = sns.heatmap(cm_df, annot=True, ax=ax, fmt='d') # font size locs, labels = plt.xticks() plt.setp(labels, rotation=45) locs, labels = plt.yticks() plt.setp(labels, rotation=45) plt.show() print("\n") return log_metrics def test_step(self, batch, batch_idx): return self.validation_step(batch, batch_idx) def test_epoch_end(self, outputs): accuracy = torch.stack([x['acc'] for x in outputs]).mean() self.log('hp_metric', accuracy) self.log_dict({'test_acc': accuracy}, prog_bar=True) def configure_optimizers(self): optimizer = torch.optim.Adam(self.parameters(), lr=self.hparams.lr) lr_scheduler = { 'scheduler': torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=10, verbose=True), 'monitor': 'train_loss', 'name': 'scheduler' } return [optimizer], [lr_scheduler] from omegaconf import OmegaConf hparams = OmegaConf.create({ 'input_dim': len(sst_dataset.get_vocab()), 'embedding_dim': 128, 'num_layers': 2, 'hidden_dim': 64, 'dropout': 0.5, 'output_dim': len(StanfordSentimentTreeBank.get_labels()), 'class_labels': sst_dataset.raw_dataset_train.get_labels(), 'lr': 5e-4, 'epochs': 10, 'use_lr_finder': False }) sst_model = SSTModel(hparams) trainer = pl.Trainer(gpus=1, max_epochs=hparams.epochs, progress_bar_refresh_rate=1, reload_dataloaders_every_epoch=True) trainer.fit(sst_model, sst_dataset)
_____no_output_____
MIT
09_NLP_Evaluation/ClassificationEvaluation.ipynb
satyajitghana/TSAI-DeepNLP-END2.0
MultiGroupDirectLiNGAM Import and settings In this example, we need to import `numpy`, `pandas`, and `graphviz` in addition to `lingam`.
import numpy as np import pandas as pd import graphviz import lingam from lingam.utils import print_causal_directions, print_dagc, make_dot print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__]) np.set_printoptions(precision=3, suppress=True) np.random.seed(0)
['1.16.2', '0.24.2', '0.11.1', '1.5.4']
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Test data We generate two datasets consisting of 6 variables.
x3 = np.random.uniform(size=1000) x0 = 3.0*x3 + np.random.uniform(size=1000) x2 = 6.0*x3 + np.random.uniform(size=1000) x1 = 3.0*x0 + 2.0*x2 + np.random.uniform(size=1000) x5 = 4.0*x0 + np.random.uniform(size=1000) x4 = 8.0*x0 - 1.0*x2 + np.random.uniform(size=1000) X1 = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T ,columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5']) X1.head() m = np.array([[0.0, 0.0, 0.0, 3.0, 0.0, 0.0], [3.0, 0.0, 2.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 6.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [8.0, 0.0,-1.0, 0.0, 0.0, 0.0], [4.0, 0.0, 0.0, 0.0, 0.0, 0.0]]) make_dot(m) x3 = np.random.uniform(size=1000) x0 = 3.5*x3 + np.random.uniform(size=1000) x2 = 6.5*x3 + np.random.uniform(size=1000) x1 = 3.5*x0 + 2.5*x2 + np.random.uniform(size=1000) x5 = 4.5*x0 + np.random.uniform(size=1000) x4 = 8.5*x0 - 1.5*x2 + np.random.uniform(size=1000) X2 = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T ,columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5']) X2.head() m = np.array([[0.0, 0.0, 0.0, 3.5, 0.0, 0.0], [3.5, 0.0, 2.5, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 6.5, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [8.5, 0.0,-1.5, 0.0, 0.0, 0.0], [4.5, 0.0, 0.0, 0.0, 0.0, 0.0]]) make_dot(m)
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
We create a list variable that contains two datasets.
X_list = [X1, X2]
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Causal Discovery To run causal discovery for multiple datasets, we create a `MultiGroupDirectLiNGAM` object and call the `fit` method.
model = lingam.MultiGroupDirectLiNGAM() model.fit(X_list)
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Using the `causal_order_` property, we can see the causal ordering as a result of the causal discovery.
model.causal_order_
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Also, using the `adjacency_matrices_` property, we can see the adjacency matrices as a result of the causal discovery. As you can see from the following, the DAG in each dataset is correctly estimated.
print(model.adjacency_matrices_[0]) make_dot(model.adjacency_matrices_[0]) print(model.adjacency_matrices_[1]) make_dot(model.adjacency_matrices_[1])
[[ 0. 0. 0. 3.483 0. 0. ] [ 3.516 0. 2.466 0.165 0. 0. ] [ 0. 0. 0. 6.383 0. 0. ] [ 0. 0. 0. 0. 0. 0. ] [ 8.456 0. -1.471 0. 0. 0. ] [ 4.446 0. 0. 0. 0. 0. ]]
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
To compare, we run DirectLiNGAM on a single dataset created by concatenating the two datasets.
X_all = pd.concat([X1, X2]) print(X_all.shape) model_all = lingam.DirectLiNGAM() model_all.fit(X_all) model_all.causal_order_
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
You can see that the causal structure cannot be estimated correctly for a single dataset.
make_dot(model_all.adjacency_matrix_)
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Independence between error variables To check if the LiNGAM assumption is broken, we can get p-values of independence between error variables. The value in the i-th row and j-th column of the obtained matrix shows the p-value of the independence of the error variables $e_i$ and $e_j$.
p_values = model.get_error_independence_p_values(X_list) print(p_values[0]) print(p_values[1])
[[0. 0.545 0.908 0.285 0.525 0.728] [0.545 0. 0.84 0.814 0.086 0.297] [0.908 0.84 0. 0.032 0.328 0.026] [0.285 0.814 0.032 0. 0.904 0. ] [0.525 0.086 0.328 0.904 0. 0.237] [0.728 0.297 0.026 0. 0.237 0. ]]
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Bootstrapping In `MultiGroupDirectLiNGAM`, bootstrapping can be executed in the same way as in normal `DirectLiNGAM`.
results = model.bootstrap(X_list, n_sampling=100)
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Causal Directions The `bootstrap` method returns a list of multiple `BootstrapResult` objects, so we can get the result of bootstrapping from the list. We get the same number of results as the number of datasets, so we specify an index when we access the results. We can get the ranking of the causal directions extracted by `get_causal_direction_counts()`.
cdc = results[0].get_causal_direction_counts(n_directions=8, min_causal_effect=0.01) print_causal_directions(cdc, 100) cdc = results[1].get_causal_direction_counts(n_directions=8, min_causal_effect=0.01) print_causal_directions(cdc, 100)
x0 <--- x3 (100.0%) x1 <--- x0 (100.0%) x1 <--- x2 (100.0%) x2 <--- x3 (100.0%) x4 <--- x0 (100.0%) x4 <--- x2 (100.0%) x5 <--- x0 (100.0%) x1 <--- x3 (72.0%)
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Directed Acyclic Graphs Also, using the `get_directed_acyclic_graph_counts()` method, we can get the ranking of the extracted DAGs. In the following sample code, the `n_dags` option limits the output to the top 3 DAGs, and the `min_causal_effect` option limits it to causal directions with a coefficient of 0.01 or more.
dagc = results[0].get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01) print_dagc(dagc, 100) dagc = results[1].get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01) print_dagc(dagc, 100)
DAG[0]: 59.0% x0 <--- x3 x1 <--- x0 x1 <--- x2 x1 <--- x3 x2 <--- x3 x4 <--- x0 x4 <--- x2 x5 <--- x0 DAG[1]: 17.0% x0 <--- x3 x1 <--- x0 x1 <--- x2 x2 <--- x3 x4 <--- x0 x4 <--- x2 x5 <--- x0 DAG[2]: 10.0% x0 <--- x2 x0 <--- x3 x1 <--- x0 x1 <--- x2 x1 <--- x3 x2 <--- x3 x4 <--- x0 x4 <--- x2 x5 <--- x0
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Probability Using the `get_probabilities()` method, we can get the bootstrap probability of each causal relation.
prob = results[0].get_probabilities(min_causal_effect=0.01) print(prob)
[[0. 0. 0.08 1. 0. 0. ] [1. 0. 1. 0.08 0. 0.05] [0. 0. 0. 1. 0. 0. ] [0. 0. 0. 0. 0. 0. ] [1. 0. 0.94 0. 0. 0.2 ] [1. 0. 0. 0. 0.01 0. ]]
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Total Causal Effects Using the `get_total_causal_effects()` method, we can get the list of total causal effects. The method returns a dictionary-type variable, which we can display nicely by converting it to a pandas.DataFrame. We have also replaced the variable indices with labels below.
causal_effects = results[0].get_total_causal_effects(min_causal_effect=0.01) df = pd.DataFrame(causal_effects) labels = [f'x{i}' for i in range(X1.shape[1])] df['from'] = df['from'].apply(lambda x : labels[x]) df['to'] = df['to'].apply(lambda x : labels[x]) df
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
We can easily perform sorting operations with pandas.DataFrame.
df.sort_values('effect', ascending=False).head()
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
And with pandas.DataFrame, we can easily filter by keywords. The following code extracts the causal direction towards x1.
df[df['to']=='x1'].head()
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Because it holds the raw data of the total causal effect (the original data for calculating the median), it is possible to draw a histogram of the values of the causal effect, as shown below.
import matplotlib.pyplot as plt import seaborn as sns sns.set() %matplotlib inline from_index = 3 to_index = 0 plt.hist(results[0].total_effects_[:, to_index, from_index])
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam
Bootstrap Probability of Path Using the `get_paths()` method, we can explore all paths from any variable to any variable and calculate the bootstrap probability for each path. The path will be output as an array of variable indices. For example, the array `[3, 0, 1]` shows the path from variable X3 through variable X0 to variable X1.
from_index = 3 # index of x3 to_index = 1 # index of x1 pd.DataFrame(results[0].get_paths(from_index, to_index))
_____no_output_____
MIT
examples/MultiGroupDirectLiNGAM.ipynb
YanaZeng/lingam