kernel_id (int64, 24.2k–23.3M) | prompt (stringlengths 8–1.85M) | completion (stringlengths 1–182k) | comp_name (stringlengths 5–57)
---|---|---|---|
22,834,546 | <SOS> metric: categorization accuracy Kaggle data source: digit-recognizer<set_options> | %matplotlib inline
!pip install --upgrade 'scikit-learn>=1.0'
| Digit Recognizer |
22,834,546 | %matplotlib inline
!pip install --upgrade 'scikit-learn>=1.0'
<load_from_csv> | labeled_data = pd.read_csv("../input/train.csv")
print(labeled_data.shape)
labeled_data.head() | Digit Recognizer |
22,834,546 | labeled_data = pd.read_csv("../input/train.csv")
print(labeled_data.shape)
labeled_data.head()<load_from_csv> | unlabeled_data = pd.read_csv("../input/test.csv")
print(unlabeled_data.shape)
unlabeled_data.head() | Digit Recognizer |
22,834,546 | unlabeled_data = pd.read_csv("../input/test.csv")
print(unlabeled_data.shape)
unlabeled_data.head()<data_type_conversions> | X_labeled_raw =(labeled_data.iloc[:,1:].values ).astype('float32')
y_labeled_raw = labeled_data.iloc[:,0].values.astype('int32')
X_unlabeled_raw = unlabeled_data.values.astype('float32' ) | Digit Recognizer |
22,834,546 | X_labeled_raw =(labeled_data.iloc[:,1:].values ).astype('float32')
y_labeled_raw = labeled_data.iloc[:,0].values.astype('int32')
X_unlabeled_raw = unlabeled_data.values.astype('float32' )<prepare_output> | y_labeled = y_labeled_raw | Digit Recognizer |
22,834,546 | y_labeled = y_labeled_raw<split> | X_train, X_val, y_train, y_val = train_test_split(X_labeled, y_labeled, test_size=0.20, random_state=42, shuffle=True ) | Digit Recognizer |
22,834,546 | X_train, X_val, y_train, y_val = train_test_split(X_labeled, y_labeled, test_size=0.20, random_state=42, shuffle=True )<prepare_x_and_y> | training_data = tf.data.Dataset.from_tensor_slices(( X_train, y_train))
validation_data = tf.data.Dataset.from_tensor_slices(( X_val, y_val))
print(training_data.element_spec ) | Digit Recognizer |
22,834,546 | training_data = tf.data.Dataset.from_tensor_slices(( X_train, y_train))
validation_data = tf.data.Dataset.from_tensor_slices(( X_val, y_val))
print(training_data.element_spec )<import_modules> | print(device_lib.list_local_devices() ) | Digit Recognizer |
22,834,546 | print(device_lib.list_local_devices() )<choose_model_class> | def get_model() :
model = Sequential([
Input(shape=(28,28,1)) ,
data_augmentation,
Convolution2D(32,(5,5), activation='relu'),
BatchNormalization(axis=1),
Convolution2D(32,(5,5), activation='relu'),
MaxPooling2D() ,
Dropout(0.25),
Convolution2D(64,(3,3), activation='relu'),
BatchNormalization(axis=1),
Convolution2D(64,(3,3), activation='relu'),
MaxPooling2D() ,
Dropout(0.25),
Flatten() ,
Dense(256, activation='relu'),
Dropout(0.25),
Dense(10, activation='softmax')
])
model.compile(optimizer=tf.keras.optimizers.Adam() ,
loss=tf.keras.losses.SparseCategoricalCrossentropy() ,
metrics=['accuracy'])
return model
model = get_model()
model.summary() | Digit Recognizer |
22,834,546 | def get_model() :
model = Sequential([
Input(shape=(28,28,1)) ,
data_augmentation,
Convolution2D(32,(5,5), activation='relu'),
BatchNormalization(axis=1),
Convolution2D(32,(5,5), activation='relu'),
MaxPooling2D() ,
Dropout(0.25),
Convolution2D(64,(3,3), activation='relu'),
BatchNormalization(axis=1),
Convolution2D(64,(3,3), activation='relu'),
MaxPooling2D() ,
Dropout(0.25),
Flatten() ,
Dense(256, activation='relu'),
Dropout(0.25),
Dense(10, activation='softmax')
])
model.compile(optimizer=tf.keras.optimizers.Adam() ,
loss=tf.keras.losses.SparseCategoricalCrossentropy() ,
metrics=['accuracy'])
return model
model = get_model()
model.summary()<train_model> | EPOCHS=60
BATCH=64
autotune = tf.data.AUTOTUNE
train_data_batches = training_data.shuffle(buffer_size=40000 ).batch(BATCH ).prefetch(buffer_size=autotune)
val_data_batches = validation_data.shuffle(buffer_size=10000 ).batch(BATCH ).prefetch(buffer_size=autotune)
history = model.fit(train_data_batches, epochs=EPOCHS,
validation_data=val_data_batches, verbose=1 ) | Digit Recognizer |
22,834,546 | EPOCHS=60
BATCH=64
autotune = tf.data.AUTOTUNE
train_data_batches = training_data.shuffle(buffer_size=40000 ).batch(BATCH ).prefetch(buffer_size=autotune)
val_data_batches = validation_data.shuffle(buffer_size=10000 ).batch(BATCH ).prefetch(buffer_size=autotune)
history = model.fit(train_data_batches, epochs=EPOCHS,
validation_data=val_data_batches, verbose=1 )<predict_on_test> | probabilities = model.predict(X_val)
y_predicted = np.argmax(probabilities, axis=1)
ConfusionMatrixDisplay.from_predictions(y_val, y_predicted)
| Digit Recognizer |
22,834,546 | <save_to_csv><EOS> | probabilities = model.predict(X_test, verbose=0)
predictions = np.argmax(probabilities, axis=1)
sample_submission = pd.read_csv('../input/sample_submission.csv')
sample_submission['Label'] = predictions
sample_submission.to_csv("submission.csv",index=False ) | Digit Recognizer |
22,159,074 | <SOS> metric: categorization accuracy Kaggle data source: digit-recognizer<import_modules> | import pandas as pd | Digit Recognizer |
22,159,074 | import pandas as pd<load_from_csv> | mnist_test = pd.read_csv("/kaggle/input/mnist-fashion-data-classification/mnist_test.csv")
mnist_train = pd.read_csv("/kaggle/input/mnist-fashion-data-classification/mnist_train.csv")
| Digit Recognizer |
22,159,074 | mnist_test = pd.read_csv("/kaggle/input/mnist-fashion-data-classification/mnist_test.csv")
mnist_train = pd.read_csv("/kaggle/input/mnist-fashion-data-classification/mnist_train.csv")
<load_from_csv> | sample_submission = pd.read_csv("/kaggle/input/digit-recognizer/sample_submission.csv")
train = pd.read_csv("/kaggle/input/digit-recognizer/train.csv")
test = pd.read_csv("/kaggle/input/digit-recognizer/test.csv" ) | Digit Recognizer |
22,159,074 | sample_submission = pd.read_csv("/kaggle/input/digit-recognizer/sample_submission.csv")
train = pd.read_csv("/kaggle/input/digit-recognizer/train.csv")
test = pd.read_csv("/kaggle/input/digit-recognizer/test.csv" )<feature_engineering> | test['dataset'] = 'test' | Digit Recognizer |
22,159,074 | test['dataset'] = 'test'<feature_engineering> | train['dataset'] = 'train' | Digit Recognizer |
22,159,074 | train['dataset'] = 'train'<concatenate> | dataset = pd.concat([train.drop('label', axis=1), test] ).reset_index() | Digit Recognizer |
22,159,074 | dataset = pd.concat([train.drop('label', axis=1), test] ).reset_index()<concatenate> | mnist = pd.concat([mnist_train, mnist_test] ).reset_index(drop=True)
labels = mnist['label'].values
mnist.drop('label', axis=1, inplace=True)
mnist.columns = cols | Digit Recognizer |
22,159,074 | mnist = pd.concat([mnist_train, mnist_test] ).reset_index(drop=True)
labels = mnist['label'].values
mnist.drop('label', axis=1, inplace=True)
mnist.columns = cols<sort_values> | idx_mnist = mnist.sort_values(by=list(mnist.columns)).index
dataset_from = dataset.sort_values(by=list(mnist.columns)) ['dataset'].values
original_idx = dataset.sort_values(by=list(mnist.columns)) ['index'].values | Digit Recognizer |
22,159,074 | idx_mnist = mnist.sort_values(by=list(mnist.columns)).index
dataset_from = dataset.sort_values(by=list(mnist.columns)) ['dataset'].values
original_idx = dataset.sort_values(by=list(mnist.columns)) ['index'].values<feature_engineering> | for i in range(len(idx_mnist)) :
if dataset_from[i] == 'test':
sample_submission.loc[original_idx[i], 'Label'] = labels[idx_mnist[i]] | Digit Recognizer |
22,159,074 | <save_to_csv><EOS> | sample_submission.to_csv('submission.csv', index=False ) | Digit Recognizer |
19,567,958 | <SOS> metric: categorization accuracy Kaggle data source: digit-recognizer<define_variables> | warnings.filterwarnings('ignore')
%matplotlib inline
| Digit Recognizer |
19,567,958 | input_path = Path('/kaggle/input/tabular-playground-series-jan-2021/')<load_from_csv> | train = pd.read_csv('../input/digit-recognizer/train.csv')
test = pd.read_csv('../input/digit-recognizer/test.csv')
submission = pd.read_csv('../input/digit-recognizer/sample_submission.csv') | Digit Recognizer |
19,567,958 | train = pd.read_csv(input_path / 'train.csv', index_col='id')
display(train.head() )<load_from_csv> | X_train = train.drop(['label'], axis=1)
y_train = train['label'] | Digit Recognizer |
19,567,958 | test = pd.read_csv(input_path / 'test.csv', index_col='id')
display(test.head() )<drop_column> | X_train /= 255.0
test /= 255.0 | Digit Recognizer |
19,567,958 | target = train.pop('target' )<split> | X_train1 = X_train.values.reshape(-1, 28, 28, 1)
test = test.values.reshape(-1, 28, 28, 1 ) | Digit Recognizer |
19,567,958 | X_train, X_test, y_train, y_test = train_test_split(train, target, train_size=0.80 )<import_modules> | y_train = to_categorical(y_train, num_classes=10 ) | Digit Recognizer |
19,567,958 | from sklearn.dummy import DummyRegressor
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.linear_model import SGDRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.linear_model import LassoLars
from sklearn.linear_model import ARDRegression
from sklearn.linear_model import PassiveAggressiveRegressor
from sklearn.linear_model import TheilSenRegressor
from sklearn.linear_model import LinearRegression
from lightgbm import LGBMRegressor
from xgboost import XGBRegressor
<compute_train_metric> | X_train, X_val , y_train, y_val = train_test_split(X_train1, y_train, test_size=0.2 ) | Digit Recognizer |
19,567,958 | def FitAndScoreModel(df,name, model,X_tr,y_tr,X_tst,y_tst):
model.fit(X_tr,y_tr)
Y_pred = model.predict(X_tst)
score=mean_squared_error(y_tst, Y_pred, squared=False)
df = df.append({'Model':name, 'MSE': score},ignore_index = True)
return df<create_dataframe> | model = keras.models.Sequential()
model.add(keras.layers.Conv2D(filters = 64, kernel_size=(5,5), padding='same', activation='relu', input_shape=(28, 28, 1)))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Conv2D(filters = 64, kernel_size=(5,5), padding='same', activation='relu'))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.MaxPool2D(pool_size=(2,2)))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Conv2D(filters = 64, kernel_size=(3,3), padding='same', activation='relu'))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Conv2D(filters = 64, kernel_size=(3,3), padding='same', activation='relu'))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.MaxPool2D(pool_size=(2,2)))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(256, activation='relu'))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(256, activation='relu'))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(10, activation='softmax')) | Digit Recognizer |
19,567,958 | dResults = pd.DataFrame(columns = ['Model', 'MSE'] )<train_model> | model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=["accuracy"] ) | Digit Recognizer |
19,567,958 | classifiers = [
DummyRegressor(strategy='median'),
SGDRegressor() ,
BayesianRidge() ,
LassoLars() ,
ARDRegression() ,
PassiveAggressiveRegressor() ,
LinearRegression() ,
LGBMRegressor() ,
RandomForestRegressor() ,
XGBRegressor() ]
for item in classifiers:
print(item)
clf = item
dResults=FitAndScoreModel(dResults,item,item,X_train,y_train,X_test,y_test )<sort_values> | history = model.fit(X_train, y_train, epochs=25, validation_data=(X_val, y_val)) | Digit Recognizer |
19,567,958 | dResults.sort_values(by='MSE', ascending=True,inplace=True)
dResults.set_index('MSE',inplace=True)
dResults.head(dResults.shape[0] )<init_hyperparams> | y_pred = model.predict(test ) | Digit Recognizer |
19,567,958 | <init_hyperparams><EOS> | submission['Label'] = results
submission.to_csv('submission.csv', index=False ) | Digit Recognizer |
19,478,081 | <SOS> metric: categorization accuracy Kaggle data source: digit-recognizer<prepare_x_and_y> | import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from skimage import color
from skimage import measure
from skimage.filters import try_all_threshold
from skimage.filters import threshold_otsu
from skimage.filters import threshold_local
import keras
from keras import Sequential
from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, Dropout, BatchNormalization
from keras.optimizers import Adam
from keras.utils import to_categorical
from keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint | Digit Recognizer |
19,478,081 | n_fold = 10
folds = KFold(n_splits=n_fold, shuffle=True, random_state=42)
train_columns = train.columns.values
oof = np.zeros(len(train))
LGBMpredictions = np.zeros(len(test))
feature_importance_df = pd.DataFrame()
for fold_,(trn_idx, val_idx)in enumerate(folds.split(train, target.values)) :
strLog = "fold {}".format(fold_)
print(strLog)
X_tr, X_val = train.iloc[trn_idx], train.iloc[val_idx]
y_tr, y_val = target.iloc[trn_idx], target.iloc[val_idx]
model = LGBMRegressor(**params, n_estimators = 20000)
model.fit(X_tr, y_tr,
eval_set=[(X_tr, y_tr),(X_val, y_val)], eval_metric='rmse',
verbose=1000, early_stopping_rounds=400)
oof[val_idx] = model.predict(X_val, num_iteration=model.best_iteration_)
fold_importance_df = pd.DataFrame()
fold_importance_df["Feature"] = train_columns
fold_importance_df["importance"] = model.feature_importances_[:len(train_columns)]
fold_importance_df["fold"] = fold_ + 1
feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)
LGBMpredictions += model.predict(test, num_iteration=model.best_iteration_)/ folds.n_splits
<choose_model_class> | df_train = pd.read_csv('../input/digit-recognizer/train.csv')
df_test = pd.read_csv('../input/digit-recognizer/test.csv') | Digit Recognizer |
19,478,081 |
<init_hyperparams> | y_train = df_train['label']
X_train = df_train.drop('label', axis = 1)
X_test = np.array(df_test ) | Digit Recognizer |
19,478,081 | XGparams={'colsample_bytree': 0.7,
'learning_rate': 0.01,
'max_depth': 7,
'min_child_weight': 1,
'n_estimators': 4000,
'nthread': 4,
'objective': 'reg:squarederror',
'subsample': 0.7}<train_model> | y_train = to_categorical(y_train, num_classes = 10)
y_train.shape | Digit Recognizer |
19,478,081 | n_fold = 10
folds = KFold(n_splits=n_fold, shuffle=True, random_state=42)
train_columns = train.columns.values
oof = np.zeros(len(train))
XGpredictions = np.zeros(len(test))
feature_importance_df = pd.DataFrame()
for fold_,(trn_idx, val_idx)in enumerate(folds.split(train, target.values)) :
strLog = "fold {}".format(fold_)
print(strLog)
X_tr, X_val = train.iloc[trn_idx], train.iloc[val_idx]
y_tr, y_val = target.iloc[trn_idx], target.iloc[val_idx]
model = XGBRegressor(**XGparams)
model.fit(X_tr, y_tr,
eval_set=[(X_tr, y_tr),(X_val, y_val)], verbose=1000, early_stopping_rounds=400)
oof[val_idx] = model.predict(X_val, ntree_limit=model.best_iteration)
preds = model.predict(test, ntree_limit=model.best_iteration)
fold_importance_df = pd.DataFrame()
fold_importance_df["Feature"] = train_columns
fold_importance_df["importance"] = model.feature_importances_[:len(train_columns)]
fold_importance_df["fold"] = fold_ + 1
feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)
XGpredictions += model.predict(test, ntree_limit=model.best_iteration)/ folds.n_splits
<load_from_csv> | X_train, X_val, y_train, y_val = train_test_split(X_train,
y_train,
test_size=0.25,
random_state=1 ) | Digit Recognizer |
19,478,081 | submission = pd.read_csv(input_path / 'sample_submission.csv', index_col='id')
submission.reset_index(inplace=True)
submission = submission.rename(columns = {'index':'id'} )<save_to_csv> | kernel_ =(5,5 ) | Digit Recognizer |
19,478,081 | LGBMsubmission=submission.copy()
LGBMsubmission['target'] = LGBMpredictions
LGBMsubmission.to_csv('submission_LGBM.csv', header=True, index=False)
LGBMsubmission.head()<save_to_csv> | model = Sequential()
model.add(Conv2D(filters = 32, kernel_size =(5,5),padding = 'Same',
activation ='relu', input_shape =(28, 28, 1)))
model.add(Conv2D(filters = 32, kernel_size =(5,5),padding = 'Same', activation ='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters = 64, kernel_size =(3,3),padding = 'Same', activation ='relu'))
model.add(Conv2D(filters = 64, kernel_size =(3,3),padding = 'Same', activation ='relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation = "relu"))
model.add(Dense(256, activation = 'relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation = "softmax"))
model.summary() | Digit Recognizer |
19,478,081 | XGBoostsubmission=submission.copy()
XGBoostsubmission['target'] = XGpredictions
XGBoostsubmission.to_csv('submission_XGBoost.csv', header=True, index=False)
XGBoostsubmission.head()<save_to_csv> | aug = ImageDataGenerator(
rotation_range=10,
zoom_range = 0.1,
width_shift_range=0.1,
height_shift_range=0.1)
gen_train = aug.flow(X_train, y_train, batch_size=64)
gen_val = aug.flow(X_val, y_val, batch_size=64 ) | Digit Recognizer |
19,478,081 | EnsembledSubmission=submission.copy()
EnsembledSubmission['target'] =(LGBMpredictions*0.72 + XGpredictions*0.28)
EnsembledSubmission.to_csv('ensembled_submission.csv', header=True, index=False)
EnsembledSubmission.head()<import_modules> | model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'] ) | Digit Recognizer |
19,478,081 | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import optuna
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import mean_squared_error
from sklearn.base import TransformerMixin
import xgboost as xgb
import lightgbm as lgb<load_from_csv> | checkpoint = tf.keras.callbacks.ModelCheckpoint("weights.hdf5",
monitor='val_accuracy',
verbose=1,
save_best_only=True)
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss',
factor=0.5,
patience=4,
min_lr=0.00005,
verbose=1)
early_stop = tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True ) | Digit Recognizer |
19,478,081 | df = pd.read_csv('../input/tabular-playground-series-jan-2021/train.csv')
df.head()<compute_train_metric> | training = model.fit(gen_train, epochs=100, batch_size=512,
validation_data=gen_val,
callbacks = [checkpoint, reduce_lr], verbose = 1 ) | Digit Recognizer |
19,478,081 | def objective_xgb(trial, data, target):
parameters = {
'tree_method': 'gpu_hist',
'lambda': trial.suggest_loguniform('lambda', 1e-3, 10.0),
'alpha': trial.suggest_loguniform('alpha', 1e-3, 10.0),
'colsample_bytree': trial.suggest_categorical('colsample_bytree', [0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9,1.0]),
'subsample': trial.suggest_categorical('subsample', [0.4, 0.5, 0.6, 0.7, 0.8, 1.0]),
'learning_rate': trial.suggest_categorical('learning_rate', [0.008, 0.009, 0.01, 0.012, 0.014, 0.016, 0.018, 0.02]),
'n_estimators': 1000,
'max_depth': trial.suggest_categorical('max_depth', [5, 7, 9, 11, 13, 15, 17, 20]),
'random_state': trial.suggest_categorical('random_state', [24, 48, 2020]),
'min_child_weight': trial.suggest_int('min_child_weight', 1, 300),
}
folds = KFold(n_splits=5, random_state=1337, shuffle=True)
rmse = []
for train_idx, test_idx in folds.split(data, target):
X_train, X_test = X.iloc[train_idx], X.iloc[test_idx]
y_train, y_test = y.iloc[train_idx], y.iloc[test_idx]
model = xgb.XGBRegressor(**parameters)
model.fit(X_train, y_train, eval_set=[(X_test, y_test)], early_stopping_rounds=100, verbose=False)
rmse.append(mean_squared_error(y_test, model.predict(X_test), squared=False))
print(f'Mean RMSE for all the folds: {np.mean(rmse)}')
return np.mean(rmse )<init_hyperparams> | model.load_weights("weights.hdf5" ) | Digit Recognizer |
19,478,081 | xgb_parameters = {
'objective': 'reg:squarederror',
'tree_method': 'gpu_hist',
'n_estimators': 1000,
'lambda': 7.610705234008646,
'alpha': 0.0019377246932580476,
'colsample_bytree': 0.5,
'subsample': 0.7,
'learning_rate': 0.012,
'max_depth': 20,
'random_state': 24,
'min_child_weight': 229
}<split> | y_test = model.predict(X_test)
y_pred = np.argmax(y_test, axis=1 ) | Digit Recognizer |
19,478,081 | <init_hyperparams><EOS> | output_csv = {"ImageId":[*range(1,1+len(y_pred)) ], "Label":y_pred}
Y_pre = pd.DataFrame(output_csv)
Y_pre.set_index("ImageId", drop=True, append=False, inplace=True)
Y_pre.to_csv("/kaggle/working/submission.csv" ) | Digit Recognizer |
18,907,580 | <SOS> metric: categorization accuracy Kaggle data source: digit-recognizer<categorify> | import numpy as np
import pandas as pd
import os
import torch
import torch.nn as nn
import torch.optim as optim
from PIL import Image
from matplotlib import pyplot as plt
from torch.utils.data import Dataset,DataLoader
from torchvision import transforms as T
from torchvision import models
import tqdm
from sklearn.metrics import f1_score,roc_auc_score,accuracy_score,confusion_matrix | Digit Recognizer |
18,907,580 | class NonLinearTransformer(TransformerMixin):
def __init__(self):
pass
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
X = X.drop(columns=['id'])
for c in X.columns:
if c == 'target':
continue
X[f'{c}^2'] = X[c] ** 2
return X<define_search_model> | train_df=pd.read_csv("../input/digit-recognizer/train.csv")
test_df=pd.read_csv("../input/digit-recognizer/test.csv") | Digit Recognizer |
18,907,580 | pipe_xgb = Pipeline([
('custom', NonLinearTransformer()),
('scaling', StandardScaler()),
('regression', xgb.XGBRegressor(**xgb_parameters))
])
pipe_lgb = Pipeline([
('custom', NonLinearTransformer()),
('scaling', StandardScaler()),
('regression', lgb.LGBMRegressor(**lgb_parameters))
] )<load_from_csv> | def get_image(data_df,idx):
return Image.fromarray(np.uint8(np.reshape(data_df.iloc[idx][data_df.columns[-784:]].to_numpy() ,(28,28)))).convert('RGB')
| Digit Recognizer |
18,907,580 | df_train = pd.read_csv('../input/tabular-playground-series-jan-2021/train.csv')
df_predict = pd.read_csv('../input/tabular-playground-series-jan-2021/test.csv')<prepare_x_and_y> | class TrainDataSet(Dataset):
def __init__(self,data_df,transforms=T.ToTensor()):
self.data_df=data_df
self.transform=transforms
def __len__(self):
return self.data_df.shape[0]
def __getitem__(self,idx):
image=self.transform(get_image(self.data_df,idx))
label=torch.tensor(self.data_df.label.iloc[idx],dtype=torch.long)
return image,label | Digit Recognizer |
18,907,580 | X, y = df_train.drop(columns=['target']), df_train['target']<split> | class TestDataSet(TrainDataSet):
def __getitem__(self,idx):
image=self.transform(get_image(self.data_df,idx))
return image | Digit Recognizer |
18,907,580 | X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1337 )<compute_test_metric> | def create_model() :
model = models.resnet18(pretrained=True)
num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, 10)
return model | Digit Recognizer |
18,907,580 | pipe_xgb.fit(X_train, y_train)
pipe_lgb.fit(X_train, y_train)
print(f'XGB Score: {pipe_xgb.score(X_test, y_test)}, LGB Score: {pipe_lgb.score(X_test, y_test)}')
print(f'XGB RMSE: {mean_squared_error(y_test, pipe_xgb.predict(X_test), squared=False)}, LGB RMSE: {mean_squared_error(y_test, pipe_lgb.predict(X_test), squared=False)}' )<predict_on_test> | transform=T.Compose([
T.Resize(( 256,256)) ,
T.ToTensor() ,
T.Normalize(( 0.485, 0.456, 0.406),(0.229, 0.224, 0.225))
] ) | Digit Recognizer |
18,907,580 | def ensemble_predict(X):
target_xgb = pipe_xgb.predict(X)
target_lgb = pipe_lgb.predict(X)
return [0.85 * x + 0.15 * l for(x, l)in zip(target_xgb, target_lgb)]<compute_test_metric> | def train_once(model,dataloader,criterion,optimizer,device):
total_loss=0
n_total=0
criterion.reduction="sum"
model.train()
for i,(images,labels)in enumerate(tqdm.tqdm(dataloader)) :
optimizer.zero_grad()
images=images.to(device)
labels=labels.to(device)
outputs=model(images)
loss=criterion(outputs,labels)
total_loss+=loss.item()
n_total+=labels.shape[0]
loss.backward()
optimizer.step()
return total_loss/n_total | Digit Recognizer |
18,907,580 | print(f'Ensemble RMSE: {mean_squared_error(y_test, ensemble_predict(X_test), squared=False)}' )<train_model> | class Validation_Metrics(object):
def __init__(self,activation_func=nn.Softmax(dim=1)) :
self.predictions=[]
self.labels=[]
self.activation_func=activation_func
self.collapsed=False
def update(self,model_outputs,labels):
if not self.collapsed:
self.predictions.append(self.activation_func(model_outputs ).detach())
self.labels.append(labels.detach())
else:
raise ValueError('Error, one cannot add further values to a logger once it has been collapsed')
def collapse(self):
if self.collapsed:
pass
else:
self.predictions=torch.cat(self.predictions ).cpu().numpy()
self.labels=torch.cat(self.labels ).cpu().numpy()
self.collapsed=True
def Confusion_matrix(self):
self.collapse()
pred=np.argmax(self.predictions,axis=1)
labels=self.labels
return confusion_matrix(labels,pred)
def AUC(self):
self.collapse()
pred=self.predictions
labels=np.zeros(pred.shape)
labels[np.arange(labels.shape[0]),self.labels]=1.0
aucs = []
for i in range(labels.shape[1]):
aucs.append(roc_auc_score(labels[:, i], pred[:, i]))
return aucs
def F1_score(self):
self.collapse()
pred=np.argmax(self.predictions,axis=1)
labels=self.labels
return f1_score(labels, pred, average=None)
def Accuracy(self):
self.collapse()
pred=np.argmax(self.predictions,axis=1)
labels=self.labels
return accuracy_score(labels, pred)
| Digit Recognizer |
18,907,580 | pipe_xgb.fit(X, y)
pipe_lgb.fit(X, y )<save_to_csv> | def val(model,dataloader,criterion,device):
total_loss=0
n_total=0
criterion.reduction="sum"
Metrics=Validation_Metrics()
model.eval()
with torch.no_grad() :
for images,labels in tqdm.tqdm(dataloader):
images=images.to(device)
labels=labels.to(device)
outputs=model(images)
loss=criterion(outputs,labels)
Metrics.update(outputs,labels)
total_loss+=loss.item()
n_total+=labels.shape[0]
return total_loss/n_total,Metrics
| Digit Recognizer |
18,907,580 | target = pd.DataFrame({
'id': df_predict['id'], 'target': ensemble_predict(df_predict)
})
target.to_csv('submission.csv', index=False )<define_variables> | n_folds=5 | Digit Recognizer |
18,907,580 | PATH = '/kaggle/input/tabular-playground-series-jan-2021/'<load_from_csv> | train_df.insert(1,"fold",np.random.randint(1,n_folds+1,size=train_df.shape[0])) | Digit Recognizer |
18,907,580 | train = pd.read_csv(PATH+'train.csv')
test = pd.read_csv(PATH+'test.csv')
submission = pd.read_csv(PATH+'sample_submission.csv' )<install_modules> | def Get_Train_Val_Set(fold_i,transform=transform):
train_set=TrainDataSet(train_df[train_df.fold!=fold_i],transforms=transform)
test_set=TrainDataSet(train_df[train_df.fold==fold_i],transforms=transform)
return train_set, test_set | Digit Recognizer |
18,907,580 | !pip install pycaret<import_modules> | USE_CUDA = torch.cuda.is_available()
device = torch.device("cuda" if USE_CUDA else "cpu" ) | Digit Recognizer |
18,907,580 | from pycaret.regression import *<define_variables> | criterion=nn.CrossEntropyLoss()
optimizer_name="Adam"
optimizer_parameters={"lr":0.0001}
epochs=1 | Digit Recognizer |
18,907,580 | reg = setup(data=train, target='target', silent=True, session_id=2021 )<choose_model_class> | def create_optimizer(model,optimizer_name,optimizer_parameters):
if optimizer_name=="SGD":
return optim.SGD(model.parameters() ,**optimizer_parameters)
elif optimizer_name=="Adam":
return optim.Adam(model.parameters() ,**optimizer_parameters ) | Digit Recognizer |
18,907,580 | blended = blend_models(best_3, fold=5 )<predict_on_test> | Best_val_accuracy=0
for fold in range(1,n_folds+1):
print(f"Training fold {fold}")
model=create_model()
model.to(device)
optimizer=create_optimizer(model,optimizer_name,optimizer_parameters)
TrainSet,ValSet=Get_Train_Val_Set(fold)
TrainLoader=DataLoader(TrainSet, batch_size=256)
ValLoader=DataLoader(ValSet, batch_size=1024)
for epoch in range(epochs):
train_loss=train_once(model,TrainLoader,criterion,optimizer,device)
print(f"For epoch {epoch+1}, the Train Loss was: {train_loss}")
val_loss,Metrics=val(model,ValLoader,criterion,device)
print(f"The Val Loss was {val_loss}, and the val accuracy was {Metrics.Accuracy() }")
if Metrics.Accuracy() >Best_val_accuracy:
print("New Best, saving")
torch.save(model.state_dict() ,f"fold{fold}Best.pt")
| Digit Recognizer |
18,907,580 | pred_holdout = predict_model(blended )<categorify> | optimizer=create_optimizer(model,optimizer_name,optimizer_parameters)
optimizer | Digit Recognizer |
18,907,580 | final_model = finalize_model(blended )<predict_on_test> | model_outputs=torch.zeros(( test_df.shape[0],10)).to(device)
plot_every=1 | Digit Recognizer |
18,907,580 | predictions = predict_model(final_model, data=test )<prepare_output> | TestSet=TestDataSet(test_df,transforms=transform ) | Digit Recognizer |
18,907,580 | submission['target'] = predictions['Label']<save_to_csv> | for fold in range(1,n_folds+1):
print(f"Running on fold {fold}")
model=create_model()
model.load_state_dict(torch.load(f"fold{fold}Best.pt"))
model.to(device)
model.eval()
for i,image in enumerate(tqdm.tqdm(TestSet)) :
image=torch.unsqueeze(image,0 ).to(device)
outputs=model(image)
model_outputs[i]+=outputs.detach() [0] | Digit Recognizer |
18,907,580 | submission.to_csv('submission_0116_baseline.csv', index=False)<import_modules> | submission_df=pd.read_csv("../input/digit-recognizer/sample_submission.csv") | Digit Recognizer |
18,907,580 | import lightgbm as lgb
import optuna.integration.lightgbm as oplgb
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
import seaborn as sns<load_from_csv> | submission_df["Label"]=np.argmax(model_outputs.cpu().numpy() ,1 ) | Digit Recognizer |
18,907,580 | df_train = pd.read_csv("/kaggle/input/tabular-playground-series-jan-2021/train.csv")
df_test = pd.read_csv("/kaggle/input/tabular-playground-series-jan-2021/test.csv")
df_sample = pd.read_csv("/kaggle/input/tabular-playground-series-jan-2021/sample_submission.csv" )<drop_column> | submission_df.to_csv("submission.csv", index=False ) | Digit Recognizer |
18,907,580 | <define_variables><EOS> | pd.read_csv("submission.csv" ) | Digit Recognizer |
19,223,927 | <SOS> metric: categorization accuracy Kaggle data source: digit-recognizer<prepare_x_and_y> | for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
| Digit Recognizer |
19,223,927 | train_x = df_train[feature_cols]
train_y = df_train.target
test_x = df_test<choose_model_class> | np.random.seed(1)
df_train = pd.read_csv("/kaggle/input/digit-recognizer/train.csv")
df_train = df_train.iloc[np.random.permutation(len(df_train)) ] | Digit Recognizer |
19,223,927 | folds = KFold(n_splits=5, shuffle=True, random_state=2021 )<train_model> | sample_size = df_train.shape[0]
validation_size = int(df_train.shape[0] * 0.1)
train_x = np.asarray(df_train.iloc[:sample_size - validation_size:, 1:] ).reshape([sample_size - validation_size, 28, 28, 1])
train_y = np.asarray(df_train.iloc[:sample_size - validation_size:, 0] ).reshape([sample_size - validation_size, 1])
val_x = np.asarray(df_train.iloc[sample_size - validation_size:,1:] ).reshape([validation_size,28,28,1])
val_y = np.asarray(df_train.iloc[sample_size - validation_size:, 0] ).reshape([validation_size, 1] ) | Digit Recognizer |
19,223,927 | class FoldsAverageLGBM:
def __init__(self, folds):
self.folds = folds
self.models = []
def fit(self, lgb_params, train_x, train_y):
oof_preds = np.zeros_like(train_y)
self.train_x = train_x.values
self.train_y = train_y.values
for tr_idx, va_idx in tqdm(folds.split(train_x)) :
tr_x, va_x = self.train_x[tr_idx], self.train_x[va_idx]
tr_y, va_y = self.train_y[tr_idx], self.train_y[va_idx]
lgb_train_dataset = lgb.Dataset(tr_x, tr_y)
lgb_valid_dataset = lgb.Dataset(va_x, va_y)
model = lgb.train(lgb_params, lgb_train_dataset, valid_sets=[lgb_valid_dataset], verbose_eval=100)
self.models.append(model)
oof_pred = model.predict(va_x)
oof_preds[va_idx] = oof_pred
self.oof_preds = oof_preds
def predict(self, test_x):
preds = []
for model in tqdm(self.models):
pred = model.predict(test_x)
preds.append(pred)
preds = np.mean(preds, axis=0)
return preds<init_hyperparams> | df_test = pd.read_csv("/kaggle/input/digit-recognizer/test.csv")
test_x = np.asarray(df_test.iloc[:, :] ).reshape([-1, 28, 28, 1] ) | Digit Recognizer |
19,223,927 | best_lgb_params = {
'seed': 2021,
'objective': 'regression',
'metric': 'rmse',
'verbosity': -1,
'feature_pre_filter': False,
'lambda_l1': 6.540486456085813,
'lambda_l2': 0.01548480538099245,
'num_leaves': 256,
'feature_fraction': 0.52,
'bagging_fraction': 0.6161835249194311,
'bagging_freq': 7,
'min_child_samples': 20
}
best_lgb_params["learning_rate"] = 0.001
best_lgb_params["early_stopping_round"] = 1000
best_lgb_params["num_iterations"] = 20000<statistical_test> | train_x = train_x/255
val_x = val_x/255
test_x = test_x/255 | Digit Recognizer |
19,223,927 | folds_average_lgbm = FoldsAverageLGBM(folds )<train_model> | model = models.Sequential() | Digit Recognizer |
19,223,927 | folds_average_lgbm.fit(best_lgb_params, train_x, train_y )<compute_test_metric> | model.add(Conv2D(32,3, padding ="same",input_shape=(28, 28, 1)))
model.add(LeakyReLU())
model.add(Conv2D(32,3, padding ="same"))
model.add(LeakyReLU())
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.25))
model.add(Conv2D(64,3, padding ="same"))
model.add(LeakyReLU())
model.add(Conv2D(64,3, padding ="same"))
model.add(LeakyReLU())
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256,activation='relu'))
model.add(Dense(32,activation='relu'))
model.add(Dense(10,activation="sigmoid")) | Digit Recognizer |
19,223,927 | np.sqrt(mean_squared_error(df_train.target, folds_average_lgbm.oof_preds))<predict_on_test> | initial_lr = 0.001
loss = "sparse_categorical_crossentropy"
model.compile(Adam(lr = initial_lr), loss = loss, metrics = ['accuracy'])
model.summary() | Digit Recognizer |
19,223,927 | y_pred = folds_average_lgbm.predict(test_x )<save_to_csv> | epochs = 20
batch_size = 256
history_1 = model.fit(train_x, train_y, batch_size = batch_size, epochs = epochs, validation_data =(val_x, val_y)) | Digit Recognizer |
19,223,927 | sub = df_sample.copy()
sub["target"] = y_pred
sub.to_csv("submission_lgbm_1.csv", index=False)
sub.head()<load_from_csv> | val_p = np.argmax(model.predict(val_x), axis = 1)
error = 0
confusion_matrix = np.zeros([10, 10])
for i in range(val_x.shape[0]):
confusion_matrix[val_y[i], val_p[i]] += 1
if val_y[i] != val_p[i]:
error += 1
print("Confusion Matrix:
", confusion_matrix)
print("
Errors in validation set: ", error)
print("
Error Persentage: ",(error * 100)/ val_p.shape[0])
print("
Accuracy: ", 100 -(error * 100)/ val_p.shape[0])
print("
Validation set Shape: ", val_p.shape[0] ) | Digit Recognizer |
19,223,927 | train = pd.read_csv(input_path / 'train.csv', index_col='id')
display(train.head() )<load_from_csv> | datagen = ImageDataGenerator(
featurewise_center = False,
samplewise_center = False,
featurewise_std_normalization = False,
samplewise_std_normalization = False,
zca_whitening = False,
rotation_range = 10,
zoom_range = 0.1,
width_shift_range = 0.1,
height_shift_range = 0.1,
horizontal_flip = False,
vertical_flip = False)
datagen.fit(train_x ) | Digit Recognizer |
19,223,927 | test = pd.read_csv(input_path / 'test.csv', index_col='id')
display(test.head() )<load_from_csv> | lrr = ReduceLROnPlateau(monitor = 'val_accuracy', patience = 2, verbose = 1, factor = 0.5, min_lr = 0.00001 ) | Digit Recognizer |
19,223,927 | submission = pd.read_csv(input_path / 'sample_submission.csv', index_col='id')
display(submission.head() )<install_modules> | epochs = 30
history_2 = model.fit_generator(datagen.flow(train_x, train_y, batch_size = batch_size), steps_per_epoch = int(train_x.shape[0]/batch_size)+ 1, epochs = epochs, validation_data =(val_x, val_y), callbacks = [lrr])
| Digit Recognizer |
19,223,927 | !pip install pytorch-tabnet
<prepare_x_and_y> | val_p = np.argmax(model.predict(val_x), axis = 1)
error = 0
confusion_matrix = np.zeros([10, 10])
for i in range(val_x.shape[0]):
confusion_matrix[val_y[i], val_p[i]] += 1
if val_y[i] != val_p[i]:
error += 1
print("Confusion Matrix:
", confusion_matrix)
print("
Errors in validation set: ", error)
print("
Error Persentage: ",(error * 100)/ val_p.shape[0])
print("
Accuracy: ", 100 -(error * 100)/ val_p.shape[0])
print("
Validation set Shape: ", val_p.shape[0] ) | Digit Recognizer |
19,223,927 | features = train.columns[1:-1]
X = train[features]
y = np.log1p(train["target"])
X_test = test[features]
<data_type_conversions> | test_y = np.argmax(model.predict(test_x), axis = 1 ) | Digit Recognizer |
19,223,927 | <train_on_grid><EOS> | df_submission = pd.DataFrame([df_test.index + 1, test_y], ["ImageId", "Label"] ).transpose()
df_submission.to_csv("MySubmission.csv", index = False ) | Digit Recognizer |
18,968,817 | <SOS> metric: categorizationaccuracy Kaggle data source: digit-recognizer<compute_test_metric> | tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
tpu_strategy = tf.distribute.experimental.TPUStrategy(tpu)
| Digit Recognizer |
18,968,817 | print("The CV score is %.5f" % np.mean(CV_score_array,axis=0))
<save_to_csv> | train_dataframe=pd.read_csv(".. /input/digit-recognizer/train.csv")
test_dataframe=pd.read_csv(".. /input/digit-recognizer/test.csv" ) | Digit Recognizer |
18,968,817 | submission.iloc[:,0:] = predictions
submission.to_csv('submission.csv' )<import_modules> | train_dataframe['label'].value_counts() | Digit Recognizer |
18,968,817 | from catboost import CatBoostRegressor<load_from_csv> | train_label = train_dataframe.label.to_numpy()
train_image=train_dataframe.to_numpy() [0:,1:].reshape(42000,28,28,1)
test_image = test_dataframe.to_numpy().reshape(28000,28,28,1 ) | Digit Recognizer |
18,968,817 | df_train = pd.read_csv('/kaggle/input/tabular-playground-series-jan-2021/train.csv')
y = df_train['target']
df_train.drop(['id', 'target'], axis = 1, inplace = True)
df_test = pd.read_csv('/kaggle/input/tabular-playground-series-jan-2021/test.csv')
sub_id = df_test['id']
df_test.drop('id', axis = 1, inplace = True )<train_on_grid> | train_image = train_image.astype(float)/ 255.0
test_image = test_image.astype(float)/ 255.0 | Digit Recognizer |
18,968,817 | cbr = CatBoostRegressor()
cbr.fit(df_train, y )<prepare_output> | with tpu_strategy.scope() :
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(64,(3,3), activation='relu',padding = 'Same', input_shape=(28, 28, 1)) ,
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Dropout(0.25),
tf.keras.layers.Conv2D(128,(3,3), activation='relu',padding = 'Same'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Dropout(0.25),
tf.keras.layers.Conv2D(256,(3,3), activation='relu',padding = 'Same'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Dropout(0.25),
tf.keras.layers.Flatten() ,
tf.keras.layers.Dense(1024, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(10, activation='softmax')
])
optimizer = Adam(learning_rate=0.001)
model.compile(loss=SparseCategoricalCrossentropy(from_logits=True),
optimizer = optimizer,
metrics=['accuracy'])
epochs = 50
batch_size = 16 | Digit Recognizer |
18,968,817 | submission = pd.DataFrame(sub_id, columns = ['id'])
submission.head()<predict_on_test> | x_train,x_val,y_train,y_val=train_test_split(train_image,train_label,test_size=0.2,random_state=42 ) | Digit Recognizer |
18,968,817 | submission['target'] = cbr.predict(df_test )<save_to_csv> | history = model.fit(x_train,y_train,batch_size=64,epochs=15,validation_data=(x_val,y_val),shuffle=True ) | Digit Recognizer |
18,968,817 | submission.to_csv('catboost.csv', index = False )<set_options> | val_pred = model.predict(x_val ) | Digit Recognizer |
18,968,817 | mpl.rcParams['agg.path.chunksize'] = 10000<load_from_csv> | val_pred1 = np.argmax(val_pred, axis=1 ) | Digit Recognizer |
18,968,817 | train_data = pd.read_csv('/kaggle/input/tabular-playground-series-jan-2021/train.csv')
test_data = pd.read_csv('/kaggle/input/tabular-playground-series-jan-2021/test.csv')
print("successfully loaded!" )<filter> | predictions = model.predict(test_image ) | Digit Recognizer |
Dataset Card for Pipeline2Code
Dataset origin
Code4ML: a Large-scale Dataset of annotated Machine Learning Code
Dataset Summary
This dataset supports the iterative generation of machine-learning (ML) code from high-level ML pipeline descriptions. It consists of code snippets extracted from Kaggle kernels and organized as Jupyter Notebook snippets. Each kernel contributes a sequence of prompts and completions: the initial prompt contains an `<SOS>` token, meta-information about the task the notebook solves, and the semantic type of the first code snippet; each subsequent prompt includes the previously generated completions followed by the semantic type of the next snippet; the final prompt of each kernel consists of the semantic type of the last snippet followed by an `<EOS>` token. Each prompt is paired with the code snippet that completes it.
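The record layout described above can be sketched in a few lines of Python. This is a minimal illustration, not the dataset's actual loader: the function name, dict keys, and example snippets are assumptions; only the `<SOS>`/`<EOS>` tokens, the semantic-type tags, and the prompt/completion chaining come from the description and the preview rows.

```python
SOS, EOS = "<SOS>", "<EOS>"

def build_records(task_meta, typed_snippets):
    """Assemble (prompt, completion) records for one kernel.

    typed_snippets: ordered (semantic_type, code) pairs, e.g.
    [("<load_from_csv>", "df = pd.read_csv('train.csv')"), ...].
    """
    records = []
    for i, (sem_type, code) in enumerate(typed_snippets):
        if i == 0:
            # Initial prompt: <SOS>, task meta-information, first snippet type.
            prompt = f"{SOS} {task_meta}{sem_type}"
        elif i == len(typed_snippets) - 1:
            # Final prompt: the snippet's semantic type followed by <EOS>.
            prompt = f"{sem_type}{EOS}"
        else:
            # Subsequent prompts: previously generated completions + next type.
            prompt = "".join(r["completion"] for r in records) + sem_type
        records.append({"prompt": prompt, "completion": code})
    return records

recs = build_records(
    "metric: categorizationaccuracy Kaggle data source: digit-recognizer",
    [("<load_from_csv>", "df = pd.read_csv('train.csv')"),
     ("<train_model>", "model.fit(X, y)"),
     ("<save_to_csv>", "sub.to_csv('submission.csv', index=False)")],
)
print(recs[0]["prompt"])   # starts with <SOS> and the task meta-information
print(recs[-1]["prompt"])  # the semantic type followed by <EOS>
```

Note that, as in the preview rows, the final prompt carries no prior code, only the semantic type and the `<EOS>` token.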