codeparrot/codeparrot-small-code-to-text
repo_name stringlengths 6-77 | path stringlengths 8-215 | license stringclasses 15 values | content stringlengths 335-154k |
---|---|---|---|
keras-team/keras-io | examples/vision/ipynb/mnist_convnet.ipynb | apache-2.0 | import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
"""
Explanation: Simple MNIST convnet
Author: fchollet<br>
Date created: 2015/06/19<br>
Last modified: 2020/04/21<br>
Description: A simple convnet that achieves ~99% test accuracy on MNIST.
Setup
End of explanation
"""
# Model / data parameters
num_classes = 10
input_shape = (28, 28, 1)
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Scale images to the [0, 1] range
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
# Make sure images have shape (28, 28, 1)
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
print("x_train shape:", x_train.shape)
print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
"""
Explanation: Prepare the data
End of explanation
"""
model = keras.Sequential(
[
keras.Input(shape=input_shape),
layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Flatten(),
layers.Dropout(0.5),
layers.Dense(num_classes, activation="softmax"),
]
)
model.summary()
"""
Explanation: Build the model
End of explanation
"""
batch_size = 128
epochs = 15
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)
"""
Explanation: Train the model
End of explanation
"""
score = model.evaluate(x_test, y_test, verbose=0)
print("Test loss:", score[0])
print("Test accuracy:", score[1])
"""
Explanation: Evaluate the trained model
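As an optional follow-up (not part of the original example), a small sketch of inspecting a few individual predictions, using only the model and test arrays defined above:
probs = model.predict(x_test[:5])                    # softmax probabilities, shape (5, 10)
print("predicted:", np.argmax(probs, axis=1))        # most likely digit per image
print("actual:   ", np.argmax(y_test[:5], axis=1))   # labels were one-hot encoded above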
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/tfx/tutorials/tfx/components_keras.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
"""
import sys
if 'google.colab' in sys.modules:
!pip install --upgrade pip
"""
Explanation: TFX Keras Component Tutorial
An introduction to the individual components of TensorFlow Extended (TFX)
Note: You can run this example right now in a Jupyter-style notebook, no setup required! Just click "Run in Google Colab".
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/components_keras"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tfx/tutorials/tfx/components_keras.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tfx/tutorials/tfx/components_keras.ipynb"> <img width="32px" src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a target="_blank" href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tfx/tutorials/tfx/components_keras.ipynb"> <img width="32px" src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table></div>
This Colab-based tutorial interactively walks through each built-in component of TensorFlow Extended (TFX).
It covers every step of an end-to-end machine learning pipeline, from data ingestion to pushing a model to serving.
When you're done, the contents of this notebook can be automatically exported as TFX pipeline source code, which you can orchestrate with Apache Airflow and Apache Beam.
Note: This notebook demonstrates the use of native Keras models in TFX pipelines. TFX only supports the TensorFlow 2 version of Keras.
Background
This notebook demonstrates how to use TFX in a Jupyter/Colab environment, walking through the Chicago Taxi example in an interactive notebook.
Working in an interactive notebook is a useful way to become familiar with the structure of a TFX pipeline. It is also handy as a lightweight development environment for your own pipelines, but be aware that the way interactive notebooks are orchestrated, and the way they access metadata artifacts, differ from production deployments.
Orchestration
In a production deployment of TFX, you use an orchestrator such as Apache Airflow, Kubeflow Pipelines, or Apache Beam to orchestrate a predefined pipeline graph of TFX components. In an interactive notebook, the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells.
Metadata
In a production deployment of TFX, you access metadata through the ML Metadata (MLMD) API. MLMD stores metadata properties in a database such as MySQL or SQLite, and stores the metadata payloads in a persistent store such as your filesystem. In an interactive notebook, both properties and payloads are stored in an ephemeral SQLite database in the /tmp directory on the Jupyter notebook or Colab server.
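For reference only (not part of this tutorial), a minimal sketch of reading such a metadata store directly with the ml_metadata package; the SQLite path below is an assumption and must point at the database your InteractiveContext actually created:
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2
connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = '/tmp/tfx_metadata.sqlite'  # hypothetical path
connection_config.sqlite.connection_mode = 3  # READWRITE_OPENCREATE
store = metadata_store.MetadataStore(connection_config)
print(store.get_artifact_types())  # artifact types registered so far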
Setup
First, we install and import the necessary packages, set up paths, and download the data.
Upgrade Pip
To avoid upgrading Pip on your system when running locally, first check that we are running in Colab. Local systems can of course be upgraded separately.
End of explanation
"""
!pip install -U tfx
"""
Explanation: Install TFX
Note: In Google Colab, because of package updates, the first time you run this cell you must restart the runtime (Runtime > Restart runtime ...).
End of explanation
"""
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
from tfx import v1 as tfx
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
"""
Explanation: Did you restart the runtime?
If you are using Google Colab, the first time you run the cell above you must restart the runtime (Runtime > Restart runtime ...). This is required because of the way Colab loads packages.
Import packages
We import the necessary packages, including the standard TFX component classes.
End of explanation
"""
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.__version__))
"""
Explanation: Check the library versions.
End of explanation
"""
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
# This is the directory containing the TFX Chicago Taxi Pipeline example.
_taxi_root = os.path.join(_tfx_root, 'examples/chicago_taxi_pipeline')
# This is the path where your model will be pushed for serving.
_serving_model_dir = os.path.join(
tempfile.mkdtemp(), 'serving_model/taxi_simple')
# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)
"""
Explanation: Set up pipeline paths
End of explanation
"""
_data_root = tempfile.mkdtemp(prefix='tfx-data')
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/data/simple/data.csv'
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
"""
Explanation: Download example data
We download the example dataset for use in our TFX pipeline.
The dataset we're using is the Taxi Trips dataset released by the City of Chicago. The columns in this dataset are:
<table>
<tr>
<td>pickup_community_area</td>
<td>fare</td>
<td>trip_start_month</td>
</tr>
<tr>
<td>trip_start_hour</td>
<td>trip_start_day</td>
<td>trip_start_timestamp</td>
</tr>
<tr>
<td>pickup_latitude</td>
<td>pickup_longitude</td>
<td>dropoff_latitude</td>
</tr>
<tr>
<td>dropoff_longitude</td>
<td>trip_miles</td>
<td>pickup_census_tract</td>
</tr>
<tr>
<td>dropoff_census_tract</td>
<td>payment_type</td>
<td>company</td>
</tr>
<tr>
<td>trip_seconds</td>
<td>dropoff_community_area</td>
<td>tips</td>
</tr>
</table>
Using this dataset, we will build a model that predicts the tips of a taxi trip.
End of explanation
"""
!head {_data_filepath}
"""
Explanation: Let's take a quick look at the CSV file.
End of explanation
"""
# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext()
"""
Explanation: Note: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time, and it is understood that it is used at one's own risk.
Create the InteractiveContext
Last, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.
End of explanation
"""
example_gen = tfx.components.CsvExampleGen(input_base=_data_root)
context.run(example_gen, enable_cache=True)
"""
Explanation: Run TFX components interactively
In the cells that follow, we create TFX components one by one, run each of them, and visualize their output artifacts.
ExampleGen
The ExampleGen component is usually at the start of a TFX pipeline. It will:
Split the data into training and evaluation sets (by default, 2/3 training + 1/3 evaluation; a sketch of customizing this split follows after this list).
Convert the data into the tf.Example format (learn more here).
Copy the data into the _tfx_root directory so that other components can access it.
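As an aside (not used in this tutorial), a hedged sketch of how that default split ratio could be overridden with an output configuration; the 3:1 hash-bucket split below is only an illustration:
custom_output = tfx.proto.Output(
    split_config=tfx.proto.SplitConfig(splits=[
        tfx.proto.SplitConfig.Split(name='train', hash_buckets=3),
        tfx.proto.SplitConfig.Split(name='eval', hash_buckets=1),
    ]))
custom_example_gen = tfx.components.CsvExampleGen(input_base=_data_root, output_config=custom_output)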
ExampleGen takes as input the path to your data source. In our case, this is the _data_root path that contains the downloaded CSV.
Note: In this notebook, we instantiate components one by one and run them with InteractiveContext.run(); in a production setting, you would instead specify all of the components up front in a Pipeline and hand it to the orchestrator (see the Building a TFX Pipeline guide).
Enabling the cache
When using the InteractiveContext in a notebook to develop a pipeline, you can control when individual components cache their outputs. Set enable_cache to True when you want to reuse the output artifacts a component generated previously. Set enable_cache to False when you want the output artifacts to be recomputed, for example after changing the component's code.
End of explanation
"""
artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
"""
Explanation: Let's examine the output artifacts of ExampleGen. This component produces two artifacts: training examples and evaluation examples.
End of explanation
"""
# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'Split-train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
"""
Explanation: Let's also take a look at the first three training examples.
End of explanation
"""
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen, enable_cache=True)
"""
Explanation: Now that ExampleGen has finished ingesting the data, the next step is data analysis.
StatisticsGen
The StatisticsGen component computes statistics over your dataset, both for data analysis and for use by downstream components. It uses the TensorFlow Data Validation library.
StatisticsGen takes as input the dataset we just ingested using ExampleGen.
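As an aside, a hedged sketch of loading the generated statistics with TensorFlow Data Validation directly; the file layout and loader name below are assumptions and depend on your TFX/TFDV versions:
import tensorflow_data_validation as tfdv
stats_uri = statistics_gen.outputs['statistics'].get()[0].uri
train_stats = tfdv.load_stats_binary(os.path.join(stats_uri, 'Split-train', 'FeatureStats.pb'))  # assumed layout
tfdv.visualize_statistics(train_stats)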
End of explanation
"""
context.show(statistics_gen.outputs['statistics'])
"""
Explanation: After StatisticsGen finishes running, we can visualize the outputted statistics. Try playing around with the different plots!
End of explanation
"""
schema_gen = tfx.components.SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen, enable_cache=True)
"""
Explanation: SchemaGen
The SchemaGen component generates a schema based on your data statistics. (A schema defines the expected bounds, types, and properties of the features in your dataset.) It also uses the TensorFlow Data Validation library.
Note: The generated schema is best-effort and only tries to infer basic properties of the data. You should review it and modify it as needed.
SchemaGen takes as input the statistics that we generated with StatisticsGen, looking at the training split by default.
End of explanation
"""
context.show(schema_gen.outputs['schema'])
"""
Explanation: After SchemaGen finishes running, we can visualize the generated schema as a table.
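Beyond viewing it, a hypothetical sketch of reviewing and curating the schema with TensorFlow Data Validation directly; the file name is an assumption about SchemaGen's output layout, and 'company' is only an example feature:
import tensorflow_data_validation as tfdv
schema_uri = schema_gen.outputs['schema'].get()[0].uri
schema = tfdv.load_schema_text(os.path.join(schema_uri, 'schema.pbtxt'))  # assumed file name
company = tfdv.get_feature(schema, 'company')
company.distribution_constraints.min_domain_mass = 0.9  # tolerate some unseen companies
tfdv.write_schema_text(schema, 'curated_schema.pbtxt')  # keep a curated copy under your control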
End of explanation
"""
example_validator = tfx.components.ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_gen.outputs['schema'])
context.run(example_validator, enable_cache=True)
"""
Explanation: Each feature in your dataset shows up as a row in the schema table, alongside its properties. The schema also captures all of the values that a categorical feature takes on, denoted as its domain.
To learn more about schemas, see the SchemaGen documentation.
ExampleValidator
The ExampleValidator component detects anomalies in your data, based on the expectations defined by the schema. It also uses the TensorFlow Data Validation library.
ExampleValidator takes as input the statistics from StatisticsGen and the schema from SchemaGen.
End of explanation
"""
context.show(example_validator.outputs['anomalies'])
"""
Explanation: After ExampleValidator finishes running, we can visualize the anomalies as a table.
End of explanation
"""
_taxi_constants_module_file = 'taxi_constants.py'
%%writefile {_taxi_constants_module_file}
NUMERICAL_FEATURES = ['trip_miles', 'fare', 'trip_seconds']
BUCKET_FEATURES = [
'pickup_latitude', 'pickup_longitude', 'dropoff_latitude',
'dropoff_longitude'
]
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10
CATEGORICAL_NUMERICAL_FEATURES = [
'trip_start_hour', 'trip_start_day', 'trip_start_month',
'pickup_census_tract', 'dropoff_census_tract', 'pickup_community_area',
'dropoff_community_area'
]
CATEGORICAL_STRING_FEATURES = [
'payment_type',
'company',
]
# Number of vocabulary terms used for encoding categorical features.
VOCAB_SIZE = 1000
# Count of out-of-vocab buckets in which unrecognized categorical are hashed.
OOV_SIZE = 10
# Keys
LABEL_KEY = 'tips'
FARE_KEY = 'fare'
def t_name(key):
"""
Rename the feature keys so that they don't clash with the raw keys when
running the Evaluator component.
Args:
key: The original feature key
Returns:
key with '_xf' appended
"""
return key + '_xf'
"""
Explanation: In the anomalies table, we can see that there are no anomalies. This is what we would expect, since this is the first dataset we have analyzed and the schema is tailored to it. You should review this schema: anything unexpected means an anomaly in the data. Once reviewed, the schema can be used to guard future data, and the anomalies produced here can be used to debug model performance, understand how your data evolves over time, and identify data errors.
Transform
The Transform component performs feature engineering for both training and serving. It uses the TensorFlow Transform library.
Transform takes as input the data from ExampleGen, the schema from SchemaGen, and a module containing user-defined Transform code.
Let's look at an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, see the tutorial). First, we define a few constants for feature engineering.
Note: The %%writefile cell magic saves the contents of the cell as a .py file on disk. This allows the Transform component to load your code as a module.
End of explanation
"""
_taxi_transform_module_file = 'taxi_transform.py'
%%writefile {_taxi_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
# Imported files such as taxi_constants are normally cached, so changes are
# not honored after the first import. Normally this is good for efficiency, but
# during development when we may be iterating code it can be a problem. To
# avoid this problem during development, reload the file.
import taxi_constants
import sys
if 'google.colab' in sys.modules: # Testing to see if we're doing development
import importlib
importlib.reload(taxi_constants)
_NUMERICAL_FEATURES = taxi_constants.NUMERICAL_FEATURES
_BUCKET_FEATURES = taxi_constants.BUCKET_FEATURES
_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT
_CATEGORICAL_NUMERICAL_FEATURES = taxi_constants.CATEGORICAL_NUMERICAL_FEATURES
_CATEGORICAL_STRING_FEATURES = taxi_constants.CATEGORICAL_STRING_FEATURES
_VOCAB_SIZE = taxi_constants.VOCAB_SIZE
_OOV_SIZE = taxi_constants.OOV_SIZE
_FARE_KEY = taxi_constants.FARE_KEY
_LABEL_KEY = taxi_constants.LABEL_KEY
def _make_one_hot(x, key):
"""Make a one-hot tensor to encode categorical features.
Args:
X: A dense tensor
key: A string key for the feature in the input
Returns:
A dense one-hot tensor as a float list
"""
integerized = tft.compute_and_apply_vocabulary(x,
top_k=_VOCAB_SIZE,
num_oov_buckets=_OOV_SIZE,
vocab_filename=key, name=key)
depth = (
tft.experimental.get_vocabulary_size_by_name(key) + _OOV_SIZE)
one_hot_encoded = tf.one_hot(
integerized,
depth=tf.cast(depth, tf.int32),
on_value=1.0,
off_value=0.0)
return tf.reshape(one_hot_encoded, [-1, depth])
def _fill_in_missing(x):
"""Replace missing values in a SparseTensor.
Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
Args:
x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
in the second dimension.
Returns:
A rank 1 tensor where missing values of `x` have been filled in.
"""
if not isinstance(x, tf.sparse.SparseTensor):
return x
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature operations.
"""
outputs = {}
for key in _NUMERICAL_FEATURES:
# If sparse make it dense, setting nan's to 0 or '', and apply zscore.
outputs[taxi_constants.t_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]), name=key)
for key in _BUCKET_FEATURES:
outputs[taxi_constants.t_name(key)] = tf.cast(tft.bucketize(
_fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT, name=key),
dtype=tf.float32)
for key in _CATEGORICAL_STRING_FEATURES:
outputs[taxi_constants.t_name(key)] = _make_one_hot(_fill_in_missing(inputs[key]), key)
for key in _CATEGORICAL_NUMERICAL_FEATURES:
outputs[taxi_constants.t_name(key)] = _make_one_hot(tf.strings.strip(
tf.strings.as_string(_fill_in_missing(inputs[key]))), key)
# Was this passenger a big tipper?
taxi_fare = _fill_in_missing(inputs[_FARE_KEY])
tips = _fill_in_missing(inputs[_LABEL_KEY])
outputs[_LABEL_KEY] = tf.where(
tf.math.is_nan(taxi_fare),
tf.cast(tf.zeros_like(taxi_fare), tf.int64),
# Test if the tip was > 20% of the fare.
tf.cast(
tf.greater(tips, tf.multiply(taxi_fare, tf.constant(0.2))), tf.int64))
return outputs
"""
Explanation: Next, we write a preprocessing_fn that takes in raw data as input and returns transformed features that our model can train on.
End of explanation
"""
transform = tfx.components.Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath(_taxi_transform_module_file))
context.run(transform, enable_cache=True)
"""
Explanation: Next, we pass this feature engineering code to the Transform component and run it to transform the data.
End of explanation
"""
transform.outputs
"""
Explanation: Let's examine the output artifacts of Transform. This component produces two types of outputs:
transform_graph is the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).
transformed_examples represents the preprocessed training and evaluation data.
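As an aside, a hedged sketch of loading the transform_graph output with the tensorflow_transform helper class to inspect the post-transform feature spec (this mirrors what the Trainer module further below does):
import tensorflow_transform as tft
tf_transform_output = tft.TFTransformOutput(transform.outputs['transform_graph'].get()[0].uri)
pp.pprint(tf_transform_output.transformed_feature_spec())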
End of explanation
"""
train_uri = transform.outputs['transform_graph'].get()[0].uri
os.listdir(train_uri)
"""
Explanation: Let's take a peek at the transform_graph artifact. It points to a directory containing three subdirectories.
End of explanation
"""
# Get the URI of the output artifact representing the transformed examples, which is a directory
train_uri = os.path.join(transform.outputs['transformed_examples'].get()[0].uri, 'Split-train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
"""
Explanation: The transformed_metadata subdirectory contains the schema of the preprocessed data. The transform_fn subdirectory contains the actual preprocessing graph. The metadata subdirectory contains the schema of the original data.
Let's also take a look at the first three transformed examples.
End of explanation
"""
_taxi_trainer_module_file = 'taxi_trainer.py'
%%writefile {_taxi_trainer_module_file}
from typing import Dict, List, Text
import os
import glob
from absl import logging
import datetime
import tensorflow as tf
import tensorflow_transform as tft
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_transform import TFTransformOutput
# Imported files such as taxi_constants are normally cached, so changes are
# not honored after the first import. Normally this is good for efficiency, but
# during development when we may be iterating code it can be a problem. To
# avoid this problem during development, reload the file.
import taxi_constants
import sys
if 'google.colab' in sys.modules: # Testing to see if we're doing development
import importlib
importlib.reload(taxi_constants)
_LABEL_KEY = taxi_constants.LABEL_KEY
_BATCH_SIZE = 40
def _input_fn(file_pattern: List[Text],
data_accessor: tfx.components.DataAccessor,
tf_transform_output: tft.TFTransformOutput,
batch_size: int = 200) -> tf.data.Dataset:
"""Generates features and label for tuning/training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
tf_transform_output: A TFTransformOutput.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
tf_transform_output.transformed_metadata.schema)
def _get_tf_examples_serving_signature(model, tf_transform_output):
"""Returns a serving signature that accepts `tensorflow.Example`."""
# We need to track the layers in the model in order to save it.
# TODO(b/162357359): Revise once the bug is resolved.
model.tft_layer_inference = tf_transform_output.transform_features_layer()
@tf.function(input_signature=[
tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')
])
def serve_tf_examples_fn(serialized_tf_example):
"""Returns the output to be used in the serving signature."""
raw_feature_spec = tf_transform_output.raw_feature_spec()
# Remove label feature since these will not be present at serving time.
raw_feature_spec.pop(_LABEL_KEY)
raw_features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)
transformed_features = model.tft_layer_inference(raw_features)
logging.info('serve_transformed_features = %s', transformed_features)
outputs = model(transformed_features)
# TODO(b/154085620): Convert the predicted labels from the model using a
# reverse-lookup (opposite of transform.py).
return {'outputs': outputs}
return serve_tf_examples_fn
def _get_transform_features_signature(model, tf_transform_output):
"""Returns a serving signature that applies tf.Transform to features."""
# We need to track the layers in the model in order to save it.
# TODO(b/162357359): Revise once the bug is resolved.
model.tft_layer_eval = tf_transform_output.transform_features_layer()
@tf.function(input_signature=[
tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')
])
def transform_features_fn(serialized_tf_example):
"""Returns the transformed_features to be fed as input to evaluator."""
raw_feature_spec = tf_transform_output.raw_feature_spec()
raw_features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)
transformed_features = model.tft_layer_eval(raw_features)
logging.info('eval_transformed_features = %s', transformed_features)
return transformed_features
return transform_features_fn
def export_serving_model(tf_transform_output, model, output_dir):
"""Exports a keras model for serving.
Args:
tf_transform_output: Wrapper around output of tf.Transform.
model: A keras model to export for serving.
output_dir: A directory where the model will be exported to.
"""
  # The layer has to be saved to the model for keras tracking purposes.
model.tft_layer = tf_transform_output.transform_features_layer()
signatures = {
'serving_default':
_get_tf_examples_serving_signature(model, tf_transform_output),
'transform_features':
_get_transform_features_signature(model, tf_transform_output),
}
model.save(output_dir, save_format='tf', signatures=signatures)
def _build_keras_model(tf_transform_output: TFTransformOutput
) -> tf.keras.Model:
"""Creates a DNN Keras model for classifying taxi data.
Args:
tf_transform_output: [TFTransformOutput], the outputs from Transform
Returns:
A keras Model.
"""
feature_spec = tf_transform_output.transformed_feature_spec().copy()
feature_spec.pop(_LABEL_KEY)
inputs = {}
for key, spec in feature_spec.items():
if isinstance(spec, tf.io.VarLenFeature):
inputs[key] = tf.keras.layers.Input(
shape=[None], name=key, dtype=spec.dtype, sparse=True)
elif isinstance(spec, tf.io.FixedLenFeature):
# TODO(b/208879020): Move into schema such that spec.shape is [1] and not
# [] for scalars.
inputs[key] = tf.keras.layers.Input(
shape=spec.shape or [1], name=key, dtype=spec.dtype)
else:
raise ValueError('Spec type is not supported: ', key, spec)
output = tf.keras.layers.Concatenate()(tf.nest.flatten(inputs))
output = tf.keras.layers.Dense(100, activation='relu')(output)
output = tf.keras.layers.Dense(70, activation='relu')(output)
output = tf.keras.layers.Dense(50, activation='relu')(output)
output = tf.keras.layers.Dense(20, activation='relu')(output)
output = tf.keras.layers.Dense(1)(output)
return tf.keras.Model(inputs=inputs, outputs=output)
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(fn_args.train_files, fn_args.data_accessor,
tf_transform_output, _BATCH_SIZE)
eval_dataset = _input_fn(fn_args.eval_files, fn_args.data_accessor,
tf_transform_output, _BATCH_SIZE)
model = _build_keras_model(tf_transform_output)
model.compile(
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=[tf.keras.metrics.BinaryAccuracy()])
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=fn_args.model_run_dir, update_freq='batch')
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
callbacks=[tensorboard_callback])
# Export the model.
export_serving_model(tf_transform_output, model, fn_args.serving_model_dir)
"""
Explanation: Now that the Transform component has converted the data into features, the next step is to train a model.
Trainer
The Trainer component trains a model that you define in TensorFlow. By default, Trainer supports the Estimator API; to use the Keras API, you need to specify the Generic Trainer by setting custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor) in Trainer's constructor.
Trainer takes as input the schema from SchemaGen, the transformed data and graph from Transform, training parameters, and a module containing user-defined model code.
Let's look at an example of user-defined model code below (for an introduction to the TensorFlow Keras APIs, see the tutorial).
End of explanation
"""
trainer = tfx.components.Trainer(
module_file=os.path.abspath(_taxi_trainer_module_file),
examples=transform.outputs['transformed_examples'],
transform_graph=transform.outputs['transform_graph'],
schema=schema_gen.outputs['schema'],
train_args=tfx.proto.TrainArgs(num_steps=10000),
eval_args=tfx.proto.EvalArgs(num_steps=5000))
context.run(trainer, enable_cache=True)
"""
Explanation: Next, we pass this model code to the Trainer component and run it to train the model.
End of explanation
"""
model_artifact_dir = trainer.outputs['model'].get()[0].uri
pp.pprint(os.listdir(model_artifact_dir))
model_dir = os.path.join(model_artifact_dir, 'Format-Serving')
pp.pprint(os.listdir(model_dir))
"""
Explanation: Analyze training with TensorBoard
Let's take a peek at the Trainer artifacts. They point to a directory containing the model subdirectories.
End of explanation
"""
model_run_artifact_dir = trainer.outputs['model_run'].get()[0].uri
%load_ext tensorboard
%tensorboard --logdir {model_run_artifact_dir}
"""
Explanation: Optionally, we can connect TensorBoard to the Trainer to analyze our model's training curves.
End of explanation
"""
# Imported files such as taxi_constants are normally cached, so changes are
# not honored after the first import. Normally this is good for efficiency, but
# during development when we may be iterating code it can be a problem. To
# avoid this problem during development, reload the file.
import taxi_constants
import sys
if 'google.colab' in sys.modules: # Testing to see if we're doing development
import importlib
importlib.reload(taxi_constants)
eval_config = tfma.EvalConfig(
model_specs=[
# This assumes a serving model with signature 'serving_default'. If
# using estimator based EvalSavedModel, add signature_name: 'eval' and
# remove the label_key.
tfma.ModelSpec(
signature_name='serving_default',
label_key=taxi_constants.LABEL_KEY,
preprocessing_function_names=['transform_features'],
)
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
metrics=[
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='BinaryAccuracy',
threshold=tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.5}),
# Change threshold will be ignored if there is no
# baseline model resolved from MLMD (first run).
change_threshold=tfma.GenericChangeThreshold(
direction=tfma.MetricDirection.HIGHER_IS_BETTER,
absolute={'value': -1e-10})))
]
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced along feature column trip_start_hour.
tfma.SlicingSpec(
feature_keys=['trip_start_hour'])
])
"""
Explanation: Evaluator
The Evaluator component computes model performance metrics over the evaluation set. It uses the TensorFlow Model Analysis library. Evaluator can also optionally validate that a newly trained model is better than the previous model, which is useful in a production pipeline setting where you automatically train and validate a model every day. In this notebook we only train one model, so Evaluator automatically labels the model as "good".
Evaluator takes as input the data from ExampleGen, the trained model from Trainer, and a slicing configuration. The slicing configuration allows you to slice your metrics on feature values (for example, how does your model perform on taxi trips that start at 8am versus 8pm?). See an example of this configuration below.
End of explanation
"""
# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.
# The model resolver is only required if performing model validation in addition
# to evaluation. In this case we validate against the latest blessed model. If
# no model has been blessed before (as in this case) the evaluator will make our
# candidate the first blessed model.
model_resolver = tfx.dsl.Resolver(
strategy_class=tfx.dsl.experimental.LatestBlessedModelStrategy,
model=tfx.dsl.Channel(type=tfx.types.standard_artifacts.Model),
model_blessing=tfx.dsl.Channel(
type=tfx.types.standard_artifacts.ModelBlessing)).with_id(
'latest_blessed_model_resolver')
context.run(model_resolver, enable_cache=True)
evaluator = tfx.components.Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
baseline_model=model_resolver.outputs['model'],
eval_config=eval_config)
context.run(evaluator, enable_cache=True)
"""
Explanation: Next, we pass this configuration to Evaluator and run it.
End of explanation
"""
evaluator.outputs
"""
Explanation: Let's examine the output artifacts of Evaluator.
End of explanation
"""
context.show(evaluator.outputs['evaluation'])
"""
Explanation: Using the evaluation output, we can show the default visualization of global metrics over the entire evaluation set.
End of explanation
"""
import tensorflow_model_analysis as tfma
# Get the TFMA output result path and load the result.
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
tfma_result = tfma.load_eval_result(PATH_TO_RESULT)
# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
tfma_result, slicing_column='trip_start_hour')
"""
Explanation: To see the visualization for sliced evaluation metrics, we can directly call the TensorFlow Model Analysis library.
End of explanation
"""
blessing_uri = evaluator.outputs['blessing'].get()[0].uri
!ls -l {blessing_uri}
"""
Explanation: This visualization shows the same metrics, but computed at every feature value of trip_start_hour instead of over the whole evaluation set.
TensorFlow Model Analysis supports many other visualizations, such as Fairness Indicators and plotting a time series of model performance. To learn more, see the tutorial.
Since we added thresholds to our config, validation output is also available. The presence of a blessing artifact indicates that our model passed validation. Since this is the first validation being performed, the candidate is automatically blessed.
End of explanation
"""
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
print(tfma.load_validation_result(PATH_TO_RESULT))
"""
Explanation: We can also load the validation result record and check that it succeeded.
End of explanation
"""
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=_serving_model_dir)))
context.run(pusher, enable_cache=True)
"""
Explanation: Pusher
The Pusher component is usually at the end of a TFX pipeline. It checks whether a model has passed validation, and if so, exports the model to _serving_model_dir.
End of explanation
"""
pusher.outputs
"""
Explanation: Next, let's examine the output artifacts of Pusher.
End of explanation
"""
push_uri = pusher.outputs['pushed_model'].get()[0].uri
model = tf.saved_model.load(push_uri)
for item in model.signatures.items():
pp.pprint(item)
"""
Explanation: In particular, Pusher will export your model in the SavedModel format, which looks like this:
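As a small optional check (not part of the tutorial), you can pull out the serving_default signature from the loaded model above and confirm that it expects a batch of serialized tf.Example strings, as defined in the trainer module:
serving_fn = model.signatures['serving_default']
print(serving_fn.structured_input_signature)  # a single string tensor named 'examples'
print(serving_fn.structured_outputs)          # the model's prediction output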
End of explanation
"""
|
ganguli-lab/twpca | notebooks/warp_unit_tests.ipynb | mit | _, _, data = twpca.datasets.jittered_neuron()
model = TWPCA(data, n_components=1, warpinit='identity')
np.all(np.isclose(model.params['warp'], np.arange(model.shared_length), atol=1e-5, rtol=2))
np.nanmax(np.abs(model.transform() - data)) < 1e-5
"""
Explanation: check identity warp does not change data appreciably
End of explanation
"""
model = TWPCA(data, n_components=1, warpinit='shift')
plt.imshow(np.squeeze(model.transform()))
"""
Explanation: check that shift initialization for warp solves the simple toy problem
End of explanation
"""
|
oddt/notebooks | DUD-E.ipynb | bsd-3-clause | from __future__ import print_function, division, unicode_literals
import oddt
from oddt.datasets import dude
print(oddt.__version__)
"""
Explanation: <h1>DUD-E: A Database of Useful Decoys: Enhanced</h1>
End of explanation
"""
%%bash
mkdir -p ./DUD-E_targets/
wget -qO- http://dude.docking.org/targets/ampc/ampc.tar.gz | tar xz -C ./DUD-E_targets/
wget -qO- http://dude.docking.org/targets/cxcr4/cxcr4.tar.gz | tar xz -C ./DUD-E_targets/
wget -qO- http://dude.docking.org/targets/pur2/pur2.tar.gz | tar xz -C ./DUD-E_targets/
wget -qO- http://dude.docking.org/targets/pygm/pygm.tar.gz | tar xz -C ./DUD-E_targets/
wget -qO- http://dude.docking.org/targets/sahh/sahh.tar.gz | tar xz -C ./DUD-E_targets/
directory = './DUD-E_targets'
"""
Explanation: We'd like to read files from DUD-E.<br/>
You can download different targets and different numbers of targets, but I used only these five:
ampc,
cxcr4,
pur2,
pygm,
sahh.<br/>
End of explanation
"""
dude_database = dude(home=directory)
"""
Explanation: We will use the dude class.
End of explanation
"""
target = dude_database['cxcr4']
"""
Explanation: Now we can get one target or iterate over all targets in our directory.
Let's choose one target.
End of explanation
"""
target.ligand
"""
Explanation: target has four properties: protein, ligand, actives and decoys:<br/>
protein - protein molecule<br/>
ligand - ligand molecule<br/>
actives - generator containing actives<br/>
decoys - generator containing decoys
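For example (not from the original notebook), the generators for this target can be materialized into lists; note that a generator is consumed once iterated, so re-create the target if you need the molecules again:
actives = list(target.actives)
decoys = list(target.decoys)
print('cxcr4 actives:', len(actives))
print('cxcr4 decoys:', len(decoys))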
End of explanation
"""
for target in dude_database:
actives = list(target.actives)
decoys = list(target.decoys)
print('Target: ' + target.dude_id,
'Number of actives: ' + str(len(actives)),
'Number of decoys: ' + str(len(decoys)),
sep='\t\t')
"""
Explanation: Let's see which target has the most actives and decoys.
End of explanation
"""
|
iAInNet/tensorflow_in_action | _pratice_cifar10.ipynb | gpl-3.0 | max_steps = 3000
batch_size = 128
data_dir = 'data/cifar10/cifar-10-batches-bin/'
model_dir = 'model/_cifar10_v2/'
"""
Explanation: Global parameters
End of explanation
"""
X_train, y_train = cifar10_input.distorted_inputs(data_dir, batch_size)
X_test, y_test = cifar10_input.inputs(eval_data=True, data_dir=data_dir, batch_size=batch_size)
image_holder = tf.placeholder(tf.float32, [batch_size, 24, 24, 3])
label_holder = tf.placeholder(tf.int32, [batch_size])
"""
Explanation: Initializing the weights
If needed, an L2 penalty is attached to the weight variable. So that it can be picked up later when computing the network's total loss, the penalty is stored in a shared collection.
Loading the data
We use cifar10_input to load the data; this module comes from the TensorFlow GitHub repository and can be downloaded and used directly. With the distorted_inputs method, the returned data has been augmented: images are randomly cropped, flipped, and have their brightness and contrast adjusted, which diversifies our training data.
The result is a tensor holding one batch of batch_size examples, and we can iterate to read the next batch.
End of explanation
"""
weight1 = variable_with_weight_loss([5, 5, 3, 64], stddev=0.05, lambda_value=0)
kernel1 = tf.nn.conv2d(image_holder, weight1, [1, 1, 1, 1], padding='SAME')
bias1 = tf.Variable(tf.constant(0.0, shape=[64]))
conv1 = tf.nn.relu(tf.nn.bias_add(kernel1, bias1))
pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME')
norm1 = tf.nn.lrn(pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
"""
Explanation: First convolutional layer
As before, we use a 5x5 kernel with 3 input channels and 64 output channels. The parameters of the first layer are not regularized, so lambda_value is set to 0. One small trick here: the pooling layer uses a 3x3 ksize but a 2x2 stride, which enriches the data. Finally we apply LRN. LRN first appeared in Alex Krizhevsky's CNN paper for the ImageNet competition, where he explained that the LRN layer mimics the "lateral inhibition" mechanism of biological neural systems: it creates competition among the activities of local neurons, so that relatively large responses become even larger while neurons with smaller responses are suppressed, improving the model's generalization ability. However, the later VGGNet paper compared models with and without LRN and found that LRN did not improve performance. We still include it here, following the AlexNet design.
End of explanation
"""
weight2 = variable_with_weight_loss(shape=[5, 5, 64, 64], stddev=5e-2, lambda_value=0.0)
kernel2 = tf.nn.conv2d(norm1, weight2, strides=[1, 1, 1, 1], padding='SAME')
bias2 = tf.Variable(tf.constant(0.1, shape=[64]))
conv2 = tf.nn.relu(tf.nn.bias_add(kernel2, bias2))
norm2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001/9.0, beta=0.75)
pool2 = tf.nn.max_pool(norm2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME')
"""
Explanation: Second convolutional layer
The input has 64 channels and the output is again 64 channels.
The bias is initialized to 0.1.
The order of the max-pooling layer and LRN is swapped: LRN is applied first, followed by max pooling.
Why it is done this way is not entirely clear; check the original papers.
End of explanation
"""
flattern = tf.reshape(pool2, [batch_size, -1])
dim = flattern.get_shape()[1].value
weight3 = variable_with_weight_loss(shape=[dim, 384], stddev=0.04, lambda_value=0.04)
bias3 = tf.Variable(tf.constant(0.1, shape=[384]))
local3 = tf.nn.relu(tf.matmul(flattern, weight3) + bias3)
"""
Explanation: First fully connected layer
The convolutional output is flattened first.
It is fully connected to a new hidden layer with 384 nodes.
The normal-distribution stddev is set to 0.04 and the bias to 0.1.
Importantly, here we also set the weight-loss lambda value to 0.04.
End of explanation
"""
weight4 = variable_with_weight_loss(shape=[384, 192], stddev=0.04, lambda_value=0.04)
bias4 = tf.Variable(tf.constant(0.1, shape=[192]))
local4 = tf.nn.relu(tf.matmul(local3, weight4) + bias4)
"""
Explanation: Second fully connected layer
The layer shrinks to 192 nodes, half of the previous layer.
End of explanation
"""
weight5 = variable_with_weight_loss(shape=[192, 10], stddev=1/192.0, lambda_value=0.0)
bias5 = tf.Variable(tf.constant(0.0, shape=[10]))
logits = tf.add(tf.matmul(local4, weight5), bias5)
def loss(logits, labels):
labels = tf.cast(labels, tf.int64)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits, labels=labels,
name = 'cross_entropy_per_example'
)
cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy')
tf.add_to_collection('losses', cross_entropy_mean)
return tf.add_n(tf.get_collection('losses'), name='total_loss')
loss = loss(logits, label_holder)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
"""
Explanation: Output layer
It maps to the final 10 classes.
End of explanation
"""
top_k_op = tf.nn.in_top_k(logits, label_holder, 1)
sess = tf.InteractiveSession()
saver = tf.train.Saver()
tf.global_variables_initializer().run()
"""
Explanation: We use in_top_k to report the top-k accuracy; top-1 is used by default, and top-5 is also common.
End of explanation
"""
tf.train.start_queue_runners()
"""
Explanation: Start the thread queues needed by cifar10_input. Their main purpose is image data augmentation; a total of 16 threads are used to process the images.
End of explanation
"""
for step in range(max_steps):
start_time = time.time()
image_batch, label_batch = sess.run([X_train, y_train])
_, loss_value = sess.run([train_op, loss],
feed_dict={image_holder: image_batch, label_holder: label_batch})
duration = time.time() - start_time
if step % 10 == 0:
examples_per_sec = batch_size / duration
sec_this_batch = float(duration)
format_str = ('step %d, loss = %.2f (%.1f examples/sec; %.3f sec/batch)')
print(format_str % (step, loss_value, examples_per_sec, sec_this_batch))
saver.save(sess, save_path=os.path.join(model_dir, 'model.chpt'), global_step=max_steps)
num_examples = 10000
num_iter = int(math.ceil(num_examples / batch_size))
true_count = 0
total_sample_count = num_iter * batch_size
step = 0
while step < num_iter:
image_batch, label_batch = sess.run([X_test, y_test])
predictions = sess.run([top_k_op],
feed_dict={image_holder: image_batch, label_holder: label_batch})
true_count += np.sum(predictions)
step += 1
precision = true_count / total_sample_count
print("Precision @ 1 = %.3f" % precision)
sess.close()
"""
Explanation: Before each training step we first run the data tensors (X_train, y_train) to fetch a batch_size-sized batch of training data, then feed it to train_op and loss to train on those samples. Every 10 iterations some progress information is printed.
End of explanation
"""
|
mitdbg/modeldb | demos/webinar-2020-5-6/02-mdb_versioned/01-train/01 Basic NLP.ipynb | mit | !python -m spacy download en_core_web_sm
"""
Explanation: Versioning Example (Part 1/3)
In this example, we'll train an NLP model for sentiment analysis of tweets using spaCy.
Through this series, we'll take advantage of ModelDB's versioning system to keep track of changes.
This workflow requires verta>=0.14.4 and spaCy>=2.0.0.
Setup
Download a spaCy model to train.
End of explanation
"""
from __future__ import unicode_literals, print_function
import boto3
import json
import numpy as np
import pandas as pd
import spacy
"""
Explanation: Import libraries we'll need.
End of explanation
"""
from verta import Client
client = Client('http://localhost:3000/')
proj = client.set_project('Tweet Classification')
expt = client.set_experiment('SpaCy')
"""
Explanation: Bring in Verta's ModelDB client to organize our work, and log and version metadata.
End of explanation
"""
S3_BUCKET = "verta-starter"
S3_KEY = "english-tweets.csv"
FILENAME = S3_KEY
boto3.client('s3').download_file(S3_BUCKET, S3_KEY, FILENAME)
"""
Explanation: Prepare Data
Download a dataset of English tweets from S3 for us to train with.
End of explanation
"""
import utils
data = pd.read_csv(FILENAME).sample(frac=1).reset_index(drop=True)
utils.clean_data(data)
data.head()
"""
Explanation: Then we'll load and clean the data.
End of explanation
"""
from verta.code import Notebook
from verta.configuration import Hyperparameters
from verta.dataset import S3
from verta.environment import Python
code_ver = Notebook() # Notebook & git environment
config_ver = Hyperparameters({'n_iter': 20})
dataset_ver = S3("s3://{}/{}".format(S3_BUCKET, S3_KEY))
env_ver = Python(Python.read_pip_environment()) # pip environment and Python version
"""
Explanation: Capture and Version Model Ingredients
We'll first capture metadata about our code, configuration, dataset, and environment using utilities from the verta library.
End of explanation
"""
repo = client.set_repository('Tweet Classification')
commit = repo.get_commit(branch='master')
"""
Explanation: Then, to log them, we'll use a ModelDB repository to prepare a commit.
End of explanation
"""
commit.update("notebooks/tweet-analysis", code_ver)
commit.update("config/hyperparams", config_ver)
commit.update("data/tweets", dataset_ver)
commit.update("env/python", env_ver)
commit.save("Initial model")
commit
"""
Explanation: Now we'll add these versioned components to the commit and save it to ModelDB.
End of explanation
"""
nlp = spacy.load('en_core_web_sm')
"""
Explanation: Train and Log Model
We'll use the pre-trained spaCy model we downloaded earlier...
End of explanation
"""
import training
training.train(nlp, data, n_iter=20)
"""
Explanation: ...and fine-tune it with our dataset.
End of explanation
"""
run = client.set_experiment_run()
run.log_model(nlp)
"""
Explanation: Now that our model is good to go, we'll log it to ModelDB so our progress is never lost.
Using Verta's ModelDB Client, we'll create an Experiment Run to encapsulate our work, and log our model as an artifact.
End of explanation
"""
run.log_commit(
commit,
{
'notebook': "notebooks/tweet-analysis",
'hyperparameters': "config/hyperparams",
'training_data': "data/tweets",
'python_env': "env/python",
},
)
"""
Explanation: And finally, we'll link the commit we created earlier to the Experiment Run to complete our logged model version.
End of explanation
"""
|
cipri-tom/Swiss-on-Amazon | filter_swiss_helpful_reviews.ipynb | gpl-3.0 | %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import yaml
"""
Explanation: The following script extracts the (more) helpful reviews from the swiss reviews and saves them locally.
From the extracted reviews it also saves a list with their asin identifiers.
The list of asin identifiers will be later used to to find the average review rating for the respective products.
End of explanation
"""
with open("data/swiss-reviews.txt", 'r') as fp:
swiss_rev = fp.readlines()
len(swiss_rev)
swiss_rev[2]
"""
Explanation: Load the swiss reviews
End of explanation
"""
def filter_helpful(line):
l = line.rstrip('\n')
l = yaml.load(l)
if('helpful' in l.keys()):
if(l['helpful'][1] >= 5):
return True
else:
return False
else:
print("Review does not have helpful score key: "+line)
return False
"""
Explanation: The filter_helpful function keeps only the reviews which had at least 5 flags/votes in the helpfulness field.
This amounts to a subset of around 23000 reviews. A smaller subset of around 10000 reviews was obtained as well by only keeping reviews with 10 flags/votes. The main advantage of the smaller subset is that it contains better quality reviews while its drawback is, of course, the reduced size.
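A quick hypothetical usage example (the review line below is made up) showing how the filter reads the helpful field, which holds [helpful_votes, total_votes]:
sample = '{"asin": "B00000TEST", "helpful": [4, 9], "overall": 5.0}\n'  # fabricated example line
print(filter_helpful(sample))  # True, because the review received 9 >= 5 total votes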
1) Extract the helpful reviews
End of explanation
"""
def get_helpful(data):
res = []
counter = 1
i = 0
for line in data:
i += 1
if(filter_helpful(line)):
if(counter % 1000 == 0):
print("Count "+str(counter)+" / "+str(i))
counter += 1
res.append(line)
return res
swiss_reviews_helpful = get_helpful(swiss_rev)
len(swiss_reviews_helpful)
"""
Explanation: Apply the filter_helpful to each swiss product review
End of explanation
"""
write_file = open('data/swiss-reviews-helpful-correct-bigger.txt', 'w')
for item in swiss_reviews_helpful:
write_file.write(item)
write_file.close()
"""
Explanation: Save the subset with helpful swiss product reviews
End of explanation
"""
with open('data/swiss-reviews-helpful-correct-bigger.txt', 'r') as fp:
swiss_reviews_helpful = fp.readlines()
"""
Explanation: 2) Extract the asins of the products which the helpful reviews correspond to
End of explanation
"""
def filter_asin(line):
l = line.rstrip('\n')
l = yaml.load(l)
if('asin' in l.keys()):
return l['asin']
else:
return ''
helpful_asins = []
counter = 1
for item in swiss_reviews_helpful:
if(counter%500 == 0):
print(counter)
counter += 1
x = filter_asin(item)
if(len(x) > 0):
helpful_asins.append(x)
"""
Explanation: The following function simply extracts the 'asin' from the helpful reviews.
Repetitions of the asins are of no consequence, as the list is just meant to be a sanity check.
End of explanation
"""
import pickle
with open('data/helpful_asins_bigger.pickle', 'wb') as fp:
pickle.dump(helpful_asins, fp)
"""
Explanation: Save the list of asins.
End of explanation
"""
|
simonsfoundation/CaImAn | demos/notebooks/demo_Ring_CNN.ipynb | gpl-2.0 | get_ipython().magic('load_ext autoreload')
get_ipython().magic('autoreload 2')
import glob
import logging
import numpy as np
import os
logging.basicConfig(format=
"%(relativeCreated)12d [%(filename)s:%(funcName)20s():%(lineno)s] [%(process)d] %(message)s",
# filename="/tmp/caiman.log",
level=logging.WARNING)
import caiman as cm
from caiman.source_extraction import cnmf as cnmf
from caiman.utils.utils import download_demo
import matplotlib.pyplot as plt
import bokeh.plotting as bpl
bpl.output_notebook()
"""
Explanation: Example of 1p online analysis using a Ring CNN + OnACID
The demo shows how to perform online analysis on one-photon data using a Ring CNN for extracting the background, followed by processing with the OnACID algorithm. The algorithm relies on the use of a GPU to efficiently estimate and apply the background model, so it is recommended to have access to a GPU when running this notebook.
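As a quick environment check (an addition, not in the original demo), you can confirm that TensorFlow sees a GPU before running the Ring CNN:
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))  # an empty list means CPU-only execution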
End of explanation
"""
fnames=download_demo('blood_vessel_10Hz.mat')
"""
Explanation: First specify the data file(s) to be analyzed
The download_demo method will download the file (if not already present) and store it inside your caiman_data/example_movies folder. You can specify any path to files you want to analyze.
End of explanation
"""
reuse_model = False # set to True to re-use an existing ring model
path_to_model = None # specify a pre-trained model here if needed
gSig = (7, 7) # expected half size of neurons
gnb = 2 # number of background components for OnACID
init_batch = 500 # number of frames for initialization and training
params_dict = {'fnames': fnames,
'var_name_hdf5': 'Y', # name of variable inside mat file where the data is stored
'fr': 10, # frame rate (Hz)
'decay_time': 0.5, # approximate length of transient event in seconds
'gSig': gSig,
'p': 0, # order of AR indicator dynamics
'ring_CNN': True, # SET TO TRUE TO USE RING CNN
'min_SNR': 2.65, # minimum SNR for accepting new components
'SNR_lowest': 0.75, # reject components with SNR below this value
'use_cnn': False, # do not use CNN based test for components
'use_ecc': True, # test eccentricity
'max_ecc': 2.625, # reject components with eccentricity above this value
'rval_thr': 0.70, # correlation threshold for new component inclusion
'rval_lowest': 0.25, # reject components with corr below that value
'ds_factor': 1, # spatial downsampling factor (increases speed but may lose some fine structure)
'nb': gnb,
'motion_correct': False, # Flag for motion correction
'init_batch': init_batch, # number of frames for initialization (presumably from the first file)
'init_method': 'bare',
'normalize': False,
'expected_comps': 1100, # maximum number of expected components used for memory pre-allocation (exaggerate here)
'sniper_mode': False, # flag using a CNN to detect new neurons (o/w space correlation is used)
'dist_shape_update' : True, # flag for updating shapes in a distributed way
'min_num_trial': 5, # number of candidate components per frame
'epochs': 3, # number of total passes over the data
'stop_detection': True, # Run a last epoch without detecting new neurons
'K': 50, # initial number of components
'lr': 6e-4,
'lr_scheduler': [0.9, 6000, 10000],
'pct': 0.01,
'path_to_model': path_to_model, # where the ring CNN model is saved/loaded
'reuse_model': reuse_model # flag for re-using a ring CNN model
}
opts = cnmf.params.CNMFParams(params_dict=params_dict)
"""
Explanation: Set up some parameters
Here we set up some parameters for specifying the ring model and running OnACID. We use the same params object as in batch processing with CNMF.
End of explanation
"""
run_onacid = True
if run_onacid:
cnm = cnmf.online_cnmf.OnACID(params=opts)
cnm.fit_online()
fld_name = os.path.dirname(cnm.params.ring_CNN['path_to_model'])
res_name_nm = os.path.join(fld_name, 'onacid_results_nm.hdf5')
cnm.save(res_name_nm) # save initial results (without any postprocessing)
else:
fld_name = os.path.dirname(path_to_model)
res_name = os.path.join(fld_name, 'onacid_results.hdf5')
cnm = cnmf.online_cnmf.load_OnlineCNMF(res_name)
cnm.params.data['fnames'] = fnames
"""
Explanation: Now run the Ring-CNN + CaImAn online algorithm (OnACID).
The first init_batch frames are used for training the ring-CNN model. Once the model is trained, the background is subtracted and the difference is used for initialization purposes. The initialization method chosen here, bare, will only search for a small number of neurons and is mostly used to initialize the background components. Initialization with the full CNMF can also be used by choosing cnmf.
We first create an OnACID object located in the module online_cnmf and we pass the parameters similarly to the case of batch processing. We then run the algorithm using the fit_online method. We then save the results inside
the folder where the Ring_CNN model is saved.
End of explanation
"""
ds = 10 # plot every ds frames to make more manageable figures
init_batch = 500
dims, T = cnmf.utilities.get_file_size(fnames, var_name_hdf5='Y')
T = np.array(T).sum()
n_epochs = cnm.params.online['epochs']
T_detect = 1e3*np.hstack((np.zeros(init_batch), cnm.t_detect))
T_shapes = 1e3*np.hstack((np.zeros(init_batch), cnm.t_shapes))
T_online = 1e3*np.hstack((np.zeros(init_batch), cnm.t_online)) - T_detect - T_shapes
plt.figure()
plt.stackplot(np.arange(len(T_detect))[::ds], T_online[::ds], T_detect[::ds], T_shapes[::ds],
colors=['tab:red', 'tab:purple', 'tab:brown'])
plt.legend(labels=['process', 'detect', 'shapes'], loc=2)
plt.title('Processing time allocation')
plt.xlabel('Frame #')
plt.ylabel('Processing time [ms]')
max_val = 80
plt.ylim([0, max_val]);
plt.plot([init_batch, init_batch], [0, max_val], '--k')
for i in range(n_epochs - 1):
plt.plot([(i+1)*T, (i+1)*T], [0, max_val], '--k')
plt.xlim([0, n_epochs*T]);
plt.savefig(os.path.join(fld_name, 'time_per_frame_ds.pdf'), bbox_inches='tight', pad_inches=0)
init_batch = 500
plt.figure()
tc_init = cnm.t_init*np.ones(T*n_epochs)
ds = 10
#tc_mot = np.hstack((np.zeros(init_batch), np.cumsum(T_motion)/1000))
tc_prc = np.cumsum(T_online)/1000#np.hstack((np.zeros(init_batch), ))
tc_det = np.cumsum(T_detect)/1000#np.hstack((np.zeros(init_batch), ))
tc_shp = np.cumsum(T_shapes)/1000#np.hstack((np.zeros(init_batch), ))
plt.stackplot(np.arange(len(tc_init))[::ds], tc_init[::ds], tc_prc[::ds], tc_det[::ds], tc_shp[::ds],
colors=['g', 'tab:red', 'tab:purple', 'tab:brown'])
plt.legend(labels=['initialize', 'process', 'detect', 'shapes'], loc=2)
plt.title('Processing time allocation')
plt.xlabel('Frame #')
plt.ylabel('Processing time [s]')
max_val = (tc_prc[-1] + tc_det[-1] + tc_shp[-1] + cnm.t_init)*1.05
for i in range(n_epochs - 1):
plt.plot([(i+1)*T, (i+1)*T], [0, max_val], '--k')
plt.xlim([0, n_epochs*T]);
plt.ylim([0, max_val])
plt.savefig(os.path.join(fld_name, 'time_cumulative_ds.pdf'), bbox_inches='tight', pad_inches=0)
print('Cost of estimating model and running first epoch: {:.2f}s'.format(tc_prc[T] + tc_det[T] + tc_shp[T] + tc_init[T]))
"""
Explanation: Check speed
Create some plots that show the speed per frame and cumulatively
End of explanation
"""
# first compute background summary images
images = cm.load(fnames, var_name_hdf5='Y', subindices=slice(None, None, 2))
cn_filter, pnr = cm.summary_images.correlation_pnr(images, gSig=3, swap_dim=False) # change swap dim if output looks weird, it is a problem with tiffile
plt.figure(figsize=(15, 7))
plt.subplot(1,2,1); plt.imshow(cn_filter); plt.colorbar()
plt.subplot(1,2,2); plt.imshow(pnr); plt.colorbar()
cnm.estimates.plot_contours_nb(img=cn_filter, idx=cnm.estimates.idx_components, line_color='white', thr=0.3)
"""
Explanation: Do some initial plotting
End of explanation
"""
cnm.estimates.nb_view_components(img=cn_filter, denoised_color='red')
"""
Explanation: View components
Now inspect the components extracted by OnACID. Note that if a single pass was used, several components would be non-zero only for part of the time interval, indicating that they were detected online by OnACID.
If you get a data rate error, you can start Jupyter notebooks using:
'jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10'
End of explanation
"""
save_file = True
if save_file:
from caiman.utils.nn_models import create_LN_model
model_LN = create_LN_model(images, shape=opts.data['dims'] + (1,), n_channels=opts.ring_CNN['n_channels'],
width=opts.ring_CNN['width'], use_bias=opts.ring_CNN['use_bias'], gSig=gSig[0],
use_add=opts.ring_CNN['use_add'])
model_LN.load_weights(cnm.params.ring_CNN['path_to_model'])
# Load the data in batches and save them
m = []
saved_files = []
batch_length = 256
for i in range(0, T, batch_length):
images = cm.load(fnames, var_name_hdf5='Y', subindices=slice(i, i + batch_length))
images_filt = np.squeeze(model_LN.predict(np.expand_dims(images, axis=-1)))
temp_file = os.path.join(fld_name, 'pfc_back_removed_' + format(i, '05d') + '.h5')
saved_files.append(temp_file)
m = cm.movie(np.maximum(images - images_filt, 0))
m.save(temp_file)
else:
saved_files = glob.glob(os.path.join(fld_name, 'pfc_back_removed_*'))
saved_files.sort()
fname_mmap = cm.save_memmap([saved_files], order='C', border_to_0=0)
Yr, dims, T = cm.load_memmap(fname_mmap)
images_mmap = Yr.T.reshape((T,) + dims, order='F')
"""
Explanation: Load ring model to filter the data
Filter the data with the learned Ring CNN model and create a memory-mapped file with the background-subtracted data. We will use this to run the quality tests and screen for false positive components.
End of explanation
"""
cnm.params.merging['merge_thr'] = 0.7
cnm.estimates.c1 = np.zeros(cnm.estimates.A.shape[-1])
cnm.estimates.bl = np.zeros(cnm.estimates.A.shape[-1])
cnm.estimates.neurons_sn = np.zeros(cnm.estimates.A.shape[-1])
cnm.estimates.g = None #np.ones((cnm.estimates.A.shape[-1], 1))*.9
cnm.estimates.merge_components(Yr, cnm.params)
"""
Explanation: Merge components
End of explanation
"""
cnm.params.quality
cnm.estimates.evaluate_components(imgs=images_mmap, params=cnm.params)
cnm.estimates.plot_contours_nb(img=cn_filter, idx=cnm.estimates.idx_components, line_color='white')
cnm.estimates.nb_view_components(idx=cnm.estimates.idx_components, img=cn_filter)
"""
Explanation: Evaluate components and compare again
We run the component evaluation tests to screen for false positive components.
End of explanation
"""
cnmfe_results = download_demo('online_vs_offline.npz')
locals().update(np.load(cnmfe_results, allow_pickle=True))
A_patch_good = A_patch_good.item()
estimates_gt = cnmf.estimates.Estimates(A=A_patch_good, C=C_patch_good, dims=dims)
maxthr=0.01
cnm.estimates.A_thr=None
cnm.estimates.threshold_spatial_components(maxthr=maxthr)
estimates_gt.A_thr=None
estimates_gt.threshold_spatial_components(maxthr=maxthr*10)
min_size = np.pi*(gSig[0]/1.5)**2
max_size = np.pi*(gSig[0]*1.5)**2
ntk = cnm.estimates.remove_small_large_neurons(min_size_neuro=min_size, max_size_neuro=2*max_size)
gtk = estimates_gt.remove_small_large_neurons(min_size_neuro=min_size, max_size_neuro=2*max_size)
m1, m2, nm1, nm2, perf = cm.base.rois.register_ROIs(estimates_gt.A_thr[:, estimates_gt.idx_components],
cnm.estimates.A_thr[:, cnm.estimates.idx_components],
dims, align_flag=False, thresh_cost=.7, plot_results=True,
Cn=cn_filter, enclosed_thr=None)[:-1]
"""
Explanation: Compare against CNMF-E results
We download the results of CNMF-E on the same dataset and compare.
End of explanation
"""
for k, v in perf.items():
print(k + ':', '%.4f' % v, end=' ')
"""
Explanation: Print performance results
End of explanation
"""
res_name = os.path.join(fld_name, 'onacid_results.hdf5')
cnm.save(res_name)
"""
Explanation: Save the results
End of explanation
"""
import matplotlib.lines as mlines
lp, hp = np.nanpercentile(cn_filter, [5, 98])
A_onacid = cnm.estimates.A_thr.toarray().copy()
A_onacid /= A_onacid.max(0)
A_TP = estimates_gt.A[:, m1].toarray() #cnm.estimates.A[:, cnm.estimates.idx_components[m2]].toarray()
A_TP = A_TP.reshape(dims + (-1,), order='F').transpose(2,0,1)
A_FN = estimates_gt.A[:, nm1].toarray()
A_FN = A_FN.reshape(dims + (-1,), order='F').transpose(2,0,1)
A_FP = A_onacid[:,cnm.estimates.idx_components[nm2]]
A_FP = A_FP.reshape(dims + (-1,), order='F').transpose(2,0,1)
plt.figure(figsize=(15, 12))
plt.imshow(cn_filter, vmin=lp, vmax=hp, cmap='viridis')
plt.colorbar();
for aa in A_TP:
plt.contour(aa, [0.05], colors='k');
for aa in A_FN:
plt.contour(aa, [0.05], colors='r');
for aa in A_FP:
plt.contour(aa, [0.25], colors='w');
cl = ['k', 'r', 'w']
lb = ['both', 'CNMF-E only', 'ring CNN only']
day = [mlines.Line2D([], [], color=cl[i], label=lb[i]) for i in range(3)]
plt.legend(handles=day, loc=3)
plt.axis('off');
plt.margins(0, 0);
plt.savefig(os.path.join(fld_name, 'ring_CNN_contours_gSig_3.pdf'), bbox_inches='tight', pad_inches=0)
A_rej = cnm.estimates.A[:, cnm.estimates.idx_components_bad].toarray()
A_rej = A_rej.reshape(dims + (-1,), order='F').transpose(2,0,1)
plt.figure(figsize=(15, 15))
plt.imshow(cn_filter, vmin=lp, vmax=hp, cmap='viridis')
plt.title('Rejected Components')
for aa in A_rej:
plt.contour(aa, [0.05], colors='w');
"""
Explanation: Make some plots
End of explanation
"""
from caiman.utils.nn_models import create_LN_model
model_LN = create_LN_model(images, shape=opts.data['dims'] + (1,), n_channels=opts.ring_CNN['n_channels'],
width=opts.ring_CNN['width'], use_bias=opts.ring_CNN['use_bias'], gSig=gSig[0],
use_add=opts.ring_CNN['use_add'])
model_LN.load_weights(cnm.params.ring_CNN['path_to_model'])
W = model_LN.get_weights()
plt.figure(figsize=(10, 10))
plt.subplot(2,2,1); plt.imshow(np.squeeze(W[0][:,:,:,0])); plt.colorbar(); plt.title('Ring Kernel 1')
plt.subplot(2,2,2); plt.imshow(np.squeeze(W[0][:,:,:,1])); plt.colorbar(); plt.title('Ring Kernel 2')
plt.subplot(2,2,3); plt.imshow(np.squeeze(W[-1][:,:,0])); plt.colorbar(); plt.title('Multiplicative Layer 1')
plt.subplot(2,2,4); plt.imshow(np.squeeze(W[-1][:,:,1])); plt.colorbar(); plt.title('Multiplicative Layer 2');
"""
Explanation: Show the learned filters
End of explanation
"""
m1 = cm.load(fnames, var_name_hdf5='Y') # original data
m2 = cm.load(fname_mmap) # background subtracted data
m3 = m1 - m2 # estimated background
m4 = cm.movie(cnm.estimates.A[:,cnm.estimates.idx_components].dot(cnm.estimates.C[cnm.estimates.idx_components])).reshape(dims + (T,)).transpose(2,0,1)
# estimated components
nn = 0.01
mm = 1 - nn/4 # normalize movies by quantiles
m1 = (m1 - np.quantile(m1[:1000], nn))/(np.quantile(m1[:1000], mm) - np.quantile(m1[:1000], nn))
m2 = (m2 - np.quantile(m2[:1000], nn))/(np.quantile(m2[:1000], mm) - np.quantile(m2[:1000], nn))
m3 = (m3 - np.quantile(m3[:1000], nn))/(np.quantile(m3[:1000], mm) - np.quantile(m3[:1000], nn))
m4 = (m4 - np.quantile(m4[:1000], nn))/(np.quantile(m4[:1000], mm) - np.quantile(m4[:1000], nn))
m = cm.concatenate((cm.concatenate((m1.transpose(0,2,1), m3.transpose(0,2,1)), axis=2),
cm.concatenate((m2.transpose(0,2,1), m4), axis=2)), axis=1)
m[:3000].play(magnification=2, q_min=1, plot_text=True,
save_movie=True, movie_name=os.path.join(fld_name, 'movie.avi'))
"""
Explanation: Make a movie
End of explanation
"""
|
Kaggle/learntools | notebooks/deep_learning_intro/raw/tut3.ipynb | apache-2.0 | #$HIDE_INPUT$
import pandas as pd
from IPython.display import display
red_wine = pd.read_csv('../input/dl-course-data/red-wine.csv')
# Create training and validation splits
df_train = red_wine.sample(frac=0.7, random_state=0)
df_valid = red_wine.drop(df_train.index)
display(df_train.head(4))
# Scale to [0, 1]
max_ = df_train.max(axis=0)
min_ = df_train.min(axis=0)
df_train = (df_train - min_) / (max_ - min_)
df_valid = (df_valid - min_) / (max_ - min_)
# Split features and target
X_train = df_train.drop('quality', axis=1)
X_valid = df_valid.drop('quality', axis=1)
y_train = df_train['quality']
y_valid = df_valid['quality']
"""
Explanation: Introduction
In the first two lessons, we learned how to build fully-connected networks out of stacks of dense layers. When first created, all of the network's weights are set randomly -- the network doesn't "know" anything yet. In this lesson we're going to see how to train a neural network; we're going to see how neural networks learn.
As with all machine learning tasks, we begin with a set of training data. Each example in the training data consists of some features (the inputs) together with an expected target (the output). Training the network means adjusting its weights in such a way that it can transform the features into the target. In the 80 Cereals dataset, for instance, we want a network that can take each cereal's 'sugar', 'fiber', and 'protein' content and produce a prediction for that cereal's 'calories'. If we can successfully train a network to do that, its weights must represent in some way the relationship between those features and that target as expressed in the training data.
In addition to the training data, we need two more things:
- A "loss function" that measures how good the network's predictions are.
- An "optimizer" that can tell the network how to change its weights.
The Loss Function
We've seen how to design an architecture for a network, but we haven't seen how to tell a network what problem to solve. This is the job of the loss function.
The loss function measures the disparity between the target's true value and the value the model predicts.
Different problems call for different loss functions. We have been looking at regression problems, where the task is to predict some numerical value -- calories in 80 Cereals, rating in Red Wine Quality. Other regression tasks might be predicting the price of a house or the fuel efficiency of a car.
A common loss function for regression problems is the mean absolute error or MAE. For each prediction y_pred, MAE measures the disparity from the true target y_true by an absolute difference abs(y_true - y_pred).
The total MAE loss on a dataset is the mean of all these absolute differences.
<figure style="padding: 1em;">
<img src="https://i.imgur.com/VDcvkZN.png" width="500" alt="A graph depicting error bars from data points to the fitted line..">
<figcaption style="textalign: center; font-style: italic"><center>The mean absolute error is the average length between the fitted curve and the data points.
</center></figcaption>
</figure>
Besides MAE, other loss functions you might see for regression problems are the mean-squared error (MSE) or the Huber loss (both available in Keras).
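To make those definitions concrete, here is a tiny sketch (toy numbers, not part of the lesson's data) that computes MAE and MSE by hand:
import numpy as np
y_true = np.array([3.0, 5.0, 2.5])
y_pred = np.array([2.5, 5.0, 4.0])
mae = np.mean(np.abs(y_true - y_pred))    # (0.5 + 0.0 + 1.5) / 3 = 0.667
mse = np.mean((y_true - y_pred) ** 2)     # (0.25 + 0.0 + 2.25) / 3 = 0.833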
During training, the model will use the loss function as a guide for finding the correct values of its weights (lower loss is better). In other words, the loss function tells the network its objective.
The Optimizer - Stochastic Gradient Descent
We've described the problem we want the network to solve, but now we need to say how to solve it. This is the job of the optimizer. The optimizer is an algorithm that adjusts the weights to minimize the loss.
Virtually all of the optimization algorithms used in deep learning belong to a family called stochastic gradient descent. They are iterative algorithms that train a network in steps. One step of training goes like this:
1. Sample some training data and run it through the network to make predictions.
2. Measure the loss between the predictions and the true values.
3. Finally, adjust the weights in a direction that makes the loss smaller.
Then just do this over and over until the loss is as small as you like (or until it won't decrease any further.)
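For illustration only (assumed toy numbers, not the lesson's model), one SGD step for a single-feature linear model y = w * x + b with MAE loss could look like this:
w, b, learning_rate = 0.0, 0.0, 0.1
x_batch, y_batch = [1.0, 2.0], [3.0, 5.0]   # one minibatch of features and targets
# gradient of mean(|w*x + b - y|) with respect to w and b, averaged over the minibatch
grad_w = sum((1 if w * x + b > y else -1) * x for x, y in zip(x_batch, y_batch)) / len(x_batch)
grad_b = sum((1 if w * x + b > y else -1) for x, y in zip(x_batch, y_batch)) / len(x_batch)
w -= learning_rate * grad_w   # shift the weights a little in the direction
b -= learning_rate * grad_b   # that makes the loss smaller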
<figure style="padding: 1em;">
<img src="https://i.imgur.com/rFI1tIk.gif" width="1600" alt="Fitting a line batch by batch. The loss decreases and the weights approach their true values.">
<figcaption style="textalign: center; font-style: italic"><center>Training a neural network with Stochastic Gradient Descent.
</center></figcaption>
</figure>
Each iteration's sample of training data is called a minibatch (or often just "batch"), while a complete round of the training data is called an epoch. The number of epochs you train for is how many times the network will see each training example.
The animation shows the linear model from Lesson 1 being trained with SGD. The pale red dots depict the entire training set, while the solid red dots are the minibatches. Every time SGD sees a new minibatch, it will shift the weights (w the slope and b the y-intercept) toward their correct values on that batch. Batch after batch, the line eventually converges to its best fit. You can see that the loss gets smaller as the weights get closer to their true values.
Learning Rate and Batch Size
Notice that the line only makes a small shift in the direction of each batch (instead of moving all the way). The size of these shifts is determined by the learning rate. A smaller learning rate means the network needs to see more minibatches before its weights converge to their best values.
The learning rate and the size of the minibatches are the two parameters that have the largest effect on how the SGD training proceeds. Their interaction is often subtle and the right choice for these parameters isn't always obvious. (We'll explore these effects in the exercise.)
Fortunately, for most work it won't be necessary to do an extensive hyperparameter search to get satisfactory results. Adam is an SGD algorithm that has an adaptive learning rate that makes it suitable for most problems without any parameter tuning (it is "self tuning", in a sense). Adam is a great general-purpose optimizer.
Adding the Loss and Optimizer
After defining a model, you can add a loss function and optimizer with the model's compile method:
model.compile(
optimizer="adam",
loss="mae",
)
Notice that we are able to specify the loss and optimizer with just a string. You can also access these directly through the Keras API -- if you wanted to tune parameters, for instance -- but for us, the defaults will work fine.
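For instance (just a sketch of that Keras API usage, assuming keras is imported as in the cells below), you could pass configured objects instead of strings:
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),
    loss=keras.losses.MeanAbsoluteError(),
)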
<blockquote style="margin-right:auto; margin-left:auto; background-color: #ebf9ff; padding: 1em; margin:24px;">
<strong>What's In a Name?</strong><br>
The <strong>gradient</strong> is a vector that tells us in what direction the weights need to go. More precisely, it tells us how to change the weights to make the loss change <em>fastest</em>. We call our process gradient <strong>descent</strong> because it uses the gradient to <em>descend</em> the loss curve towards a minimum. <strong>Stochastic</strong> means "determined by chance." Our training is <em>stochastic</em> because the minibatches are <em>random samples</em> from the dataset. And that's why it's called SGD!
</blockquote>
Example - Red Wine Quality
Now we know everything we need to start training deep learning models. So let's see it in action! We'll use the Red Wine Quality dataset.
This dataset consists of physiochemical measurements from about 1600 Portuguese red wines. Also included is a quality rating for each wine from blind taste-tests. How well can we predict a wine's perceived quality from these measurements?
We've put all of the data preparation into this next hidden cell. It's not essential to what follows so feel free to skip it. One thing you might note for now though is that we've rescaled each feature to lie in the interval $[0, 1]$. As we'll discuss more in Lesson 5, neural networks tend to perform best when their inputs are on a common scale.
End of explanation
"""
print(X_train.shape)
"""
Explanation: How many inputs should this network have? We can discover this by looking at the number of columns in the data matrix. Be sure not to include the target ('quality') here -- only the input features.
End of explanation
"""
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
layers.Dense(512, activation='relu', input_shape=[11]),
layers.Dense(512, activation='relu'),
layers.Dense(512, activation='relu'),
layers.Dense(1),
])
"""
Explanation: Eleven columns means eleven inputs.
We've chosen a three-layer network with over 1500 neurons. This network should be capable of learning fairly complex relationships in the data.
End of explanation
"""
model.compile(
optimizer='adam',
loss='mae',
)
"""
Explanation: Deciding the architecture of your model should be part of a process. Start simple and use the validation loss as your guide. You'll learn more about model development in the exercises.
After defining the model, we compile in the optimizer and loss function.
End of explanation
"""
history = model.fit(
X_train, y_train,
validation_data=(X_valid, y_valid),
batch_size=256,
epochs=10,
)
"""
Explanation: Now we're ready to start the training! We've told Keras to feed the optimizer 256 rows of the training data at a time (the batch_size) and to do that 10 times all the way through the dataset (the epochs).
End of explanation
"""
import pandas as pd
# convert the training history to a dataframe
history_df = pd.DataFrame(history.history)
# use Pandas native plot method
history_df['loss'].plot();
"""
Explanation: You can see that Keras will keep you updated on the loss as the model trains.
Often, a better way to view the loss though is to plot it. The fit method in fact keeps a record of the loss produced during training in a History object. We'll convert the data to a Pandas dataframe, which makes the plotting easy.
End of explanation
"""
|
GoogleCloudPlatform/mlops-on-gcp | model_serving/caip-load-testing/03-analyze-results.ipynb | apache-2.0 | import time
from datetime import datetime
from typing import List
import numpy as np
import pandas as pd
import google.auth
from google.cloud import logging_v2
from google.cloud.monitoring_dashboard.v1 import DashboardsServiceClient
from google.cloud.logging_v2 import MetricsServiceV2Client
from google.cloud.monitoring_v3.query import Query
from google.cloud.monitoring_v3 import MetricServiceClient
import matplotlib.pyplot as plt
"""
Explanation: Analyzing Locust Load Testing Results
This Notebook demonstrates how to analyze AI Platform Prediction load testing runs using metrics captured in Cloud Monitoring.
This Notebook builds on the 02-perf-testing.ipynb notebook, which shows how to configure and run load tests against AI Platform Prediction using Locust.io. The outlined testing process results in a Pandas dataframe that aggregates the standard AI Platform Prediction metrics with a set of custom, log-based metrics generated from log entries captured by the Locust testing script.
The Notebook covers the following steps:
1. Retrieve and consolidate test results from Cloud Monitoring
2. Analyze and visualize utilization and latency results
Setup
This notebook was tested on AI Platform Notebooks using the standard TF 2.2 image.
Import libraries
End of explanation
"""
PROJECT_ID = '[your-project-id]' # Set your project Id
MODEL_NAME = 'resnet_classifier'
MODEL_VERSION = 'v1'
LOG_NAME = 'locust' # Set your log name
TEST_ID = 'test-20200829-190943' # Set your test Id
TEST_START_TIME = datetime.fromisoformat('2020-08-28T21:30:00-00:00') # Set your test start time
TEST_END_TIME = datetime.fromisoformat('2020-08-29T22:00:00-00:00') # Set your test end time
"""
Explanation: Configure GCP environment settings
End of explanation
"""
creds , _ = google.auth.default()
client = MetricServiceClient(credentials=creds)
project_path = client.project_path(PROJECT_ID)
filter = 'metric.type=starts_with("ml.googleapis.com/prediction")'
for descriptor in client.list_metric_descriptors(project_path, filter_=filter):
print(descriptor.type)
"""
Explanation: 1. Retrieve and consolidate test results
Locust's web interface, along with a Cloud Monitoring dashboard, provides a cursory view into the performance of a tested AI Platform Prediction model version. A more thorough analysis can be performed by consolidating the metrics collected during a test and using data analytics and visualization tools.
In this section, you will retrieve the metrics captured in Cloud Monitoring and consolidate them into a single Pandas dataframe.
1.1 List available AI Platform Prediction metrics
End of explanation
"""
filter = 'metric.type=starts_with("logging.googleapis.com/user")'
for descriptor in client.list_metric_descriptors(project_path, filter_=filter):
print(descriptor.type)
"""
Explanation: 1.2. List custom log based metrics
End of explanation
"""
def retrieve_metrics(client, project_id, start_time, end_time, model, model_version, test_id, log_name):
"""
Retrieves test metrics from Cloud Monitoring.
"""
def _get_aipp_metric(metric_type: str, labels: List[str]=[], metric_name=None)-> pd.DataFrame:
"""
Retrieves a specified AIPP metric.
"""
query = Query(client, project_id, metric_type=metric_type)
query = query.select_interval(end_time, start_time)
query = query.select_resources(model_id=model)
query = query.select_resources(version_id=model_version)
if metric_name:
labels = ['metric'] + labels
df = query.as_dataframe(labels=labels)
if not df.empty:
if metric_name:
df.columns.set_levels([metric_name], level=0, inplace=True)
df = df.set_index(df.index.round('T'))
return df
def _get_locust_metric(metric_type: str, labels: List[str]=[], metric_name=None)-> pd.DataFrame:
"""
Retrieves a specified custom log-based metric.
"""
query = Query(client, project_id, metric_type=metric_type)
query = query.select_interval(end_time, start_time)
query = query.select_metrics(log=log_name)
query = query.select_metrics(test_id=test_id)
if metric_name:
labels = ['metric'] + labels
df = query.as_dataframe(labels=labels)
if not df.empty:
if metric_name:
df.columns.set_levels([metric_name], level=0, inplace=True)
df = df.apply(lambda row: [metric.mean for metric in row])
df = df.set_index(df.index.round('T'))
return df
# Retrieve GPU duty cycle
metric_type = 'ml.googleapis.com/prediction/online/accelerator/duty_cycle'
metric = _get_aipp_metric(metric_type, ['replica_id', 'signature'], 'duty_cycle')
df = metric
# Retrieve CPU utilization
metric_type = 'ml.googleapis.com/prediction/online/cpu/utilization'
metric = _get_aipp_metric(metric_type, ['replica_id', 'signature'], 'cpu_utilization')
if not metric.empty:
df = df.merge(metric, how='outer', right_index=True, left_index=True)
# Retrieve prediction count
metric_type = 'ml.googleapis.com/prediction/prediction_count'
metric = _get_aipp_metric(metric_type, ['replica_id', 'signature'], 'prediction_count')
if not metric.empty:
df = df.merge(metric, how='outer', right_index=True, left_index=True)
# Retrieve responses per second
metric_type = 'ml.googleapis.com/prediction/response_count'
metric = _get_aipp_metric(metric_type, ['replica_id', 'signature'], 'response_rate')
if not metric.empty:
metric = (metric/60).round(2)
df = df.merge(metric, how='outer', right_index=True, left_index=True)
# Retrieve backend latencies
metric_type = 'ml.googleapis.com/prediction/latencies'
metric = _get_aipp_metric(metric_type, ['latency_type', 'replica_id', 'signature'])
if not metric.empty:
metric = metric.apply(lambda row: [round(latency.mean/1000,1) for latency in row])
metric.columns.set_names(['metric', 'replica_id', 'signature'], inplace=True)
level_values = ['Latency: ' + value for value in metric.columns.get_level_values(level=0)]
metric.columns.set_levels(level_values, level=0, inplace=True)
df = df.merge(metric, how='outer', right_index=True, left_index=True)
# Retrieve Locust latency
metric_type = 'logging.googleapis.com/user/locust_latency'
metric = _get_locust_metric(metric_type, ['replica_id', 'signature'], 'Latency: client')
if not metric.empty:
metric = metric.round(2).replace([0], np.nan)
df = df.merge(metric, how='outer', right_index=True, left_index=True)
# Retrieve Locust user count
metric_type = 'logging.googleapis.com/user/locust_users'
metric = _get_locust_metric(metric_type, ['replica_id', 'signature'], 'User count')
if not metric.empty:
metric = metric.round()
df = df.merge(metric, how='outer', right_index=True, left_index=True)
# Retrieve Locust num_failures
metric_type = 'logging.googleapis.com/user/num_failures'
metric = _get_locust_metric(metric_type, ['replica_id', 'signature'], 'Num of failures')
if not metric.empty:
metric = metric.round()
df = df.merge(metric, how='outer', right_index=True, left_index=True)
    # Retrieve Locust num_requests
metric_type = 'logging.googleapis.com/user/num_requests'
metric = _get_locust_metric(metric_type, ['replica_id', 'signature'], 'Num of requests')
if not metric.empty:
metric = metric.round()
df = df.merge(metric, how='outer', right_index=True, left_index=True)
return df
test_result = retrieve_metrics(
client,
PROJECT_ID,
TEST_START_TIME,
TEST_END_TIME,
MODEL_NAME,
MODEL_VERSION,
TEST_ID,
LOG_NAME
)
test_result.head().T
"""
Explanation: 1.3. Retrieve test metrics
Define a helper function that retrieves test metrics from Cloud Monitoring
End of explanation
"""
gpu_utilization_results = test_result['duty_cycle']
gpu_utilization_results.columns = gpu_utilization_results.columns.get_level_values(0)
ax = gpu_utilization_results.plot(figsize=(14, 9), legend=True)
ax.set_xlabel('Time', fontsize=16)
ax.set_ylabel('Utilization ratio', fontsize=16)
_ = ax.set_title("GPU Utilization", fontsize=20)
"""
Explanation: The retrieved dataframe uses hierarchical indexing for column names. The reason is that some metrics contain multiple time series. For example, the GPU duty_cycle metric includes a time series of measures for each GPU used in the deployment (denoted as replica_id). The top level of the column index is the metric name. The second level is a replica_id. The third level is the signature of a model.
All metrics are aligned on the same timeline.
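For example (standard pandas MultiIndex access, shown here only as a sketch), a single metric or replica can be pulled out of the hierarchical columns like this:
duty_cycle = test_result['duty_cycle']                 # sub-frame for one metric; remaining levels are (replica_id, signature)
replica_ids = duty_cycle.columns.get_level_values(0)   # replica ids present for this metric
one_replica = duty_cycle[replica_ids[0]]               # time series for a single replica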
2. Analyzing and Visualizing test results
In the context of our scenario, the key concern is GPU utilization at various levels of throughput and latency. The primary metric exposed by AI Platform Prediction to monitor GPU utilization is duty cycle. This metric captures the average fraction of time over a 60-second period during which the accelerator(s) were actively processing.
2.1. GPU utilization
End of explanation
"""
cpu_utilization_results = test_result['cpu_utilization']
cpu_utilization_results.columns = cpu_utilization_results.columns.get_level_values(0)
ax = cpu_utilization_results.plot(figsize=(14, 9), legend=True)
ax.set_xlabel('Time', fontsize=16)
ax.set_ylabel('Utilization ratio', fontsize=16)
_ = ax.set_title("CPU Utilization", fontsize=20)
"""
Explanation: 2.2. CPU utilization
End of explanation
"""
latency_results = test_result[['Latency: model', 'Latency: client']]
latency_results.columns = latency_results.columns.get_level_values(0)
ax = latency_results.plot(figsize=(14, 9), legend=True)
ax.set_xlabel('Time', fontsize=16)
ax.set_ylabel('milisecond', fontsize=16)
_ = ax.set_title("Latency", fontsize=20)
"""
Explanation: 2.3. Latency
End of explanation
"""
throughput_results = test_result[['response_rate', 'User count']]
throughput_results.columns = throughput_results.columns.get_level_values(0)
ax = throughput_results.plot(figsize=(14, 9), legend=True)
ax.set_xlabel('Time', fontsize=16)
ax.set_ylabel('Count', fontsize=16)
_ = ax.set_title("Response Rate vs User Count", fontsize=20)
"""
Explanation: 2.4. Request throughput
We are going to use the response_rate metric, which tracks the number of responses returned by AI Platform Prediction over a 1-minute interval.
End of explanation
"""
logging_client = MetricsServiceV2Client(credentials=creds)
parent = logging_client.project_path(PROJECT_ID)
for element in logging_client.list_log_metrics(parent):
metric_path = logging_client.metric_path(PROJECT_ID, element.name)
logging_client.delete_log_metric(metric_path)
print("Deleted metric: ", metric_path)
display_name = 'AI Platform Prediction and Locust'
dashboard_service_client = DashboardsServiceClient(credentials=creds)
parent = 'projects/{}'.format(PROJECT_ID)
for dashboard in dashboard_service_client.list_dashboards(parent):
if dashboard.display_name == display_name:
dashboard_service_client.delete_dashboard(dashboard.name)
print("Deleted dashboard:", dashboard.name)
"""
Explanation: Cleaning up: delete the log-based metrics and dashboard
End of explanation
"""
|
Neuroglycerin/neukrill-net-work | notebooks/augmentation/Preliminary Online Augmentation Results.ipynb | mit | import pylearn2.utils
import pylearn2.config
import theano
import neukrill_net.dense_dataset
import neukrill_net.utils
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import holoviews as hl
%load_ext holoviews.ipython
import sklearn.metrics
cd ..
settings = neukrill_net.utils.Settings("settings.json")
run_settings = neukrill_net.utils.load_run_settings(
"run_settings/replicate_8aug.json", settings, force=True)
model = pylearn2.utils.serial.load(run_settings['alt_picklepath'])
c = 'train_objective'
channel = model.monitor.channels[c]
"""
Explanation: The following are the results we've got from online augmentation so far. Some bugs have been fixed by Scott since then so these might be redundant. If they're not redundant then they are very bad.
Loading the pickle
End of explanation
"""
plt.title(c)
plt.plot(channel.example_record,channel.val_record)
c = 'train_y_nll'
channel = model.monitor.channels[c]
plt.title(c)
plt.plot(channel.example_record,channel.val_record)
def plot_monitor(c = 'valid_y_nll'):
channel = model.monitor.channels[c]
plt.title(c)
plt.plot(channel.example_record,channel.val_record)
return None
plot_monitor()
plot_monitor(c="valid_objective")
"""
Explanation: Replicating 8aug
The DensePNGDataset run with 8 augmentations got us most of the way to our best score in one go. If we can replicate that result with online augmentation then we can be pretty confident that online augmentation is a good idea. Unfortunately, it looks like we can't:
End of explanation
"""
%run check_test_score.py run_settings/replicate_8aug.json
"""
Explanation: Would actually like to know what kind of score this model gets on the check_test_score script.
End of explanation
"""
run_settings = neukrill_net.utils.load_run_settings(
"run_settings/online_manyaug.json", settings, force=True)
model = pylearn2.utils.serial.load(run_settings['alt_picklepath'])
plot_monitor(c="valid_objective")
"""
Explanation: So we can guess that the log loss score we're seeing is in fact correct. There are definitely some bugs in the ListDataset code.
Many Augmentations
We want to be able to use online augmentations to run large combinations of different augmentations on the images. This model had almost everything turned on, a little:
End of explanation
"""
settings = neukrill_net.utils.Settings("settings.json")
run_settings = neukrill_net.utils.load_run_settings(
"run_settings/alexnet_based_onlineaug.json", settings, force=True)
model = pylearn2.utils.serial.load(run_settings['pickle abspath'])
plot_monitor(c="train_y_nll")
plot_monitor(c="valid_y_nll")
plot_monitor(c="train_objective")
plot_monitor(c="valid_objective")
"""
Explanation: Looks like it's completely incapable of learning.
These problems suggest that the augmentation might be garbling the images; making them useless for learning from. Or worse, garbling the order so each image doesn't correspond to its label.
Transformer Results
We also have results from a network trained using a Transformer dataset, which is how online augmentation is supposed to be supported in Pylearn2.
End of explanation
"""
|
AEW2015/PYNQ_PR_Overlay | Pynq-Z1/notebooks/examples/tracebuffer_i2c.ipynb | bsd-3-clause | from pprint import pprint
from time import sleep
from pynq import PL
from pynq import Overlay
from pynq.drivers import Trace_Buffer
from pynq.iop import Pmod_TMP2
from pynq.iop import PMODA
from pynq.iop import PMODB
from pynq.iop import ARDUINO
ol = Overlay("base.bit")
ol.download()
pprint(PL.ip_dict)
"""
Explanation: Trace Buffer - Tracing IIC Transactions
The Trace_Buffer class can monitor the waveform and transactions on the PMODA, PMODB, and ARDUINO connectors.
This demo shows how to use this class to track IIC transactions. For this demo, users have to connect the Pmod TMP2 sensor to PMODA.
Step 1: Overlay Management
Users have to import all the necessary classes. Make sure to use the right bitstream.
End of explanation
"""
tmp2 = Pmod_TMP2(PMODA)
tmp2.set_log_interval_ms(1)
"""
Explanation: Step 2: Instantiating Temperature Sensor
Although this demo can also be done on PMODB, we use PMODA in this demo.
Set the log interval to be 1ms. This means the IO Processor (IOP) will read temperature values every 1ms.
End of explanation
"""
tr_buf = Trace_Buffer(PMODA,"i2c",samplerate=1000000)
# Start the trace buffer
tr_buf.start()
# Issue reads for 1 second
tmp2.start_log()
sleep(1)
tmp2_log = tmp2.get_log()
# Stop the trace buffer
tr_buf.stop()
"""
Explanation: Step 3: Tracking Transactions
Instantiate the trace buffer with the IIC protocol. The sample rate is set to 1MHz. Although the IIC clock is only 100kHz, we still have to use a higher sample rate to keep track of the IIC control signals from the IOP.
After starting the trace buffer DMA, also start to issue IIC reads for 1 second. Then stop the trace buffer DMA.
End of explanation
"""
# Configuration for PMODA
start = 600
stop = 10000
tri_sel=[0x40000,0x80000]
tri_0=[0x4,0x8]
tri_1=[0x400,0x800]
mask = 0x0
# Parsing and decoding
tr_buf.parse("i2c_trace.csv",
start,stop,mask,tri_sel,tri_0,tri_1)
tr_buf.set_metadata(['SDA','SCL'])
tr_buf.decode("i2c_trace.pd")
"""
Explanation: Step 4: Parsing and Decoding Transactions
The trace buffer object is able to parse the transactions into a *.csv file (saved into the same folder as this script). The input arguments for the parsing method are:
* start : the starting sample number of the trace.
* stop : the stopping sample number of the trace.
* tri_sel: masks for tri-state selection bits.
* tri_0: masks for pins selected when the corresponding tri_sel = 0.
* tri_1: masks for pins selected when the corresponding tri_sel = 1.
* mask: mask for pins selected always.
For PMODB, the configuration of the masks can be:
* tri_sel=[0x40000<<32,0x80000<<32]
* tri_0=[0x4<<32,0x8<<32]
* tri_1=[0x400<<32,0x800<<32]
* mask = 0x0
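Putting those values together, a PMODB run might look like the sketch below (the names tr_buf_b and i2c_trace_b.csv are illustrative only, and the argument order mirrors the PMODA call in the code above):
tr_buf_b = Trace_Buffer(PMODB, "i2c", samplerate=1000000)
tri_sel_b = [0x40000<<32, 0x80000<<32]
tri_0_b = [0x4<<32, 0x8<<32]
tri_1_b = [0x400<<32, 0x800<<32]
tr_buf_b.parse("i2c_trace_b.csv", start, stop, 0x0, tri_sel_b, tri_0_b, tri_1_b)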
Then the trace buffer object can also decode the transactions using the open-source sigrok decoders. The decoded file (*.pd) is saved into the same folder as this script.
Reference:
https://sigrok.org/wiki/Main_Page
End of explanation
"""
s0 = 1
s1 = 5000
tr_buf.display(s0,s1)
"""
Explanation: Step 5: Displaying the Result
The final waveform and decoded transactions are shown using the open-source wavedrom library. The two input arguments (s0 and s1) indicate the starting and stopping location where the waveform is shown.
The valid range for s0 and s1 is: 0 < s0 < s1 < (stop-start), where start and stop are defined in the last step.
Reference:
https://www.npmjs.com/package/wavedrom
End of explanation
"""
|
rnder/data-science-from-scratch | notebook/ch21_network_analysis.ipynb | unlicense | from __future__ import division
import math, random, re
from collections import defaultdict, Counter, deque
from linear_algebra import dot, get_row, get_column, make_matrix, magnitude, scalar_multiply, shape, distance
from functools import partial
users = [
{ "id": 0, "name": "Hero" },
{ "id": 1, "name": "Dunn" },
{ "id": 2, "name": "Sue" },
{ "id": 3, "name": "Chi" },
{ "id": 4, "name": "Thor" },
{ "id": 5, "name": "Clive" },
{ "id": 6, "name": "Hicks" },
{ "id": 7, "name": "Devin" },
{ "id": 8, "name": "Kate" },
{ "id": 9, "name": "Klein" }
]
"""
Explanation: Chapter 21 Network Analysis
Many data problems can usefully be viewed in terms of networks, which consist of nodes and the edges that connect them.
For example, on Facebook the users are the nodes and their friendships are the edges.
On the web, each web page is a node and each hyperlink between pages is an edge.
Facebook friendship is mutual: if I am friends with you, then you are necessarily friends with me.
In that case we say the edges are undirected.
Hyperlinks are not: my homepage may link to the homepage of the National Assembly of Korea, while the National Assembly's homepage has no link back to mine.
Because networks like these have direction, they are called directed networks.
21.1 Betweenness Centrality
In Chapter 1 we found the key connectors of the DataSciencester network by counting each user's number of friends.
Here we will look at a few additional approaches.
End of explanation
"""
friendships = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4),
(4, 5), (5, 6), (5, 7), (6, 8), (7, 8), (8, 9)]
"""
Explanation: The network represents the friendships among the users.
End of explanation
"""
# give each user a friends list
for user in users:
user["friends"] = []
# and populate it
for i, j in friendships:
# this works because users[i] is the user whose id is i
users[i]["friends"].append(users[j]) # add i as a friend of j
users[j]["friends"].append(users[i]) # add j as a friend of i
"""
Explanation: It is also handy to add a list of friends to each user's dict.
End of explanation
"""
#
# Betweenness Centrality
#
def shortest_paths_from(from_user):
    # a dict from each user id to all shortest paths from from_user to that user
shortest_paths_to = { from_user["id"] : [[]] }
    # a queue of (previous user, next user) pairs that we still need to check,
    # starting with all (from_user, friend_of_from_user) pairs
frontier = deque((from_user, friend)
for friend in from_user["friends"])
    # keep going until the queue is empty
while frontier:
        prev_user, user = frontier.popleft()   # take the first user
        user_id = user["id"]                   # off the queue
        # because of the way we add users to the queue, we
        # necessarily already know some shortest paths to prev_user
paths_to_prev = shortest_paths_to[prev_user["id"]]
paths_via_prev = [path + [user_id] for path in paths_to_prev]
        # it's possible that we already know a shortest path to user_id
old_paths_to_here = shortest_paths_to.get(user_id, [])
        # what's the shortest path to here that we've seen so far?
if old_paths_to_here:
min_path_length = len(old_paths_to_here[0])
else:
min_path_length = float('inf')
        # keep only the new paths that aren't too long
new_paths_to_here = [path_via_prev
for path_via_prev in paths_via_prev
if len(path_via_prev) <= min_path_length
and path_via_prev not in old_paths_to_here]
shortest_paths_to[user_id] = old_paths_to_here + new_paths_to_here
        # add neighbors we have never seen before to the frontier
frontier.extend((user, friend)
for friend in user["friends"]
if friend["id"] not in shortest_paths_to)
return shortest_paths_to
"""
Explanation: When we looked at degree centrality in Chapter 1, the ranking of key connectors it produced didn't quite match our intuition.
One alternative metric is betweenness centrality, which assigns large values to people who frequently sit on the shortest paths between pairs of other people.
Specifically, the betweenness centrality of node $i$ is computed as the fraction, over every other pair of nodes $j,k$, of their shortest paths that pass through $i$.
Given any two people, we therefore need to find all the shortest paths between them.
This book uses the less efficient but much easier to understand algorithm known as breadth-first search.
End of explanation
"""
for user in users:
user["shortest_paths"] = shortest_paths_from(user)
"""
Explanation: Now, for each node, let's store the dict of shortest paths that this function generates.
End of explanation
"""
for user in users:
user["betweenness_centrality"] = 0.0
for source in users:
source_id = source["id"]
    for target_id, paths in source["shortest_paths"].items():  # in Python 2, use iteritems instead of items
        if source_id < target_id:      # don't count the same pair twice
            num_paths = len(paths)     # how many shortest paths are there?
            contrib = 1 / num_paths    # contribution to centrality
for path in paths:
for id in path:
if id not in [source_id, target_id]:
users[id]["betweenness_centrality"] += contrib
for user in users:
print(user["id"], user["betweenness_centrality"])
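# (added note) in formula form, the quantity computed above is
#   betweenness(i) = sum over pairs j < k of
#       (number of shortest j-k paths passing through i) / (number of shortest j-k paths)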
"""
Explanation: Now we are ready to compute betweenness centrality.
For every pair of users, take each of their $n$ shortest paths and add $1/n$ to the betweenness centrality of every node on that path.
End of explanation
"""
#
# closeness centrality
#
def farness(user):
"""๋ชจ๋ ์ฌ์ฉ์์์ ์ต๋จ ๊ฑฐ๋ฆฌ ํฉ"""
return sum(len(paths[0])
for paths in user["shortest_paths"].values())
"""
Explanation: Users 0 and 9 have betweenness centrality 0, because no shortest path between any other pair of users passes through them.
Users 3, 4, and 5, on the other hand, sit on a great many shortest paths and therefore have high betweenness centrality.
Generally the absolute centrality values carry little meaning on their own; it is the relative values that matter.
Another centrality measure we can look at is closeness centrality.
First we compute each user's farness, which is the sum of the lengths of the shortest paths from from_user to every other user.
End of explanation
"""
for user in users:
user["closeness_centrality"] = 1 / farness(user)
for user in users:
print(user["id"], user["closeness_centrality"])
"""
Explanation: Now closeness centrality is simple to compute.
End of explanation
"""
def matrix_product_entry(A, B, i, j):
return dot(get_row(A, i), get_column(B, j))
def matrix_multiply(A, B):
n1, k1 = shape(A)
n2, k2 = shape(B)
if k1 != n2:
raise ArithmeticError("incompatible shapes!")
return make_matrix(n1, k2, partial(matrix_product_entry, A, B))
def vector_as_matrix(v):
"""(list ํํ์) ๋ฒกํฐ v๋ฅผ n x 1 ํ๋ ฌ๋ก ๋ณํ"""
return [[v_i] for v_i in v]
def vector_from_matrix(v_as_matrix):
"""n x 1 ํ๋ ฌ์ ๋ฆฌ์คํธ๋ก ๋ณํ"""
return [row[0] for row in v_as_matrix]
def matrix_operate(A, v):
v_as_matrix = vector_as_matrix(v)
product = matrix_multiply(A, v_as_matrix)
return vector_from_matrix(product)
"""
Explanation: The computed closeness centralities show even less spread, because even the nodes at the center of the network sit quite far from the nodes on the periphery.
As we saw, computing shortest paths is fairly involved, which is why closeness centrality is rarely used on large networks.
The less intuitive, but usually easier to compute, eigenvector centrality is used more often.
21.2 Eigenvector Centrality
Before discussing eigenvector centrality we need to look at what an eigenvector is, and to understand eigenvectors we first need to go over matrix multiplication.
21.2.1 Matrix multiplication
End of explanation
"""
def find_eigenvector(A, tolerance=0.00001):
guess = [1 for __ in A]
while True:
result = matrix_operate(A, guess)
length = magnitude(result)
next_guess = scalar_multiply(1/length, result)
if distance(guess, next_guess) < tolerance:
return next_guess, length # eigenvector, eigenvalue
guess = next_guess
"""
Explanation: To find an eigenvector of a matrix A, we pick a starting vector $v$, apply matrix_operate to it, rescale the result to have magnitude 1, and repeat that process until it converges.
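In standard notation (added here for reference), this is looking for a vector $v$ and a scalar $\lambda$ with $A v = \lambda v$; the repeated apply-and-rescale loop is power iteration, which converges to the eigenvector of the largest eigenvalue when such a dominant eigenvalue exists.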
End of explanation
"""
rotate = [[0, 1],
[-1, 0]]
"""
Explanation: If the returned guess, after being passed through matrix_operate and rescaled to magnitude 1, comes back as (essentially) itself, then that guess is an eigenvector.
Not every matrix of real numbers has eigenvectors and eigenvalues. For example, the following matrix, which rotates vectors 90 degrees clockwise, leaves no vector other than the zero vector unchanged when you multiply by it.
End of explanation
"""
flip = [[0, 1],
[1, 0]]
"""
Explanation: If you run the find_eigenvector(rotate) we implemented above with this matrix, it will run forever without converging.
Meanwhile, even matrices that do have eigenvectors can sometimes get stuck in an infinite loop.
End of explanation
"""
#
# eigenvector centrality
#
def entry_fn(i, j):
return 1 if (i, j) in friendships or (j, i) in friendships else 0
n = len(users)
adjacency_matrix = make_matrix(n, n, entry_fn)
adjacency_matrix
"""
Explanation: This matrix maps every vector [x, y] to [y, x]. Therefore [1, 1] is an eigenvector with eigenvalue 1.
However, if you start from a random vector with unequal x and y and run find_eigenvector, it will just swap x and y forever.
(Libraries such as NumPy implement a variety of methods that can handle cases like this.)
Despite this minor wrinkle, whenever find_eigenvector does return a result, that result is indeed an eigenvector.
21.2.2 Centrality
How do eigenvectors help us understand the DataSciencester network?
Before answering that, let's first represent the network as an adjacency matrix, whose (i, j)th entry is 1 if user i and user j are friends and 0 otherwise.
End of explanation
"""
eigenvector_centralities, _ = find_eigenvector(adjacency_matrix)
for user_id, centrality in enumerate(eigenvector_centralities):
print(user_id, centrality)
"""
Explanation: Each user's eigenvector centrality is then the entry corresponding to that user in the eigenvector returned by find_eigenvector.
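Equivalently (an added note): since the centrality vector $c$ satisfies $A c = \lambda c$ for the adjacency matrix $A$, each user's centrality is proportional to the sum of their neighbors' centralities.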
End of explanation
"""
#
# directed graphs
#
endorsements = [(0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1), (1, 3),
(2, 3), (3, 4), (5, 4), (5, 6), (7, 5), (6, 8), (8, 7), (8, 9)]
for user in users:
user["endorses"] = [] # add one list to track outgoing endorsements
user["endorsed_by"] = [] # and another to track endorsements
for source_id, target_id in endorsements:
users[source_id]["endorses"].append(users[target_id])
users[target_id]["endorsed_by"].append(users[source_id])
"""
Explanation: Users who are connected to many people, and especially to people who themselves have high centrality, end up with high eigenvector centrality.
By the results above, users 1 and 2 have the highest centralities, because they are each connected three times to other users with high centrality.
The farther users are from these two, the smaller their centralities become.
21.3 Directed Graphs and PageRank
As DataSciencester isn't gaining much traction, the VP of Revenue is considering a pivot from a friendship model to an endorsement model.
It turns out people don't care much about which data scientists are friends with one another, but tech recruiters care a lot about which data scientists are respected by other data scientists.
In this new model relationships are no longer mutual; instead, an asymmetric relationship is expressed as a (source, target) pair in which one person (source) endorses the skills of another awesome person (target).
End of explanation
"""
endorsements_by_id = [(user["id"], len(user["endorsed_by"]))
for user in users]
sorted(endorsements_by_id,
key=lambda x: x[1], # (user_id, num_endorsements)
reverse=True)
"""
Explanation: Then we can simply collect the most-endorsed data scientists and sell that data to recruiters.
End of explanation
"""
def page_rank(users, damping = 0.85, num_iters = 100):
    # first distribute the PageRank evenly across all nodes
num_users = len(users)
pr = { user["id"] : 1 / num_users for user in users }
    # the small fraction of PageRank
    # that every node receives at each step
base_pr = (1 - damping) / num_users
for __ in range(num_iters):
next_pr = { user["id"] : base_pr for user in users }
for user in users:
            # distribute the rest of this node's PageRank evenly over its outgoing links
links_pr = pr[user["id"]] * damping
for endorsee in user["endorses"]:
next_pr[endorsee["id"]] += links_pr / len(user["endorses"])
pr = next_pr
return pr
for user_id, pr in page_rank(users).items():
print(user_id, pr)
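# (added note) each iteration above implements, for damping d = 0.85 and N users,
#   PR(i) = (1 - d) / N + d * sum over endorsers j of i of PR(j) / len(j's endorsements)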
"""
Explanation: In practice, a number like 'endorsement count' is very easy to game.
The simplest trick is to create a bunch of fake accounts and have them endorse your account.
Another is to collude with your friends and all endorse one another. (Users 0, 1, and 2 look very likely to be in such an arrangement.)
A better metric would take into account who is doing the endorsing:
endorsements from users who themselves receive many endorsements should reasonably count for more than endorsements from users who receive few.
This is, in fact, the basic philosophy behind the famous PageRank algorithm.
1. There is a total of 1.0 (or 100%) of PageRank in the network.
2. Initially this PageRank is distributed evenly across all nodes.
3. At each step, a large fraction of each node's PageRank is distributed evenly among its outgoing links.
4. At each step, the remainder of each node's PageRank is distributed evenly among all nodes.
End of explanation
"""
|
This dataset consists of sequences of Python code followed by a docstring explaining their function. It was constructed by concatenating code and text pairs from this dataset that were originally code and markdown cells in Jupyter Notebooks.
The content of each example is the following:
[CODE]
"""
Explanation: [TEXT]
End of explanation
"""
[CODE]
"""
Explanation: [TEXT]
End of explanation
"""
...
from datasets import load_dataset
ds = load_dataset("codeparrot/github-jupyter-code-to-text", split="train")
Dataset({
features: ['repo_name', 'path', 'license', 'content'],
num_rows: 47452
})
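Individual examples can then be inspected like this (a quick sketch):
sample = ds[0]
print(sample["repo_name"], sample["path"], sample["license"])
print(sample["content"][:500])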