Distributed Hyperparameter Optimization on MNIST Dataset
Maggy Distributed Hyperparameter Optimization Example
Created: 24/04/2019 Updated: 2021
This notebook illustrates the usage of the maggy framework for asynchronous hyperparameter optimization on the famous MNIST dataset.
In this specific example we are using random search over three parameters and we are deploying the median early stopping rule in order to make use of the asynchrony of the framework. The Median Stopping Rule implements the simple strategy of stopping a trial if its performance falls below the median of other trials at similar points in time.
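To make the rule concrete, here is a simplified sketch (illustrative only, not maggy's actual implementation): a trial is stopped as soon as the metric it reports at a given step falls below the median of the metrics other trials reported at that same step.
# Simplified sketch of the median stopping rule (illustrative only, not maggy's implementation)
import statistics

def median_stop(current_metric, other_trials_metrics_at_same_step):
    """Stop a trial if its metric falls below the median of the other trials at this step."""
    if not other_trials_metrics_at_same_step:
        return False
    return current_metric < statistics.median(other_trials_metrics_at_same_step)

# Example: other trials reached accuracies of 0.80, 0.85 and 0.90 at this step,
# so a trial currently at 0.70 would be stopped early.
print(median_stop(0.70, [0.80, 0.85, 0.90]))  # True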
We are using Keras for this example. This notebook works with any Spark cluster, provided you are using maggy 0.1. Future versions will add functionality that relies on Hopsworks.
This notebook has been tested with TensorFlow 1.11.0 and Spark 2.4.0.
Requires Python 3.6 or higher.
1. Spark Session
Make sure you have a running Spark session/context available. On Hopsworks, executing a simple command (such as the one below) starts the Spark application.
print("Hello World!")
Starting Spark application
ID | YARN Application ID | Kind | State | Spark UI | Driver log | Current session? |
---|---|---|---|---|---|---|
9 | application_1556201759536_0001 | pyspark | idle | Link | Link | ✔ |
SparkSession available as 'spark'.
Hello World!
2. Searchspace definition
We want to conduct random search for the MNIST example on three hyperparameters: kernel size, pooling size and dropout rate. Hence, we have two integer-valued parameters and one real-valued (double) parameter.
from maggy import Searchspace
# The searchspace can be instantiated with parameters
sp = Searchspace(kernel=('INTEGER', [2, 8]), pool=('INTEGER', [2, 8]))
# Or additional parameters can be added one by one
sp.add('dropout', ('DOUBLE', [0.01, 0.99]))
Hyperparameter added: kernel
Hyperparameter added: pool
Hyperparameter added: dropout
3. Model training definition
The programming model is that you wrap the code containing the model training inside a wrapper function. Inside that wrapper function provide all imports and parts that make up your experiment.
There are several requirements for this wrapper function:
- The function should take the hyperparameters as arguments, plus one additional parameter `reporter`, which is needed for reporting the current metric to the experiment driver.
- The function should return the metric that you want to optimize for. This should coincide with the metric being reported in the Keras callback (see next point).
- In order to leverage the early stopping capabilities of maggy, you need to make use of the maggy reporter API. By including the reporter in your training loop, you are telling maggy which metric to report back to the experiment driver for optimization and to check for early stopping. It is as easy as adding `reporter.broadcast(metric=YOUR_METRIC)`, for example at the end of your epoch or batch training step (see the sketch after this list), and adding a `reporter` argument to your function signature. If you are not writing your own training loop, you can use the pre-written Keras callbacks:
  - `KerasBatchEnd`
  - `KerasEpochEnd`

(Please see the documentation for a detailed explanation.)
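For orientation, here is a minimal sketch of such a wrapper function with a hand-written training loop. The loop body and the accuracy values are placeholders; the real example below builds a Keras model and uses the pre-written KerasBatchEnd callback instead.
# Minimal sketch of the programming model (placeholder loop, for illustration only)
def sketch_training_function(kernel, pool, dropout, reporter):
    best_acc = 0.0
    for epoch in range(3):
        # ... build the model from the hyperparameters and train one epoch here ...
        acc = 0.3 * (epoch + 1)          # placeholder metric for illustration
        reporter.broadcast(metric=acc)   # report the current metric to the experiment driver
        best_acc = max(best_acc, acc)
    # Return the metric that maggy should optimize for
    return best_acc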
We are going to use the `KerasBatchEnd` callback to report back the accuracy after each batch. However, note that in the BatchEnd callback we only have access to the training accuracy, since validation after each batch would be too expensive.
from maggy import experiment
from maggy.callbacks import KerasBatchEnd
Definition of the training wrapper function (maggy-specific parts are highlighted with comments and correspond to the three points described above):
#########
### maggy: hyperparameters as arguments and including the reporter
#########
def training_function(kernel, pool, dropout, reporter):
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.callbacks import TensorBoard
from maggy import tensorboard
from hops import hdfs
log_dir = tensorboard.logdir()
batch_size = 512
num_classes = 10
epochs = 1
# Input image dimensions
img_rows, img_cols = 28, 28
train_filenames = [hdfs.project_path() + "TourData/mnist/train/train.tfrecords"]
validation_filenames = [hdfs.project_path() + "TourData/mnist/validation/validation.tfrecords"]
# Create an iterator over the dataset
def data_input(filenames, batch_size=128, shuffle=False, repeat=None):
def parser(serialized_example):
"""Parses a single tf.Example into image and label tensors."""
features = tf.io.parse_single_example(
serialized_example,
features={
'image_raw': tf.io.FixedLenFeature([], tf.string),
'label': tf.io.FixedLenFeature([], tf.int64),
})
image = tf.io.decode_raw(features['image_raw'], tf.uint8)
image.set_shape([28 * 28])
# Normalize the values of the image from the range [0, 255] to [-0.5, 0.5]
image = tf.cast(image, tf.float32) / 255 - 0.5
label = tf.cast(features['label'], tf.int32)
# Reshape the tensor
image = tf.reshape(image, [img_rows, img_cols, 1])
# Create a one hot array for your labels
label = tf.one_hot(label, num_classes)
return image, label
# Import MNIST data
dataset = tf.data.TFRecordDataset(filenames)
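        # Count the records once; used below to derive steps_per_epoch and validation_steps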
num_samples = sum(1 for _ in dataset)
# Map the parser over dataset, and batch results by up to batch_size
dataset = dataset.map(parser)
if shuffle:
dataset = dataset.shuffle(buffer_size=128)
dataset = dataset.batch(batch_size)
dataset = dataset.repeat(repeat)
return dataset, num_samples
input_shape = (28, 28, 1)
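    # The sampled hyperparameters (kernel, pool, dropout) parametrize the model below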
model = Sequential()
model.add(Conv2D(32, kernel_size=(kernel, kernel),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (kernel, kernel), activation='relu'))
model.add(MaxPooling2D(pool_size=(pool, pool)))
model.add(Dropout(dropout))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(dropout))
model.add(Dense(num_classes, activation='softmax'))
opt = keras.optimizers.Adadelta(1.0)
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=opt,
metrics=['accuracy'])
# Setup TensorBoard
tb_callback = TensorBoard(
log_dir,
update_freq='batch',
profile_batch=0, # workaround for issue #2084
)
#########
### maggy: REPORTER API through keras callback
#########
callbacks = [KerasBatchEnd(reporter, metric='accuracy'), tb_callback]
# Initialize the datasets
train_input, num_train = data_input(train_filenames[0], batch_size=batch_size)
eval_input, num_val = data_input(validation_filenames[0], batch_size=batch_size)
model.fit(train_input,
steps_per_epoch = num_train//batch_size,
callbacks=callbacks, # add callback
epochs=epochs,
verbose=1,
validation_data=eval_input,
validation_steps=num_val//batch_size)
score = model.evaluate(eval_input, steps=num_val//batch_size, verbose=1)
# Using print in the wrapper function will print underneath the Jupyter Cell with a
# prefix to indicate which prints come from the same executor
print('Test loss:', score[0])
print('Test accuracy:', score[1])
#########
### maggy: return the metric to be optimized, test accuracy in this case
#########
return score[1]
4. Configuring the experiment
Finally, we have to configure the maggy experiment.
There are a variety of parameters to specify, some of which are optional:
1. `num_trials`: number of different parameter combinations to be evaluated
2. `optimizer`: the optimization algorithm to be used (only 'randomsearch' available at the moment)
3. `searchspace`: the searchspace object
4. `direction`: maximize or minimize the specified metric
5. `es_interval`: interval in seconds, specifying how often the currently running trials should be checked for early stopping. Should be bigger than the `hb_interval`.
6. `es_min`: minimum number of trials to be finished before starting to check for early stopping. For example, the median stopping rule implements the simple strategy of stopping a trial if its performance falls below the median of finished trials at similar points in time, so we only want to start comparing to the median once several trials have finished.
7. `name`: an experiment name
8. `description`: a description of the experiment that is used in the experiment's logs.
9. `hb_interval`: time in seconds between the heartbeat messages carrying the metric to the experiment driver. A sensible value is not much smaller than the frequency at which your training loop updates the metric. So, when using the `KerasBatchEnd` reporter callback, it does not make sense to choose an interval much smaller than the time it takes to process one batch.
from maggy.experiment_config import OptimizationConfig
config = OptimizationConfig(num_trials=4,
                            optimizer="randomsearch",
                            searchspace=sp,
                            direction="max",
                            es_interval=1,
                            es_min=5,
                            name="hp_tuning_test")
5. Running the experiment
With all necessary configuration done, we can now run the hyperparameter tuning by calling `lagom` with our prepared training function and the previously created config object.
result = experiment.lagom(train_fn=training_function, config=config)
To observe the progress, you can check the stderr of the Spark executors. TensorBoard support will be added in the coming version.