Exporting an MNIST Classifier in SavedModel Format
In this exercise, we will learn how to create models for TensorFlow Hub. You will be tasked with the following:
- Creating a simple MNIST classifier and evaluating its accuracy.
- Exporting it in the SavedModel format.
- Hosting the model as a TF Hub module.
- Importing the TF Hub module for use with Keras layers.
```python
import numpy as np
```
Create an MNIST Classifier
We will start by creating a class called `MNIST`. This class will load the MNIST dataset, preprocess the images from the dataset, and build a CNN-based classifier. It will also have methods to train, test, and save our model.
In the cell below, fill in the missing code to create the following Keras `Sequential` model:
```
Model: "sequential"
```
Notice that we are using a `tf.keras.layers.Lambda` layer at the beginning of our model. `Lambda` layers are used to wrap arbitrary expressions as a `Layer` object:
```python
tf.keras.layers.Lambda(expression)
```
The `Lambda` layer exists so that arbitrary TensorFlow functions can be used when constructing `Sequential` and Functional API models. `Lambda` layers are best suited for simple operations.
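For example, a `Lambda` layer can wrap a simple pixel-scaling expression (the scaling function here is an illustrative assumption, not the notebook's exact preprocessing):

```python
import tensorflow as tf

# Wrap an arbitrary expression (here, pixel scaling) as a Keras layer.
scale = tf.keras.layers.Lambda(lambda x: x / 255.0)

# Applying the layer scales raw pixel values into the [0, 1] range.
images = tf.constant([[0.0, 127.5, 255.0]])
print(scale(images).numpy())
```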
```python
class MNIST:
```
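The architecture to fill in can be sketched as follows. The layer sizes are read off the model summary printed in this notebook, while the padding and activation choices are assumptions that reproduce the listed output shapes and parameter counts:

```python
import tensorflow as tf

# Sketch of the target architecture; shapes and parameter counts match the
# printed summary (total params: 208,010). Activations are assumptions.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Lambda(lambda x: x / 255.0),          # (None, 28, 28, 1)
    tf.keras.layers.Conv2D(8, 3, padding="same",
                           activation="relu"),             # (None, 28, 28, 8)
    tf.keras.layers.MaxPooling2D(),                        # (None, 14, 14, 8)
    tf.keras.layers.Conv2D(16, 3, padding="same",
                           activation="relu"),             # (None, 14, 14, 16)
    tf.keras.layers.MaxPooling2D(),                        # (None, 7, 7, 16)
    tf.keras.layers.Conv2D(32, 3, padding="same",
                           activation="relu"),             # (None, 7, 7, 32)
    tf.keras.layers.Flatten(),                             # (None, 1568)
    tf.keras.layers.Dense(128, activation="relu"),         # (None, 128)
    tf.keras.layers.Dense(10, activation="softmax"),       # (None, 10)
])
model.summary()
```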
Train, Evaluate, and Save the Model
We will now use the `MNIST` class we created above to create an `mnist` object, passing our training parameters in a dictionary. We will then call the `train` and `export_model` methods to train and save our model, respectively. Finally, we will call the `test` method to evaluate our model after training.
NOTE: It will take about 12 minutes to train the model for 5 epochs.
```python
# Define the training parameters.
```
WARNING:absl:Found a different version 3.0.0 of dataset mnist in data_dir /tf/week2/../tmp2. Using currently defined version 1.0.0.
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lambda_1 (Lambda) (None, 28, 28, 1) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 28, 28, 8) 80
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 14, 14, 8) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 14, 14, 16) 1168
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 7, 7, 16) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 7, 7, 32) 4640
_________________________________________________________________
flatten_1 (Flatten) (None, 1568) 0
_________________________________________________________________
dense_1 (Dense) (None, 128) 200832
_________________________________________________________________
dense_2 (Dense) (None, 10) 1290
=================================================================
Total params: 208,010
Trainable params: 208,010
Non-trainable params: 0
_________________________________________________________________
None
Epoch 1/5
1875/1875 [==============================] - 135s 72ms/step - loss: 0.1548 - accuracy: 0.9532
Epoch 2/5
563/1875 [========>.....................] - ETA: 1:36 - loss: 0.0868 - accuracy: 0.9733
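The compile/train/evaluate cycle that the `MNIST` class wraps can be sketched as follows; this is a minimal stand-in that fits a toy model on random data, so the model, data, and parameter choices here are assumptions rather than the notebook's exact code:

```python
import numpy as np
import tensorflow as tf

# Toy model standing in for the CNN; random data standing in for MNIST.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

x = np.random.rand(64, 28, 28, 1).astype("float32")
y = np.random.randint(0, 10, size=(64,))
model.fit(x, y, epochs=1, verbose=0)          # what train() drives
loss, accuracy = model.evaluate(x, y, verbose=0)  # what test() reports
print(f"loss: {loss:.3f}, accuracy: {accuracy:.3f}")
```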
Create a Tarball
The `export_model` method saved our model in the TensorFlow SavedModel format in the `./saved_model` directory. The SavedModel format stores the model and its weights across several files and directories, which makes the model difficult to distribute. It is therefore convenient to bundle everything into a single compressed file. To do this, we will use the `tar` archiving program to create a tarball (similar to a ZIP file) that contains our SavedModel.
```python
# Create a tarball from the SavedModel.
```
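A sketch of this step using `tar` directly; the stand-in files below exist only to make the snippet self-contained (in the notebook, `./saved_model` is produced by `export_model`):

```shell
# Stand-in SavedModel directory so the command below is runnable on its own.
mkdir -p ./saved_model/variables
touch ./saved_model/saved_model.pb ./saved_model/variables/variables.index

# Bundle the whole SavedModel directory into one gzip-compressed tarball.
tar -czf module.tar.gz -C ./saved_model .
```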
Inspect the Tarball
We can decompress our tarball to make sure it contains all of the files and folders from our SavedModel.
```python
# Inspect the tarball.
```
./
./variables/
./variables/variables.data-00001-of-00002
./variables/variables.data-00000-of-00002
./variables/variables.index
./saved_model.pb
./assets/
Simulate Server Conditions
Once we have verified our tarball, we can simulate server conditions. In a normal scenario, we would fetch our TF Hub module from a remote server using the module's handle. However, since this notebook cannot host a server, we will instead point the module handle to the directory where our SavedModel is stored.
```python
!rm -rf ./module
```
./
./variables/
./variables/variables.data-00001-of-00002
./variables/variables.data-00000-of-00002
./variables/variables.index
./saved_model.pb
./assets/
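The server-simulation step amounts to unpacking the tarball into a local directory that will act as the handle. A shell sketch follows; the stand-in SavedModel and tarball are recreated here only so the snippet runs on its own:

```shell
# Recreate the tarball from a stand-in SavedModel (in the notebook,
# module.tar.gz already exists from the previous step).
mkdir -p ./saved_model/variables
touch ./saved_model/saved_model.pb ./saved_model/variables/variables.index
tar -czf module.tar.gz -C ./saved_model .

# Unpack the tarball into ./module, which will serve as the local handle.
rm -rf ./module
mkdir -p ./module
tar -xzf module.tar.gz -C ./module
ls -R ./module
```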
```python
# Define the module handle.
```
Load the TF Hub Module
```python
# EXERCISE: Load the TF Hub module using the hub.load API.
```
Test the TF Hub Module
We will now test our TF Hub module with images from the `test` split of the MNIST dataset.
```python
filePath = f"{getcwd()}/../tmp2"
```
WARNING:absl:Found a different version 3.0.0 of dataset mnist in data_dir /tf/week2/../tmp2. Using currently defined version 1.0.0.
```python
# Test the TF Hub module for a single batch of data
```
Predicted Labels: [6 2 3 7 2 2 3 4 7 6 6 9 2 0 9 6 2 0 6 5 1 4 8 1 9 8 4 0 0 5 8 4]
True Labels: [6 2 3 7 2 2 3 4 7 6 6 9 2 0 9 6 8 0 6 5 1 4 8 1 9 8 4 0 0 5 2 4]
We can see that the model correctly predicts the labels for most images in the batch.
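We can quantify this by comparing the two label arrays printed above:

```python
import numpy as np

# Predicted and true labels, copied from the batch output above.
predicted = np.array([6, 2, 3, 7, 2, 2, 3, 4, 7, 6, 6, 9, 2, 0, 9, 6,
                      2, 0, 6, 5, 1, 4, 8, 1, 9, 8, 4, 0, 0, 5, 8, 4])
true = np.array([6, 2, 3, 7, 2, 2, 3, 4, 7, 6, 6, 9, 2, 0, 9, 6,
                 8, 0, 6, 5, 1, 4, 8, 1, 9, 8, 4, 0, 0, 5, 2, 4])

# Fraction of correct predictions in this batch: 30 of 32.
batch_accuracy = np.mean(predicted == true)
print(f"Batch accuracy: {batch_accuracy:.4f}")  # -> Batch accuracy: 0.9375
```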
Evaluate the Model Using Keras
In the cell below, you will integrate the TensorFlow Hub module into the high-level Keras API.
```python
# EXERCISE: Integrate the TensorFlow Hub module into a Keras
```
```python
# Evaluate the model on the test_dataset.
```
313/313 [==============================] - 27s 88ms/step - loss: 0.0605 - accuracy: 0.9824
```python
# Print the metric values the model was evaluated on.
```
loss: 0.061
accuracy: 0.982