{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# COMP2211 PA2 Image Classification using CNN\n",
"\n",
"## Introduction\n",
"\n",
"In this assignment, you will build a CNN image classification model using the Keras library,\n",
"which is a simplified set of API1 on top of the famous TensorFlow library.\n",
"The links to the documentation of each task's relevant function are provided in the task description.\n",
"You can generally refer to them to see how to call each Keras functions.\n",
"\n",
"In addition, Keras provides multiple coding paradigms and API patterns for some functions.\n",
"And some are not comprehensively documented!\n",
"(E.g. string shorthands for activation functions, kernel initializers, losses, etc.)\n",
"\n",
"Nonetheless,\n",
"they provided many official code examples, where many alternative paradigms and underdocumented API shorthands are demonstrated:\n",
"* [Simple MNIST convnet](https://keras.io/examples/vision/mnist_convnet/) (MNIST is another image dataset on handwritten digits)\n",
"* [Basic image classification](https://www.tensorflow.org/tutorials/keras/classification)\n",
"* [Image classification](https://keras.io/examples/vision/image_classification_from_scratch/)\n",
"* [Intro to Keras for engineers](https://keras.io/getting_started/intro_to_keras_for_engineers/)\n",
"\n",
"\n",
"## In this programming assignment..\n",
"\n",
"* All your mandatory tasks go to `pa2.py`.\n",
" * But this notebook executes your codes and presents the entire work.\n",
" * You can follow this notebook and go to `pa2.py` when encountering a task.\n",
" * After re-writing the functions and saving the file, go back to this notebook and proceed.\n",
" * After you modify and save `pa2.py`, re-running the relevant code cells in this notebook will use the updated codes. (This is not standard Python behavior, but notebook-specific magic of the `autoreload` extension.)\n",
"* There is one function for each task.\n",
"* We don't expect you as a pro in defining Python functions. You only need to:\n",
" * Utilize only the parameters provided in the function definition\n",
" * Write your code between the comments *START YOUR CODE HERE* and *END YOUR CODE HERE*.\n",
" * When writing each task, imagine the provided parameters in the function and the imported libraries are all you need.\n",
" * Don't modify the parameter list of each function.\n",
" Or you will confuse Zinc when we call these functions to evaluate your tasks.\n",
"* There are some dummy codes in each task already.\n",
" * They are just to ensure the notebook will run without errors even if you do nothing.\n",
" * Feel free to remove and replace them, or refer to them (e.g., the expected shape, how to call `keras.Sequential`, etc.)\n",
" * The yellow `print` statements are merely informational.\n",
" We will *not* grade by checking the presence of these `print`s.\n",
" Feel free to remove them or keep them (if you don't feel bothered).\n",
"\n",
"---------\n",
"\n",
"1: API Stands for *Application Programming Interfaces*. Here we refer to all the functions, classes, etc., provided by libraries like Keras & TensorFlow.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following block download relevant files.\n",
"\n",
"[pa2.py](./pa2.py), \n",
"[draw.npz](./draw.npz), \n",
"[draw_example.zip](./draw_example.zip)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "--rnRL8WqNcl"
},
"outputs": [],
"source": [
"# Provided: notebook bootstrapping\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"from tensorflow import keras\n",
"from keras import layers, initializers\n",
"import matplotlib.pyplot as plt\n",
"\n",
"# These two lines allows reloading of specific imported modules\n",
"# https://ipython.readthedocs.io/en/stable/config/extensions/autoreload.html\n",
"%load_ext autoreload\n",
"%autoreload 2\n",
"import pa2\n",
"\n",
"# This line sets a random seed, making all random behaviors deterministic and reproducible\n",
"keras.utils.set_random_seed(2211)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Quick Draw Dataset\n",
"\n",
"[Quick, Draw! Dataset](https://github.com/googlecreativelab/quickdraw-dataset) is a very large dataset of users' drawing in the game [Quick, Draw!](https://quickdraw.withgoogle.com/) (345 categories * ~100k images per category).\n",
"\n",
"The dataset we use is a randomly drawn subset containing **8 categories and 5000 images per category**. Each image is **28 * 28 grayscale (not RGB)**.\n",
"\n",
"### Peaking the dataset\n",
"\n",
"We first provide you 4 exemplar images for each label, stored in png file format.\n",
"We also provide you with a one-line code demonstrating the use of `keras.utils.image_dataset_from_directory`\n",
"to batch load an entire folder of image data.\n",
"\n",
"This code is for demonstration and peeking at the dataset only.\n",
"We will load the entire dataset in traditional NumPy arrays in the following tasks.\n",
"\n",
"You can refer to [the function documentation](https://keras.io/api/data_loading/image/#image_dataset_from_directory-function)\n",
"and [the `tf.data.Dataset` object documentation it returns](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#take)\n",
"for more information, and use these data loading utilities in your own future projects.\n",
"It takes care of managing batch sizes, shuffling at each epoch,\n",
"splitting validation sets if needed, and much more.\n",
"Note that the returning dataset object contains both x & y,\n",
"which may further lead to different usages of `model.fit`, `model.evaluate`, etc."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Provided\n",
"example_set = keras.utils.image_dataset_from_directory(\n",
" \"draw_example\",\n",
" label_mode=\"categorical\", # This makes label one-hot\n",
" color_mode=\"grayscale\", # This decides how many channels\n",
" shuffle=False, # Turn off dataset shuffling for demo purpose\n",
" batch_size=None,\n",
" image_size=(28,28),\n",
")\n",
"\n",
"for image, label in example_set:\n",
" print('single image.shape', image.shape, 'single label shape', label.shape)\n",
" break\n",
"\n",
"plt.figure(figsize=(15,5))\n",
"for i, (image, label) in enumerate(example_set):\n",
" plt.subplot(4, 8, (i % 4) * 8 + (i // 4) + 1)\n",
" plt.imshow(tf.squeeze(image), cmap='gray')\n",
" plt.axis(False)\n",
" plt.title(str(label.numpy().astype(int).tolist()))\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Loading the entire dataset\n",
"\n",
"Recall: the dataset we use is a randomly drawn subset containing **8 categories and 5000 images per category**. Each image is **28 * 28 grayscale (not RGB)**.\n",
"\n",
"All images are deliberately stored in NumPy arrays instead of image files to save space. We wrote the NumPy file loading part for you.\n",
"\n",
"* `x_train_raw` is 2D containing the pixel values of training dataset images. Each row is an image and has a length of 784 (28*28).\n",
"* `y_train_raw` is 1D containing the ground-truth label of those images, each being an integer in `[0, 7]`.\n",
"* Similar for `_val_raw` & `_test_raw`.\n",
"* `y_names` is an array of 8 strings containing the human-readable names for each label value from 0 to 7."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "O7JpSZnuqNcm",
"outputId": "dcb60ced-caed-4364-e288-53f92e52b286"
},
"outputs": [],
"source": [
"# Provided: load numpy file\n",
"np_data = np.load(\"draw.npz\")\n",
"x_train_raw = np_data[\"x_train\"]\n",
"y_train_raw = np_data[\"y_train\"]\n",
"x_val_raw = np_data[\"x_val\"]\n",
"y_val_raw = np_data[\"y_val\"]\n",
"x_test_raw = np_data[\"x_test\"]\n",
"y_test_raw = np_data[\"y_test\"]\n",
"y_names = np_data[\"y_names\"]\n",
"N_labels = len(y_names) # Number of labels in total\n",
"print(\"x_train_raw.shape\", x_train_raw.shape)\n",
"print(\"x_val_raw.shape\", x_val_raw.shape)\n",
"print(\"x_test_raw.shape\", x_test_raw.shape)\n",
"print(\"y_train_raw.shape\", y_train_raw.shape)\n",
"print(\"y_val_raw.shape\", y_val_raw.shape)\n",
"print(\"y_test_raw.shape\", y_test_raw.shape)\n",
"print(\"y_names: \", y_names)\n",
"print(\"There are\", N_labels, \"labels in total.\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Optional task: Explore the dataset\n",
"\n",
"Count the number of each label in the train/val/test set in the code block below.\n",
"This task will not be graded,\n",
"but is a great chance to practice your NumPy indexing and vectorization technique.\n",
"See if you can use *no `for` loop* in this task?\n",
"\n",
"You should see **each set containing equal numbers of all labels**."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# *OPTIONAL* TODO\n",
"\n",
"### *OPTIONAL* START YOUR CODE HERE\n",
"train_count = np.zeros(8)\n",
"val_count = np.zeros(8)\n",
"test_count = np.zeros(8)\n",
"### *OPTIONAL* END YOUR CODE HERE\n",
"\n",
"print(train_count, val_count, test_count, sep='\\n')"
]
},
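{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you'd like a hint, below is one possible no-loop sketch (try it yourself first!). It is only an illustration and assumes the `y_*_raw` arrays loaded above; `np.bincount` counts the occurrences of each non-negative integer value.\n",
"\n",
"```python\n",
"# A sketch, not the only solution: count label occurrences without a Python loop.\n",
"# Assumes y_train_raw / y_val_raw / y_test_raw hold integer labels in [0, 7].\n",
"train_count = np.bincount(y_train_raw, minlength=N_labels)\n",
"val_count = np.bincount(y_val_raw, minlength=N_labels)\n",
"test_count = np.bincount(y_test_raw, minlength=N_labels)\n",
"```"
]
},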
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Task 1: Reshape images\n",
"\n",
"Note that each image is in an 1D array of length 784.\n",
"It follows that each of `x__raw` is 2D.\n",
"\n",
"Intuitively we want an image to be a 3D array like **height\\*width\\*channels**. In our case, the images are all grayscale. Therefore the channel dimension should be 1.\n",
"\n",
"Implement a function that reshapes 2D `x_train_raw`, `x_val_raw`, `x_test_raw`, respectively,\n",
"into 4D NumPy arrays like **\\\\*height\\*width\\*channels**.\n",
"It returns 3 4D arrays."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: Go to `pa2.py` and implement reshape_x.\n",
"\n",
"x_train, x_val, x_test = pa2.reshape_x(x_train_raw, x_val_raw, x_test_raw)\n",
"print(\"x_train.shape\", x_train.shape)\n",
"print(\"x_val.shape\", x_val.shape)\n",
"print(\"x_test.shape\", x_test.shape)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Task 2: One-hot encode labels\n",
"\n",
"Note that each label is an integer in $[0, 7]$.\n",
"It follows that each of `y__raw` is 1D.\n",
"\n",
"Like previous labs, we need to transform a single integer to a one-hot encoding vector of length `N_labels`, which is the total number of possible labels.\n",
"\n",
"Implement a function that creates and returns one-hot encodings for `y_train_raw`, `y_val_raw`, `y_test_raw`, respectively.\n",
"The returned array shape is like **\\*N_labels**.\n",
"\n",
"Here we use [`tf.one_hot`](https://www.tensorflow.org/api_docs/python/tf/one_hot#for_example) to encode\n",
"for the maximum code compatibility and minimum library imports.\n",
"\n",
"> It returns a `tf.Tensor` type, in case you notice and worry about the type,\n",
"> imagine it as simply a TensorFlow-accelerated NumPy array with *mostly* compatible syntax.\n",
"> The documentation claims that the first parameter it accepts is also of type `tf.Tensor`. However, you can directly pass NumPy arrays as the first parameter."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: Go to `pa2.py` and implement encode_y.\n",
"\n",
"y_train, y_val, y_test = pa2.encode_y(y_train_raw, y_val_raw, y_test_raw, N_labels)\n",
"print(\"y_train.shape\", y_train.shape)\n",
"print(\"y_val.shape\", y_val.shape)\n",
"print(\"y_test.shape\", y_test.shape)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following block will call that function\n",
"and display the first 9 images and their labels in the training set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 624
},
"id": "WrrwHwSeqNcn",
"outputId": "b25b4824-de74-495b-abd2-d376d0bbbb9e"
},
"outputs": [],
"source": [
"# Provided: plot the first 9 images in the training set\n",
"image_9 = x_train[:9, :, :, :]\n",
"label_9 = y_train[:9, :]\n",
"plt.figure(figsize=(10, 10))\n",
"for i in range(9):\n",
" image = image_9[i, :, :, 0]\n",
" label = label_9[i, :]\n",
" label = 'Unknown' if np.sum(label) != 1 else y_names[np.argwhere(label == 1)[0, 0]]\n",
" ax = plt.subplot(3, 3, i + 1)\n",
" ax.imshow(image_9[i, :, :, 0], cmap=\"gray\")\n",
" ax.set_title(label)\n",
" ax.set_axis_off()\n",
"plt.tight_layout()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data augmentation\n",
"\n",
"Data augmentation is to randomly perturb our image data to generate more input data.\n",
"\n",
"### Task 3: Create data augmentation layers\n",
"\n",
"Implement a function that creates and returns a Keras model for data augmentation.\n",
"The model should do the following augmentation in sequence.\n",
"\n",
"1. Randomly flip (or not) an image horizontally\n",
"2. Randomly rotate an image within the range `[-0.1 * 2π,0.1 * 2π]`\n",
"and fill those points outside the boundary with a constant value of zero (black background)\n",
"\n",
"You may refer to the documentation for\n",
"[`RandomFlip`](https://keras.io/api/layers/preprocessing_layers/image_augmentation/random_flip/),\n",
"[`RandomRotation`](https://keras.io/api/layers/preprocessing_layers/image_augmentation/random_rotation/) and\n",
"[`Sequential`](https://keras.io/guides/sequential_model/).\n",
"\n",
"> Note: For those who run tf locally *using macOS and Apple's metal driver accelerator*, some data augmentation layers may raise internal errors.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "COfh2PIVqNco"
},
"outputs": [],
"source": [
"# TODO: Go to `pa2.py` and implement AugmentationLayer.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now plot one image followed by the same augmentation 8 times on this image. Note that there is randomization in the augmentation process, so it can produce 8 different outputs on the same image."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 610
},
"id": "jsiAY3BMqNcq",
"outputId": "0941dfe3-0f74-40fe-d4ba-e611be3000ef"
},
"outputs": [],
"source": [
"# Provided: plot the Augmented\n",
"augmentation_layer = pa2.AugmentationLayer()\n",
"plt.figure(figsize=(10, 10))\n",
"image = x_train[0, :, :, :]\n",
"label = y_train[0]\n",
"label = 'Unknown' if np.sum(label) != 1 else y_names[np.argwhere(label == 1)[0, 0]]\n",
"plt.suptitle(label + \" augmentation\")\n",
"plt.subplot(3, 3, 1)\n",
"plt.imshow(image[:, :, 0], cmap=\"gray\")\n",
"plt.axis(False)\n",
"for i in range(1, 9):\n",
" ax = plt.subplot(3, 3, i + 1)\n",
" augim = augmentation_layer(image, training=True)\n",
" augim = augim[:, :, 0].numpy()\n",
" ax.imshow(augim, cmap=\"gray\")\n",
" ax.set_axis_off()\n",
"plt.tight_layout()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The CNN Model\n",
"\n",
"### Task 4: Build a model\n",
"\n",
"Before building the main model architecture,\n",
"we need to include the data augmentation layer first1.\n",
"Note that a Keras Sequential model can contain a nested Sequential model.\n",
"Thus, you can directly call `AugmentationLayer()` you defined above\n",
"as one layer of your main model below.\n",
"\n",
"After augmentation, we need to standardize the model like what we have done in PA1.\n",
"Here all pixel values in the images are `[0, 255]` integers.\n",
"We simply need to rescale each pixel value to `[0, 1]` decimal numbers.\n",
"You can refer to the documentation on\n",
"[`Rescaling`](https://keras.io/api/layers/preprocessing_layers/image_preprocessing/rescaling/)\n",
"for this layer.\n",
"\n",
"In summary, you should create a `Sequential` model of the following architecture:\n",
"* Your data augmentation layer\n",
"* A proper `Rescaling` layer described above\n",
"* A convolutional layer with 16 3*3 kernels, ReLU activation & [\"He uniform\"](https://keras.io/api/layers/initializers/#heuniform-class) kernel initializer\n",
"* A 2*2 max pooling layer\n",
"* A convolutional layer with 32 3*3 kernels, ReLU activation & [\"He uniform\"](https://keras.io/api/layers/initializers/#heuniform-class) kernel initializer\n",
"* A 2*2 max pooling layer\n",
"* A convolutional layer with 64 3*3 kernels, ReLU activation & [\"He uniform\"](https://keras.io/api/layers/initializers/#heuniform-class) kernel initializer\n",
"* A 2*2 max-pooling layer\n",
"* A flatten layer to squash the 3D data to 1D\n",
"* A dropout layer with a 0.2 probability\n",
"* A dense layer with output dimension equal to `N_labels` and softmax activation\n",
"\n",
"Please refer to [their documentation](https://keras.io/api/layers/) for detailed usage.\n",
"\n",
"Usually, we pass in the `input_shape` parameter to the first layer.\n",
"But since the first layer is our customized function receiving no parameters,\n",
"we tell the model the input shape by a `model.build` function after definition.\n",
"This line is provided for you.\n",
"\n",
"> Note that in lecture notes and many other demonstrations codes,\n",
"> we can provide a string name shorthand for\n",
"> initializers and activation functions,\n",
"> as long as we use them *with their default parameters*.\n",
"> In this case, you can use either string shorthands or\n",
"> the longer version in PA.\n",
"\n",
"-----\n",
"\n",
"1: Extended reading: [Preprocessing data before the model or inside the model](https://keras.io/guides/preprocessing_layers/#preprocessing-data-before-the-model-or-inside-the-model)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "GWbIT1nJqNcs",
"outputId": "a68868c9-6346-4e1d-d23c-7f7957590e8b"
},
"outputs": [],
"source": [
"# TODO: Go to `pa2.py` and implement build_model.\n",
"\n",
"model = pa2.build_model(N_labels)\n",
"model.summary()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Task 5: Compile the model\n",
"\n",
"In Keras' terms, compiling a model is to set\n",
"the loss function,\n",
"the optimizer (a.k.a. the learning rate and other related stuff), and\n",
"the evaluation metrics (e.g., accuracy).\n",
"\n",
"Refer to the [`model.compile`](https://keras.io/api/models/model_training_apis/#compile-method) documentation.\n",
"Implement a function that **receives a model and a learning rate as parameters** and compiles the model using\n",
"* [`categorical_crossentropy`](https://keras.io/api/losses/probabilistic_losses/#categoricalcrossentropy-class) as loss function,\n",
"* [`SGD`](https://keras.io/api/optimizers/sgd/) **with given learning rate**, and\n",
"* only [`accuracy`](https://keras.io/api/metrics/accuracy_metrics/#accuracy-class) metric.\n",
"\n",
"> Note that in lecture notes and many other demonstrations codes,\n",
"> we can provide a string name shorthand for\n",
"> losses, metrics, and optimizers,\n",
"> as long as we use them *with their default parameters*.\n",
"> In this case, you can use either string shorthands or\n",
"> the longer version in PA.\n",
"\n",
"
\n",
"\n",
"> If you found problems using `optimizers.SGD(…)` or `from optimizers import SGD`, try to use `keras.optimizers.SGD(…)`.\n",
"> Not sure why this import issue happens.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "8ZJDCLebqNcs",
"outputId": "92a505ac-fe70-4fda-d16e-83fe678034bf"
},
"outputs": [],
"source": [
"# TODO: Go to `pa2.py` and implement compile_model.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Task 6: Train the model\n",
"\n",
"Refer to [`model.fit`](https://keras.io/api/models/model_training_apis/#fit-method) documentation.\n",
"Implement a function that trains a given model using\n",
"given `x_train` & `y_train` in a batch size of 32 for given #epochs.\n",
"Also, provide `x_val` & `y_val` as validation data, with a validation batch size of 32 as well.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: Go to `pa2.py` and implement train_model.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Optional Task: Parameter tuning 1\n",
"\n",
"This section gives you the first taste of parameter tuning.\n",
"For tuning on more parameters, you can explore the Internet for a deeper understanding.\n",
"\n",
"Let's try training a model with a learning rate of `1` for 5 epochs.\n",
"\n",
"Note that if the loss doesn't go down (or even increases) during training, then probably lr is too large.\n",
"\n",
"You may call your previously implemented functions by `pa2.compile_model(...)`, and `pa2.train_model(...)`.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# *OPTIONAL* TODO: Write your codes below\n",
"model = pa2.build_model(N_labels) # Build new model with newly initialized weights\n",
"\n",
"# Compile and train the model\n",
"### *OPTIONAL* START YOUR CODE HERE\n",
"\n",
"### *OPTIONAL* END YOUR CODE HERE"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Optional Task: Parameter tuning 2\n",
"\n",
"Let's also try training the model with a learning rate of 0.0001 for 5 epochs.\n",
"\n",
"Note that if the loss goes down too slowly then probably it's too small.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# *OPTIONAL* TODO: Write your codes below\n",
"model = pa2.build_model(N_labels) # Build new model with newly initialized weights\n",
"\n",
"# Compile and train the model\n",
"### *OPTIONAL* START YOUR CODE HERE\n",
"\n",
"### *OPTIONAL* END YOUR CODE HERE"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So far, you can have a basic idea of tuning the learning rate parameter.\n",
"The significance of the learning rate also depends on the optimizer we choose.\n",
"\n",
"In fact, a learning rate of 0.001 is good for our task. Let's use 0.001 and train for 20 epochs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Provided\n",
"keras.utils.set_random_seed(2211) # Reset seed, you should get the same model since this code cell\n",
"model = pa2.build_model(N_labels) # Build new model with newly initialized weights\n",
"pa2.compile_model(model, 0.001)\n",
"pa2.train_model(model, 20, x_train, y_train, x_val, y_val)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Task 7: Evaluate the model on the test dataset\n",
"\n",
"Refer to [`model.evaluate`](https://keras.io/api/models/model_training_apis/#evaluate-method) documentation.\n",
"Implement a function that evaluates the model given `x_test` & `y_test` in a batch size of 32.\n",
"\n",
"Please make sure that your model does reach a test accuracy > 60%."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "xYAJaEjGqNct",
"outputId": "13700012-aba0-4cea-d64b-d50e9aff6f70"
},
"outputs": [],
"source": [
"# TODO: Go to `pa2.py` and implement evaluate_model.\n",
"# TODO: Make sure test accuracy > 60%,\n",
"# or you may have something wrong in previous steps.\n",
"\n",
"pa2.evaluate_model(model, x_test, y_test)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Play with the model on our data\n",
"\n",
"### Task 8: Predict labels ourselves\n",
"\n",
"The `model.evaluate` function calculates accuracies given an already known test dataset x & y.\n",
"But now, we'd like to use this model on our own data to predict y!\n",
"\n",
"Refer to [`model.predict`](https://keras.io/api/models/model_training_apis/#predict-method)1 documentation.\n",
"Implement a function that receives a NumPy array as a batch of images in shape `(num_images, 28, 28, 1)` and returns the predicted labels for those images in shape `(num_images,)`.\n",
"\n",
"Note that `model.predict` returns a NumPy array of shape `(num_images, N_labels)`\n",
"\n",
"-----\n",
"\n",
"1: Extended reading: [What's the difference between Model methods `predict()` and `__call__()`?](https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: Go to `pa2.py` and implement predict_images.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The codes below take 2 images from each category in the test set and use your function to predict the label."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Provided\n",
"test_N_per_cat = x_test.shape[0] // N_labels\n",
"plt.figure(figsize=(12, 4))\n",
"for i in range(2):\n",
" images = x_test[i::test_N_per_cat, :, :, :]\n",
" labels = y_test[i::test_N_per_cat, :]\n",
" preds = pa2.predict_images(model, images)\n",
" for j in range(8):\n",
" image = images[j, :, :, 0]\n",
" pred_label = y_names[preds[j]]\n",
" true_label = labels[j, :]\n",
" true_label = 'Unknown' if np.sum(true_label) != 1 else y_names[np.argwhere(true_label == 1)[0, 0]]\n",
" ax = plt.subplot(2, 8, i * 8 + j + 1)\n",
" ax.imshow(image, cmap=\"gray\")\n",
" ax.set_title(f\"True: {true_label}\\nPred: {pred_label}\")\n",
" ax.set_axis_off()\n",
"plt.tight_layout()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Saving the model\n",
"\n",
"This part is provided to you.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GLWT2ZmiqNcv"
},
"outputs": [],
"source": [
"model.save(\"draw_model.h5\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## What's Next?\n",
"\n",
"If you want to explore more, here is something you can try.\n",
"But please keep in mind that they are not required,\n",
"or even not expected in your PA submission.\n",
"Make a copy of your work before you venture on!\n",
"\n",
"* Practice NumPy indexing and vectorization techniques\n",
" * Write your *own* code to calculate the confusion matrix on `x_test` and `y_test`.\n",
" Use *only* `model.predict` and *no* `tf.math.confusion_matrix`.\n",
" * Implement a convolution operation (given a piece/batch of data and a kernel) yourself, using as few for-loops as possible.\n",
"* Towards a better model\n",
" * Experiment with other model architectures.\n",
" * Experiment with other optimizers like [`Adam`](https://keras.io/api/optimizers/adam/)\n",
" * Tune other parameters. You may check lecture notes, Keras API documentation\n",
" and online materials to see what parameters you can tune.\n",
" * Try out Keras callbacks like [`ReduceLROnPlateau`.](https://keras.io/api/callbacks/reduce_lr_on_plateau/)\n",
"* Keep track of model statistics\n",
" * Try out [Tensorboard callback](https://keras.io/api/callbacks/tensorboard/).\n",
"* Play with other datasets\n",
" * Explore other popular datasets.\n",
" [TensorFlow](https://www.tensorflow.org/datasets/catalog/overview?hl=zh-cn#image_classification) and\n",
" [Keras](https://keras.io/api/datasets/)\n",
" also ship with some most popular datasets already.\n",
"* Engineer a better ML pipeline\n",
" * Explore other [keras callbacks](https://keras.io/api/callbacks/)\n",
" * Try to save the images in the NumPy arrays as separate `png` files in a hierarchy preferred by\n",
" [`keras.utils.image_dataset_from_directory`](https://keras.io/api/data_loading/image/#image_dataset_from_directory-function)\n",
" and do the following parts of model building, compiling, training & evaluation.\n",
" See what needs to be changed in the following parts.\n",
" * Try to wrap the NumPy array data in a `tf.data.Dataset` object using [`tf.data.Dataset.from_tensor_slices`](https://www.tensorflow.org/tutorials/load_data/numpy).\n",
" See how to specify dataset shuffling & batching in this case (and their order?),\n",
" and what needs to be changed in the following parts."
]
},
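{
"cell_type": "markdown",
"metadata": {},
"source": [
"For the confusion-matrix suggestion above, here is one hedged, NumPy-only sketch (it assumes the `model`, `x_test`, `y_test`, and `N_labels` defined earlier in this notebook):\n",
"\n",
"```python\n",
"# Entry [i, j] counts test images whose true label is i and predicted label is j.\n",
"pred = np.argmax(model.predict(x_test), axis=-1)  # predicted integer labels\n",
"true = np.argmax(y_test, axis=-1)                 # decode one-hot labels back to integers\n",
"conf = np.zeros((N_labels, N_labels), dtype=int)\n",
"np.add.at(conf, (true, pred), 1)                  # vectorized accumulation, no Python loop\n",
"print(conf)\n",
"```"
]
},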
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"colab": {
"name": "draw.ipynb",
"provenance": []
},
"interpreter": {
"hash": "59f3145cc67fcda0343c2852f1f97113a2e6e98841e887156424448e7071ad54"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}