Important: This notebook is not guaranteed to work without modification. It's been a while since I wrote it and I haven't tested it since. It is also sparsely commented. For a summary of this project, see Engine knock detection AI part 5/5.

Background

With the training dataset created, the model can be trained. This follows, almost verbatim, the second lesson of Practical Deep Learning for Coders, as found in the GitHub repo for the third installment of the course. The steps are described quite well in the linked notebook.

Method

Update fast.ai library

!curl -s https://course.fast.ai/setup/colab | bash
bash: line 1: syntax error near unexpected token `newline'
bash: line 1: `<!DOCTYPE html>'
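The setup script no longer seems to be served at that URL; the response is an HTML page, which causes the bash syntax error above. A possible workaround, assuming the fastai v1 API used in the rest of this notebook, is to install the library directly from pip:

# Hedged alternative to the course setup script: pin fastai to the v1 series
!pip install "fastai<2"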

Connect to google drive

from google.colab import drive
drive.mount('/content/drive')

Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code

Enter your authorization code:
··········
Mounted at /content/drive

Import fast.ai

from fastai.vision import *

Define classes

classes = ['knocking','normal']

Set path to working directory

path = Path('/content/drive/My Drive/Colab Notebooks/fast.ai/KnockKnock/data')

Validate image files

for c in classes:
    print(c)
    verify_images(path/c)
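
If any image files turn out to be unreadable, fastai v1's verify_images can also delete them and downscale oversized files; a variant using its delete and max_size arguments (the 500 px limit is just an assumption) would be:

for c in classes:
    # delete unreadable images and resize anything larger than 500 px
    verify_images(path/c, delete=True, max_size=500)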

Define data object

np.random.seed(42)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2, size=224, num_workers=4).normalize(imagenet_stats)

Verify data classes

data.classes
['knocking', 'normal']

Display images from dataset

As these images demonstrate, knocking shows up as vertical spikes in the middle of the spectrum. Some spectrograms of non-knocking engines show rhythmic components in the lower frequencies (top row, middle). It will be interesting to see how well the model can distinguish between these.

data.show_batch(rows=3, figsize=(7,8))

Show image classes and counts

data.classes, data.c, len(data.train_ds), len(data.valid_ds)
(['knocking', 'normal'], 2, 349, 87)

Fetch resnet34 model

learn = cnn_learner(data, models.resnet34, metrics=error_rate)
Downloading: "https://download.pytorch.org/models/resnet34-333f7ec4.pth" to /root/.cache/torch/checkpoints/resnet34-333f7ec4.pth
100%|██████████| 87306240/87306240 [00:00<00:00, 114748481.59it/s]

Train model

learn.fit_one_cycle(4)
epoch train_loss valid_loss error_rate time
0 0.812924 0.703951 0.425287 00:07
1 0.695543 0.604884 0.321839 00:04
2 0.568838 0.543995 0.241379 00:04
3 0.474682 0.468180 0.229885 00:04

Save model

learn.save('stage-1')

Unfreeze top layers

learn.unfreeze()

Find learning rate

learn.lr_find()
LR Finder is complete, type {learner_name}.recorder.plot() to see the graph.

Plot learning rate

# learn.lr_find(start_lr=1e-5, end_lr=1e-1)
learn.recorder.plot()

Retrain top layers

learn.fit_one_cycle(2, max_lr=slice(4e-6,4e-4))
epoch train_loss valid_loss error_rate time
0 0.018658 0.468953 0.195402 00:05
1 0.013044 0.466273 0.172414 00:05

Save model

learn.save('stage-2')

Interpret model

Load model

learn.load('stage-2');

Create interpretation from learner

interp = ClassificationInterpretation.from_learner(learn)

Plot confusion matrix

The confusion matrix shows that the model is quite good at predicting whether the engine is knocking or not. Only once did it classify an engine as running normally when it was in fact knocking; that kind of mistake could be seen as the more harmful one.

interp.plot_confusion_matrix()
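
For the exact counts behind the plot, ClassificationInterpretation also provides most_confused, which lists (actual, predicted, count) tuples for the most frequent mistakes:

interp.most_confused(min_val=1)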

Plot top losses

Looking at the cases where the model was the most unsure also gives the impression that it's quite good. The top row would be hard for me to classify correctly based on the spectrograms alone. Looking closely at the third one (top right), you can suspect something is going on from the vertical lines in the top middle of the image.

# Get the ten validation samples with the highest loss
losses, idxs = interp.top_losses(10)

# Sanity check from the lesson notebook; it only holds when top_losses() is
# called without a count, since here only the top ten are returned
len(data.valid_ds) == len(losses) == len(idxs)

interp.plot_top_losses(9)

Show audiofile players for top losses

Listening to the audio for the previously mentioned third spectrogram (knocking/0042_1.wav below), it's pretty clear that the engine isn't running well.

import IPython.display as ipd
import os

# For each top-loss spectrogram, locate the corresponding audio slice
# (same file name, .wav extension) and show an inline audio player
for img_path in data.valid_ds.items[idxs]:
  filepath, extension = os.path.splitext(img_path)
  audio_slice_path = filepath + '.wav'
  print(filepath)
  ipd.display(ipd.Audio(audio_slice_path))
/content/drive/My Drive/Colab Notebooks/fast.ai/KnockKnock/data/normal/0046_4
/content/drive/My Drive/Colab Notebooks/fast.ai/KnockKnock/data/normal/0044_3
/content/drive/My Drive/Colab Notebooks/fast.ai/KnockKnock/data/knocking/0042_1
/content/drive/My Drive/Colab Notebooks/fast.ai/KnockKnock/data/normal/0012_4
/content/drive/My Drive/Colab Notebooks/fast.ai/KnockKnock/data/knocking/0002_1
/content/drive/My Drive/Colab Notebooks/fast.ai/KnockKnock/data/normal/0042_2
/content/drive/My Drive/Colab Notebooks/fast.ai/KnockKnock/data/knocking/0016_2
/content/drive/My Drive/Colab Notebooks/fast.ai/KnockKnock/data/knocking/0002_0
/content/drive/My Drive/Colab Notebooks/fast.ai/KnockKnock/data/knocking/0035_1
/content/drive/My Drive/Colab Notebooks/fast.ai/KnockKnock/data/normal/0032_2

Export model

learn.export()
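
learn.export() writes an export.pkl file to the data path, which can later be loaded for inference without the training data. A minimal sketch of that, assuming the earlier fastai.vision import, the same path, and a hypothetical spectrogram file name:

# Load the exported model and classify a single spectrogram
# ('some_spectrogram.png' is a hypothetical example file)
learn_inf = load_learner(path)
pred_class, pred_idx, probs = learn_inf.predict(open_image(path/'some_spectrogram.png'))
print(pred_class, probs)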