Introduction
The Seamagine dataset contains images labeled with various machine settings. These annotations open up the following possibility: use them to finetune a backbone network, and then use the features it produces for the downstream anomaly detection task. We can contrast this approach with using a backbone that has been trained on ImageNet (and other datasets) representative of natural images. ImageNet pretraining gives the network a strong ability to recognize general visual features such as edges, textures, and shapes, while the finetuning tailors the features to the product sideseam domain. We call this model the finetuned CNN to distinguish it from the ImageNet-pretrained CNN model.

It is important to highlight that during finetuning we do not use the PASS or FAIL anomaly detection labels. We only use the supervision afforded by the machine settings, betting that together they give complete coverage of the variability of our data distribution, so that the weights of the finetuned backbone learn seam-oriented rather than general features. In more general terms, this is a form of transfer learning.

Finetuning Architecture
The finetuning process uses the following architecture:

Training Results


Finetuned Embeddings
To replicate the finetuning, use the 06_finetuning notebook. The embeddings from the finetuned model are shown below.



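For the downstream anomaly detection task mentioned above, the embeddings can be scored in many ways. One simple sketch, using only NumPy and random vectors as hypothetical stand-ins for the finetuned embeddings, is a k-nearest-neighbor distance score: an image is more anomalous the farther its embedding lies from the embeddings of known-good training images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for embeddings from the finetuned backbone.
train_emb = rng.normal(size=(100, 512))  # embeddings of normal (PASS) images
test_emb = rng.normal(size=(5, 512))     # embeddings of images to score

def knn_anomaly_score(x, bank, k=3):
    """Mean distance to the k nearest training embeddings; higher = more anomalous."""
    dists = np.linalg.norm(bank - x, axis=1)
    return np.sort(dists)[:k].mean()

scores = np.array([knn_anomaly_score(x, train_emb) for x in test_emb])
```

A threshold on such scores would then separate PASS from FAIL, without the anomaly labels ever being used during finetuning.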