For all of the three methods we test the following use-cases: #todo[maybe write m]

Those experiments were conducted on the MVTec AD dataset, using the bottle and cable classes.

== Experiment Setup

#todo[Setup of experiments, which classes used, nr of samples, kinds of experiments which lead to graphs]

== ResNet50

=== Approach

The simplest approach is to use a pre-trained ResNet50 model as a feature extractor.
After creating the embeddings for the support and query set, the Euclidean distance between them is calculated.
The class with the smallest distance is chosen as the predicted class.
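
A minimal sketch of this procedure in PyTorch is given below. It is an assumption-laden illustration, not the exact code used here: the helper names are made up, and the support embeddings of each class are averaged into a prototype $p_c$, so each query embedding $q$ is assigned the class $c$ with the smallest $norm(q - p_c)_2$.

```python
import torch
from torchvision.models import resnet50, ResNet50_Weights

# Pre-trained ResNet50; replacing the classification head with an identity
# turns the network into a 2048-dimensional feature extractor.
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
model.fc = torch.nn.Identity()
model.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    # images: (N, 3, 224, 224), already resized and normalized
    return model(images)

@torch.no_grad()
def predict(support, support_labels, query):
    support_emb = embed(support)
    query_emb = embed(query)
    classes = support_labels.unique()
    # Assumption: average the support embeddings of each class into one
    # prototype before measuring distances.
    prototypes = torch.stack(
        [support_emb[support_labels == c].mean(dim=0) for c in classes])
    dists = torch.cdist(query_emb, prototypes)  # pairwise Euclidean distances
    return classes[dists.argmin(dim=1)]
```
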
=== Results

This method performed better than expected, given how simple it is.

#todo[Add images of graphs with ResNet50 stuff only]

== P>M>F

=== Approach

=== Results

== CAML

=== Approach

For the CAML implementation, the pretrained model weights from the original paper were used.
As a feature extractor, a ViT-B/16 model was used, which is a Vision Transformer with a patch size of 16.
This feature extractor was already pretrained when the authors of the original paper used it.
For the non-causal sequence model, a transformer model was used.
It consists of 24 layers with 16 attention heads, a hidden dimension of 1024, and an output MLP size of 4096.
This transformer was trained on a large number of images, as described in @CAML.
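
How these components fit together is sketched below. This is a rough, hypothetical reconstruction rather than the original implementation: the projection layer, the learnable label embedding (standing in for the paper's fixed label encoding), and the classification head are assumptions; only the feature extractor and the sequence-model dimensions come from the description above.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

class CAMLSketch(nn.Module):
    def __init__(self, d_model=1024, depth=24, heads=16, mlp=4096, n_way=2):
        super().__init__()
        self.backbone = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
        self.backbone.heads = nn.Identity()            # 768-d image embeddings
        self.label_emb = nn.Embedding(n_way + 1, 256)  # last index = "unknown"
        self.proj = nn.Linear(768 + 256, d_model)
        layer = nn.TransformerEncoderLayer(d_model, heads,
                                           dim_feedforward=mlp,
                                           batch_first=True)
        # TransformerEncoder applies no causal mask, so every element of the
        # sequence attends to all support samples and the query at once.
        self.seq_model = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(d_model, n_way)

    def forward(self, support, support_labels, query):
        with torch.no_grad():                  # the feature extractor is frozen
            s = self.backbone(support)         # (n_support, 768)
            q = self.backbone(query)           # (n_query, 768)
        unknown = torch.full((q.size(0),), self.label_emb.num_embeddings - 1,
                             dtype=torch.long, device=q.device)
        tokens = torch.cat([
            torch.cat([s, self.label_emb(support_labels)], dim=-1),
            torch.cat([q, self.label_emb(unknown)], dim=-1),
        ])                                     # (n_support + n_query, 1024)
        out = self.seq_model(self.proj(tokens).unsqueeze(0))
        return self.head(out[0, -q.size(0):])  # logits for the query images
```
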
=== Results

The results were not as good as expected.
This might be caused by the fact that the model was not fine-tuned for any industrial dataset domain.
The model was trained on a large number of general-purpose images and not fine-tuned at all.
As a result, it might not handle very similar images well.

#todo[Add images of graphs with CAML stuff only]

== Jupyter

To get accurate performance measures, the active-learning process was first implemented in a Jupyter notebook.
This helps to determine which of the methods performs best and which one to use in the final Dagster pipeline.
A straightforward machine-learning pipeline was implemented with the help of PyTorch and ResNet-18.

Moreover, the dataset was manually imported with a custom torch dataloader and preprocessed with random augmentations, as sketched below.
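
The following dataset wrapper is an illustrative sketch; the folder layout and the concrete set of augmentations are assumptions, since the notebook's exact preprocessing is not listed here.

```python
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from PIL import Image

# Illustrative random augmentations; the notebook may use a different set.
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

class MVTecDataset(Dataset):
    """Minimal custom dataset over a list of (image path, label) pairs."""

    def __init__(self, samples, transform=None):
        self.samples = samples
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        image = Image.open(path).convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return image, label

# Placeholder sample list; in practice built from the MVTec folder structure.
samples = [("bottle/good/000.png", 0)]
train_loader = DataLoader(MVTecDataset(samples, train_transform),
                          batch_size=32, shuffle=True)
```
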
After each loop iteration, the Area Under the Curve (AUC) was calculated over the validation set to get a performance measure.
All those AUC values were visualized in a line plot; see @sec:experimental-results for the results.
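
The measurement loop can be condensed to the following sketch, assuming a binary good/defective labelling and scikit-learn for the AUC computation. `train_one_iteration` and `anomaly_scores` are hypothetical helpers standing in for the active-learning step and the model's scoring pass, and `model`, `num_iterations`, and the loaders are assumed to exist from the surrounding notebook.

```python
import torch
from sklearn.metrics import roc_auc_score

auc_history = []
for iteration in range(num_iterations):
    # One active-learning step: query labels, then fine-tune the ResNet-18.
    model = train_one_iteration(model, train_loader)

    # Collect anomaly scores and ground-truth labels over the validation set.
    scores, labels = [], []
    with torch.no_grad():
        for images, y in val_loader:
            scores.extend(anomaly_scores(model, images).tolist())
            labels.extend(y.tolist())

    # AUC of the ROC curve as the per-iteration performance measure.
    auc_history.append(roc_auc_score(labels, scores))

# auc_history is then drawn as a line plot over the iterations.
```
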