#import "@preview/subpar:0.1.1"
#import "utils.typ": todo
#import "@preview/equate:0.2.1": equate
= Material and Methods
== Material
=== MVTec AD
MVTec AD is a dataset for benchmarking anomaly detection methods with a focus on industrial inspection.
It contains 5354 high-resolution images divided into fifteen different object and texture categories.
Each category comprises a set of defect-free training images and a test set of images with various kinds of defects as well as images without defects.
#figure(
image("rsc/mvtec/dataset_overview_large.png", width: 80%),
caption: [Sample images of the MVTec AD dataset categories. #cite(<datasetsampleimg>)],
) <datasetoverview>
In this bachelor thesis only two categories are used: "Bottle" and "Cable".
The bottle category contains three different defect classes: 'broken_large', 'broken_small' and 'contamination'.
#subpar.grid(
figure(image("rsc/mvtec/bottle/broken_large_example.png"), caption: [
Broken large defect
]), <a>,
figure(image("rsc/mvtec/bottle/broken_small_example.png"), caption: [
Broken small defect
]), <b>,
figure(image("rsc/mvtec/bottle/contamination_example.png"), caption: [
Contamination defect
]), <c>,
columns: (1fr, 1fr, 1fr),
caption: [Defect classes of the bottle category],
label: <full>,
)
The cable category has considerably more defect classes: 'bent_wire', 'cable_swap', 'combined', 'cut_inner_insulation',
'cut_outer_insulation', 'missing_cable', 'missing_wire', 'poke_insulation'.
The larger number of defect classes already indicates that the classification task is likely more difficult for the cable category.
#subpar.grid(
figure(image("rsc/mvtec/cable/bent_wire_example.png"), caption: [
Bent wire defect
]), <a>,
figure(image("rsc/mvtec/cable/cable_swap_example.png"), caption: [
Cable swap defect
]), <b>,
figure(image("rsc/mvtec/cable/combined_example.png"), caption: [
Combined defect
]), <c>,
figure(image("rsc/mvtec/cable/cut_inner_insulation_example.png"), caption: [
Cut inner insulation
]), <d>,
figure(image("rsc/mvtec/cable/cut_outer_insulation_example.png"), caption: [
Cut outer insulation
]), <e>,
figure(image("rsc/mvtec/cable/missing_cable_example.png"), caption: [
Mising cable defect
]), <e>,
figure(image("rsc/mvtec/cable/poke_insulation_example.png"), caption: [
Poke insulation defect
]), <f>,
figure(image("rsc/mvtec/cable/missing_wire_example.png"), caption: [
Missing wire defect
]), <g>,
columns: (1fr, 1fr, 1fr, 1fr),
caption: [Defect classes of the cable category],
label: <full>,
)
== Methods
=== Few-Shot Learning
Few-shot learning is a subfield of machine learning which aims to train a classification model with only a few labeled samples or, in the extreme case, none at all.
This is in contrast to traditional supervised learning, where a large amount of labeled data is required to generalize well to unseen data.
With so little training data, the model is prone to overfitting to the few available samples.
Typically a few-shot learning task consists of a support set and a query set.
The support set contains the labeled training samples, while the query set contains the samples to be classified during evaluation.
A common way to describe a few-shot learning problem is the n-way k-shot notation.
For example, a task with 3 target classes and 5 training samples per class is a 3-way 5-shot classification problem.
A classical example of how such a model might work is a prototypical network.
These models learn a representation of each class and classify new examples based on proximity to these representations in an embedding space.
#figure(
image("rsc/prototype_fewshot_v3.png", width: 60%),
caption: [Prototypical network for few-shots. #cite(<snell2017prototypicalnetworksfewshotlearning>)],
) <prototypefewshot>
The first and simplest method of this bachelor thesis uses a simple ResNet to calculate those embeddings and is essentially a plain prototypical network.
See #todo[link to this section]
#todo[proper source]
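The nearest-prototype classification described above can be illustrated by the following minimal sketch; it is not the implementation used later in this thesis and assumes that the image embeddings have already been computed by a backbone network.
```python
import numpy as np

def class_prototypes(support_emb: np.ndarray, support_labels: np.ndarray) -> np.ndarray:
    """One prototype (mean embedding) per class, computed from the support set."""
    classes = np.unique(support_labels)
    return np.stack([support_emb[support_labels == c].mean(axis=0) for c in classes])

def classify_queries(query_emb: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Assign every query embedding to the class of its nearest prototype."""
    dists = np.linalg.norm(query_emb[:, None, :] - prototypes[None, :, :], axis=-1)
    return dists.argmin(axis=1)  # index of the closest prototype per query
```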
=== Generalisation from few samples
An especially hard task is to generalize from so few samples.
In typical supervised learning the model sees thousands or millions of samples of the corresponding domain during training.
This helps the model to learn the underlying patterns and to generalize well to unseen data.
In few-shot learning the model has to generalize from just a few samples.
=== Softmax
#todo[Maybe remove this section]
The Softmax function @softmax #cite(<liang2017soft>) converts a vector of $n$ real numbers into a probability distribution.
It is a generalization of the Sigmoid function and is often used as the last activation layer in neural networks.
$
sigma(bold(z))_j = (e^(z_j)) / (sum_(k=1)^n e^(z_k)) "for" j in {1,...,n}
$ <softmax>
The softmax function is closely related to the Boltzmann distribution, which was first introduced in the 19th century #cite(<Boltzmann>).
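As a brief illustration, the following sketch implements @softmax in NumPy; the subtraction of the maximum is a common trick for numerical stability and not part of the definition above.
```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Convert a vector of real numbers into a probability distribution."""
    e = np.exp(z - z.max())  # shift by the maximum for numerical stability
    return e / e.sum()

softmax(np.array([1.0, 2.0, 3.0]))  # -> approx. [0.09, 0.24, 0.67]
```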
=== Cross Entropy Loss
#todo[Maybe remove this section]
Cross Entropy Loss is a well established loss function in machine learning.
@crelformal #cite(<crossentropy>) shows the formal general definition of the Cross Entropy Loss.
@crelbinary is the special case of the general Cross Entropy Loss for binary classification tasks.
$
H(p,q) &= -sum_(x in cal(X)) p(x) log q(x) #<crelformal>\
H(p,q) &= -(p log(q) + (1-p) log(1-q)) #<crelbinary>\
cal(L)(p,q) &= -1/cal(B) sum_(i=1)^(cal(B)) (p_i log(q_i) + (1-p_i) log(1-q_i)) #<crelbatched>
$ <crel>
Equation @crelbatched #cite(<handsonaiI>) is the Binary Cross Entropy Loss for a batch of size $cal(B)$ and is used for model training in this thesis.
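The batched loss @crelbatched can be sketched as follows; this is only an illustrative NumPy version, in practice the loss implementation of the deep learning framework is used.
```python
import numpy as np

def binary_cross_entropy(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """Mean binary cross entropy over a batch; p are targets, q predicted probabilities."""
    q = np.clip(q, eps, 1.0 - eps)  # avoid log(0)
    return float(-np.mean(p * np.log(q) + (1.0 - p) * np.log(1.0 - q)))
```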
=== Cosine Similarity
To measure the distance between two vectors, several common distance measures can be used.
One popular choice is the Cosine Similarity (@cosinesimilarity).
It measures the cosine of the angle between two vectors.
The Cosine Similarity is especially useful when the magnitude of the vectors is not important.
$
cos(theta) &:= (A dot B) / (||A|| dot ||B||)\
&= (sum_(i=1)^n A_i B_i)/ (sqrt(sum_(i=1)^n A_i^2) dot sqrt(sum_(i=1)^n B_i^2))
$ <cosinesimilarity>
#todo[Source?]
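A direct translation of @cosinesimilarity into NumPy might look as follows (illustrative sketch):
```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; independent of their magnitudes."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```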
=== Euclidean Distance
The Euclidean distance (@euclideannorm) is a simpler method to measure the distance between two points in a vector space.
It calculates the square root of the sum of the squared differences of the coordinates.
The Euclidean distance can also be expressed as the L2 norm (Euclidean norm) of the difference of the two vectors.
$
cal(d)(A,B) = ||A-B|| := sqrt(sum_(i=1)^n (A_i - B_i)^2)
$ <euclideannorm>
#todo[Source?]
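Analogously, @euclideannorm corresponds to a single call to the L2 norm (illustrative sketch):
```python
import numpy as np

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """L2 norm of the difference vector."""
    return float(np.linalg.norm(a - b))
```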
=== Patchcore
// https://arxiv.org/pdf/2106.08265
PatchCore is an advanced method designed for cold-start anomaly detection and localization, primarily focused on industrial image data.
It operates on the principle that an image is anomalous if any of its patches is anomalous.
The method achieves state-of-the-art performance on benchmarks like MVTec AD with high accuracy, low computational cost, and competitive inference times. #cite(<patchcorepaper>)
#todo[Absatz umformulieren und vereinfachen]
The PatchCore framework leverages a pre-trained convolutional neural network (e.g., WideResNet50) to extract mid-level features from image patches.
By focusing on intermediate layers, PatchCore balances the retention of localized information with a reduction in bias associated with high-level features pre-trained on ImageNet.
To enhance robustness to spatial variations, the method aggregates features from local neighborhoods using adaptive pooling, which increases the receptive field without sacrificing spatial resolution. #cite(<patchcorepaper>)
A crucial component of PatchCore is its memory bank, which stores patch-level features derived from the training dataset.
This memory bank represents the nominal distribution of features against which test patches are compared.
To ensure computational efficiency and scalability, PatchCore employs a coreset reduction technique to condense the memory bank by selecting the most representative patch features.
This optimization reduces both storage requirements and inference times while maintaining the integrity of the feature space. #cite(<patchcorepaper>)
#todo[reference to image below]
During inference, PatchCore computes anomaly scores by measuring the distance between patch features from test images and their nearest neighbors in the memory bank.
If any patch exhibits a significant deviation, the corresponding image is flagged as anomalous.
For localization, the anomaly scores of individual patches are spatially aligned and upsampled to generate segmentation maps, providing pixel-level insights into the anomalous regions.~#cite(<patchcorepaper>)
PatchCore reaches a 99.6% AUROC on the MVTec AD dataset when detecting anomalies.
A great advantage of this method is the coreset subsampling, which reduces the memory bank size significantly.
This lowers computational costs while maintaining detection accuracy.~#cite(<patchcorepaper>)
#figure(
image("rsc/patchcore_overview.png", width: 80%),
caption: [Architecture of Patchcore. #cite(<patchcorepaper>)],
) <patchcoreoverview>
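The scoring step described above can be summarized by the following simplified sketch; it assumes the patch features have already been extracted, keeps the memory bank as a plain array, and omits the coreset subsampling and re-weighting of the original method.
```python
import numpy as np

def patch_scores(test_patches: np.ndarray, memory_bank: np.ndarray) -> np.ndarray:
    """Distance of every test patch feature to its nearest neighbour in the memory bank."""
    dists = np.linalg.norm(test_patches[:, None, :] - memory_bank[None, :, :], axis=-1)
    return dists.min(axis=1)

def image_score(test_patches: np.ndarray, memory_bank: np.ndarray) -> float:
    """An image is as anomalous as its most anomalous patch."""
    return float(patch_scores(test_patches, memory_bank).max())
```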
=== EfficientAD
// https://arxiv.org/pdf/2303.14535
EfficientAD is another state-of-the-art method for anomaly detection.
It focuses on high computational efficiency while maintaining strong detection performance.
At its core, EfficientAD uses a lightweight feature extractor, the Patch Description Network (PDN), which processes images in less than a millisecond on modern hardware.
In comparison to PatchCore, which relies on a deeper, more computationally heavy WideResNet-101 network, the PDN uses only four convolutional layers and two pooling layers.
This results in reduced latency while retaining the ability to generate patch-level features.~#cite(<efficientADpaper>)
#todo[reference to image below]
The detection of anomalies is achieved through a student-teacher framework.
The teacher network is a PDN pre-trained on normal (good) images, and the student network is trained to predict the teacher's output.
An anomaly is identified when the student fails to replicate the teacher's output.
This works because anomalies are absent from the training data, so the student network has never seen an anomaly during training.
A special loss function prevents the student network from generalizing too broadly and learning to predict anomalous features.~#cite(<efficientADpaper>)
In addition to this structural anomaly detection, EfficientAD can also address logical anomalies, such as violations of spatial or contextual constraints (e.g. wrongly arranged objects).
This is done by integrating an autoencoder trained to replicate the teacher's features.~#cite(<efficientADpaper>)
By comparing the outputs of the autoencoder and the student, logical anomalies are effectively detected.
This is a challenge that PatchCore does not directly address.~#cite(<efficientADpaper>)
#todo[maybe add key advantages such as low computational cost and high performance]
#figure(
image("rsc/efficientad_overview.png", width: 80%),
caption: [Architecture of EfficientAD. #cite(<efficientADpaper>)],
) <efficientadoverview>
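The structural part of the student-teacher comparison can be sketched as follows; teacher and student are assumed to be PyTorch modules that output feature maps of the same shape, and the autoencoder branch as well as the normalization of the original method are omitted.
```python
import torch

@torch.no_grad()
def structural_anomaly_map(teacher: torch.nn.Module,
                           student: torch.nn.Module,
                           image: torch.Tensor) -> torch.Tensor:
    """Per-location squared difference between teacher and student features."""
    t = teacher(image)  # (B, C, H, W) feature map
    s = student(image)  # (B, C, H, W) feature map
    return ((t - s) ** 2).mean(dim=1)  # average over channels -> (B, H, W) anomaly map
```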
=== Jupyter Notebook
A Jupyter notebook is a shareable document which combines code and its output, text and visualizations.
The notebook along with the editor provides an environment for fast prototyping and data analysis.
It is widely used in the data science, mathematics and machine learning community.
In the context of this bachelor thesis it was used to test and evaluate the three few-shot learning methods and to compare them. #cite(<jupyter>)
=== CNN
Convolutional neural networks are particularly well-suited model architectures for processing images, speech and audio signals.
A CNN typically consists of convolutional layers, pooling layers and fully connected layers.
A convolutional layer consists of a set of learnable kernels (filters).
Each filter performs a convolution operation by sliding its window over the input image.
At each position the dot product between the filter and the underlying image patch is computed; the results form a feature map.
Convolutional layers capture features like edges, textures or shapes.
Pooling layers downsample the feature maps created by the convolutional layers.
This reduces the computational complexity of the overall network and helps against overfitting.
Common pooling layers include average- and max pooling.
Finally, after several convolutional layers, the feature map is flattened and passed to a network of fully connected layers to perform a classification or regression task.
@cnnarchitecture shows a typical binary classification task.~#cite(<cnnintro>)
#figure(
image("rsc/cnn_architecture.png", width: 80%),
caption: [Architecture convolutional neural network. #cite(<cnnarchitectureimg>)],
) <cnnarchitecture>
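A minimal PyTorch sketch of such a network for a binary classification task is shown below; the layer sizes are purely illustrative and assume 224x224 RGB input images.
```python
import torch.nn as nn

class SmallCNN(nn.Module):
    """Two convolution/pooling stages followed by fully connected layers."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 128), nn.ReLU(),  # 224x224 input -> 56x56 after two poolings
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```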
=== ResNet
Residual neural networks are a special type of neural network architecture.
They are especially good for deep learning and have been used in many state-of-the-art computer vision tasks.
The main idea behind ResNet is the skip connection.
A skip connection is a direct connection from one layer to a later layer, bypassing the layers in between.
This mitigates the vanishing gradient problem and facilitates the training of very deep networks.
ResNet has proven to be very successful in many computer vision tasks and is used in this bachelor thesis for the classification task.
There are several different ResNet architectures, the most common are ResNet-18, ResNet-34, ResNet-50, ResNet-101 and ResNet-152. #cite(<resnet>)
For this bachelor thesis the ResNet-50 architecture was used to compute the embeddings for the few-shot learning methods.
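The central building block, the residual (skip) connection, can be sketched as follows; this simplified block assumes equal input and output channels, in contrast to the bottleneck blocks of the actual ResNet-50.
```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = ReLU(F(x) + x)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection: add the block input back
```
In practice a pre-trained ResNet-50, e.g. `torchvision.models.resnet50`, can be used as the embedding backbone by removing its final classification layer.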
=== P$>$M$>$F
// https://arxiv.org/pdf/2204.07305
P>M>F (Pre-training > Meta-training > Fine-tuning) is a three-stage pipeline designed for few-shot learning.
It focuses on simplicity but still achieves competitive performance.
The three stages gradually convert a general feature extractor into a task-specific model.
#cite(<pmfpaper>)
*Pre-training:*
The first stage in @pmfarchitecture initializes the backbone feature extractor.
This can be, for instance, a ResNet or a ViT and is trained with self-supervised techniques.
The backbone is trained on a large-scale dataset from a general domain such as ImageNet or similar.
This step optimizes for robust feature extraction and builds a foundation model.
There are well-established methods for pre-training which can be used, such as DINO (self-supervised consistency), CLIP (image-text alignment) or BERT (for text data).
#cite(<pmfpaper>)
*Meta-training:*
The second stage of the pipeline, as shown in @pmfarchitecture, is the meta-training.
Here a prototypical network (ProtoNet) is used to refine the pre-trained backbone.
ProtoNet constructs class centroids for each episode and then performs nearest class centroid classification.
Have a look at @prototypefewshot for a visualisation of its architecture.
The ProtoNet only requires a backbone $f$ to map images to an m-dimensional vector space: $f: cal(X) -> RR^m$.
The probability of a query image $x$ belonging to class $k$ is given by the softmax of the negative distances between the query embedding and the class centroids:
$
p(y=k|x) = exp(-d(f(x), c_k)) / (sum_(k') exp(-d(f(x), c_(k')))) #cite(<pmfpaper>)
$
As distance metric $d$ the cosine similarity is used; see @cosinesimilarity for the formula.
The prototype of class $k$ is defined as $c_k = 1/N_k sum_(i:y_i=k) f(x_i)$, where $N_k$ is the number of support samples of class $k$.
The meta-training process is dataset-agnostic, allowing for flexible adaptation to various few-shot classification scenarios.#cite(<pmfpaper>)
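The classification rule above can be sketched in PyTorch as follows; the cosine distance is expressed here as $1 - cos(theta)$, and the tensors are assumed to already contain the embeddings produced by the backbone $f$.
```python
import torch
import torch.nn.functional as F

def protonet_probabilities(query_emb: torch.Tensor,   # (Q, m) query embeddings f(x)
                           prototypes: torch.Tensor   # (K, m) class centroids c_k
                           ) -> torch.Tensor:
    """p(y=k|x) as a softmax over negative cosine distances to the class prototypes."""
    cos = F.normalize(query_emb, dim=-1) @ F.normalize(prototypes, dim=-1).T  # (Q, K)
    return torch.softmax(-(1.0 - cos), dim=-1)
```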
*Fine-tuning:*
If a novel task is drawn from an unseen domain, the model may fail to generalize because of a significant shift in the data distribution.
To overcome this, the model is optionally fine-tuned on the support set for a few gradient steps.
Data augmentation is used to generate a pseudo query set.
The class prototypes are computed from the support set and compared against the model's predictions for the pseudo query set.
The resulting loss is used to fine-tune the whole model to the new domain.~#cite(<pmfpaper>)
#figure(
image("rsc/pmfarchitecture.png", width: 100%),
caption: [Architecture of P>M>F. #cite(<pmfpaper>)],
) <pmfarchitecture>
*Inference:*
During inference the support set is used to calculate the class prototypes.
For a query image the feature extractor computes its embedding in the lower-dimensional space and compares it to the pre-computed prototypes.
The query image is then assigned to the class with the closest prototype.#cite(<pmfpaper>)
*Performance:*
P>M>F performs well across several few-shot learning benchmarks.
The combination of pre-training on a large dataset and meta-training with episodic tasks helps the model to generalize well.
The inclusion of fine-tuning enhances adaptability to unseen domains, ensuring robust and efficient learning.#cite(<pmfpaper>)
*Limitations and Scalability:*
This method has some limitations.
It relies on large external datasets, which require substantial computational resources to create the pre-trained models.
Fine-tuning is effective but can be slow and may not work well on devices with limited computational resources.
Future research could focus on exploring faster and more efficient methods for fine-tuning models.
#cite(<pmfpaper>)
=== CAML <CAML>
// https://arxiv.org/pdf/2310.10971v2
CAML (Context-Aware Meta-Learning) is one of the state-of-the-art methods for few-shot learning.
It consists of three different components: a frozen pre-trained image encoder, a fixed Equal Length and Maximally Equiangular Set (ELMES) class encoder and a non-causal sequence model.
It is a universal meta-learning approach,
which means that no fine-tuning or meta-training is applied for specific domains.~#cite(<caml_paper>)
*Architecture:*
CAML first encodes the query and support set images using the frozen pre-trained feature extractor as shown in @camlarchitecture.
This step maps the images into a low-dimensional space where similar images are encoded into similar embeddings.
The class labels are encoded with the ELMES class encoder.
Since the class of the query image is unknown at this stage, a special learnable "unknown token" is added to the encoder.
This embedding is learned during pre-training.
Afterwards each image embedding is concatenated with the corresponding class embedding.
~#cite(<caml_paper>)
#todo[Add more references to the architecture image below]
*ELMES Encoder:*
The ELMES (Equal Length and Maximally Equiangular Set) encoder encodes the class labels as vectors of equal length.
The encoder is a bijective mapping between the labels and a set of vectors that are of equal length and maximally equiangular.
#todo[Describe what equiangular and bijective means]
It is similar to a one-hot encoding but has some advantages:
it maximizes the algorithm's ability to distinguish between the different classes.
~#cite(<caml_paper>)
*Non-causal sequence model:*
The sequence created by the ELMES encoder is then fed into a non-causal sequence model.
This might be, for instance, a transformer encoder.
This step conditions the input sequence consisting of the query and support set embeddings.
Visual features from the query and support set can be compared to each other to determine specific information such as content or texture.
This can then be used to predict the class of the query image.
From the output of the sequence model the element at the same position as the query is selected.
Afterwards it is passed through a simple MLP network to predict the class of the query image.
~#cite(<caml_paper>)
*Large-Scale Pre-Training:*
CAML is pre-trained on a huge number of images from ImageNet-1k, Fungi, MSCOCO, and WikiArt datasets.
Those datasets span different domains and help to detect any new visual concept during inference.
Only the non-causal sequence model is trained and the weights of the image encoder and ELMES encoder are kept frozen.
~#cite(<caml_paper>)
*Inference:*
During inference, CAML processes the following:
- Encodes the support set images and labels with the pre-trained feature and class encoders.
- Concatenates these encodings into a sequence alongside the query image embedding.
- Passes the sequence through the non-causal sequence model, enabling dynamic interaction between query and support set representations.
- Extracts the transformed query embedding and classifies it using a Multi-Layer Perceptron (MLP).~#cite(<caml_paper>)
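These steps are summarized in the following schematic sketch; all components (`image_encoder`, `elmes`, `sequence_model`, `mlp_head`) are placeholders for the pre-trained modules described above and not an actual API of the published implementation.
```python
import torch

def caml_predict(image_encoder, elmes, sequence_model, mlp_head,
                 support_images, support_labels, query_image):
    """Schematic CAML inference: encode, concatenate, run the sequence model, classify."""
    sup = image_encoder(support_images)            # (N, d) support image embeddings
    qry = image_encoder(query_image.unsqueeze(0))  # (1, d) query image embedding
    lbl = elmes(support_labels)                    # (N, c) ELMES class encodings
    unk = elmes.unknown_token.unsqueeze(0)         # (1, c) learnable "unknown" label token
    seq = torch.cat([torch.cat([qry, unk], dim=-1),          # query entry first
                     torch.cat([sup, lbl], dim=-1)], dim=0)  # followed by the support set
    out = sequence_model(seq.unsqueeze(0))         # non-causal transformer encoder
    return mlp_head(out[0, 0])                     # class logits at the query position
```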
*Performance:*
CAML achieves state-of-the-art performance in universal meta-learning across 11 few-shot classification benchmarks,
including generic object recognition (e.g., MiniImageNet), fine-grained classification (e.g., CUB, Aircraft),
and cross-domain tasks (e.g., Pascal+Paintings).
It outperformed or matched existing models in 14 of 22 evaluation settings.
It performs competitively against P>M>F on 8 benchmarks, even though P>M>F was meta-trained on the same domain.
~#cite(<caml_paper>)
CAML generalizes well and is efficient at inference but faces limitations in specialized domains (e.g., ChestX)
and low-resolution tasks (e.g., CIFAR-fs).
Its use of frozen pre-trained feature extractors is key to avoiding overfitting and enabling robust performance.
~#cite(<caml_paper>)
#todo[We should add stuff here why we have a max amount of shots bc. of pretrained model]
#figure(
image("rsc/caml_architecture.png", width: 100%),
caption: [Architecture of CAML. #cite(<caml_paper>)],
) <camlarchitecture>
== Alternative Methods
There are several alternative methods to few-shot learning which are not used in this bachelor thesis.
#todo[Do it!]