diff --git a/main.typ b/main.typ
index 0dc7db3..9d5c397 100644
--- a/main.typ
+++ b/main.typ
@@ -100,9 +100,9 @@ To everyone who contributed to this thesis, directly or indirectly, I offer my h
 // Set citation style
-#set cite(style: "iso-690-author-date") // page info visible
+// #set cite(style: "iso-690-author-date") // page info visible
 //#set cite(style: "iso-690-numeric") // page info visible
-//#set cite(style: "springer-basic")// no additional info visible (page number in square brackets)
+#set cite(style: "springer-basic")// no additional info visible (page number in square brackets)
 //#set cite(style: "alphanumeric")// page info not visible
diff --git a/materialandmethods.typ b/materialandmethods.typ
index ab7d81a..553130c 100644
--- a/materialandmethods.typ
+++ b/materialandmethods.typ
@@ -98,6 +98,7 @@ This helps the model to learn the underlying patterns and to generalize well to
 In few-shot learning the model has to generalize from just a few samples.
 
 === Patchcore
+// https://arxiv.org/pdf/2106.08265
 PatchCore is an advanced method designed for cold-start anomaly detection and localization, primarily focused on industrial image data.
 It operates on the principle that an image is anomalous if any of its patches is anomalous.
 The method achieves state-of-the-art performance on benchmarks like MVTec AD with high accuracy, low computational cost, and competitive inference times.
 #cite()
@@ -114,22 +115,49 @@ This optimization reduces both storage requirements and inference times while ma
 During inference, PatchCore computes anomaly scores by measuring the distance between patch features from test images and their nearest neighbors in the memory bank.
 If any patch exhibits a significant deviation, the corresponding image is flagged as anomalous.
-For localization, the anomaly scores of individual patches are spatially aligned and upsampled to generate segmentation maps, providing pixel-level insights into the anomalous regions.
-#cite()
+For localization, the anomaly scores of individual patches are spatially aligned and upsampled to generate segmentation maps, providing pixel-level insights into the anomalous regions.~#cite()
 
 Patchcore reaches a 99.6% AUROC on the MVTec AD dataset when detecting anomalies.
 A great advantage of this method is the coreset subsampling reducing the memory bank size significantly.
-This lowers computational costs while maintaining detection accuracy. #cite()
+This lowers computational costs while maintaining detection accuracy.~#cite()
+
+// todo reference to image below
 
 #figure(
   image("rsc/patchcore_overview.png", width: 80%),
   caption: [Architecture of Patchcore. #cite()],
 )
-// https://arxiv.org/pdf/2106.08265
 
 === EfficientAD
-todo stuff #cite()
 // https://arxiv.org/pdf/2303.14535
+EfficientAD is another state-of-the-art method for anomaly detection.
+It focuses on high computational efficiency while maintaining strong detection performance.
+At its core, EfficientAD uses a lightweight feature extractor, the Patch Description Network (PDN), which processes images in less than a millisecond on modern hardware.
+In comparison to Patchcore, which relies on a deeper, more computationally heavy WideResNet-101 network, the PDN uses only four convolutional layers and two pooling layers.
+This results in reduced latency while retaining the ability to generate patch-level features.~#cite()
+
+The detection of anomalies is achieved through a student-teacher framework.
+The teacher network is a PDN pre-trained on normal (good) images, and the student network is trained to predict the teacher's output.
+An anomaly is identified when the student fails to replicate the teacher's output.
+This works because of the absence of anomalies in the training data: the student network has never seen an anomaly during training.
+A special loss function prevents the student network from generalizing too broadly and inadvertently learning to predict anomalous features.~#cite()
+
+In addition to this structural anomaly detection, EfficientAD can also address logical anomalies, such as violations of spatial or contextual constraints (e.g. wrong arrangements of objects).
+This is done by integrating an autoencoder trained to replicate the teacher's features.~#cite()
+
+By comparing the outputs of the autoencoder and the student, logical anomalies are effectively detected.
+This is a challenge that Patchcore does not directly address.~#cite()
+
+// todo maybe add key advantages such as low computational cost and high performance
+//
+// todo reference to image below
+
+#figure(
+  image("rsc/efficientad_overview.png", width: 80%),
+  caption: [Architecture of EfficientAD. #cite()],
+)
 
 === Jupyter Notebook
@@ -150,8 +178,7 @@ Pooling layers sample down the feature maps created by the convolutional layers.
 This helps reducing the computational complexity of the overall network and help with overfitting.
 Common pooling layers include average- and max pooling.
 Finally, after some convolution layers the feature map is flattened and passed to a network of fully connected layers to perform a classification or regression task.
-@cnnarchitecture shows a typical binary classification task.
-#cite()
+@cnnarchitecture shows a typical binary classification task.~#cite()
 
 #figure(
   image("rsc/cnn_architecture.png", width: 80%),
diff --git a/rsc/efficientad_overview.png b/rsc/efficientad_overview.png
new file mode 100644
index 0000000..ac4d79c
Binary files /dev/null and b/rsc/efficientad_overview.png differ
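The PatchCore scoring rule added in this patch (nearest-neighbor distance of patch features against a memory bank, maximum over patches per image) can be sketched in a few lines. This is a toy NumPy illustration under assumed names (`memory_bank`, `patch_scores`, `image_score` are invented for the example), not the authors' implementation: the bank here is random data rather than coreset-subsampled WideResNet features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "memory bank" of patch features from normal images. In PatchCore this
# would hold coreset-subsampled deep features; here it is just random data.
memory_bank = rng.normal(size=(500, 64))

def patch_scores(patches, bank):
    # Distance of every patch feature to its nearest neighbor in the bank.
    dists = np.linalg.norm(patches[:, None, :] - bank[None, :, :], axis=-1)
    return dists.min(axis=1)

def image_score(patches, bank):
    # An image is anomalous if any of its patches is anomalous: take the max.
    return patch_scores(patches, bank).max()

normal_patches = rng.normal(size=(49, 64))   # e.g. a 7x7 grid of patch features
anomalous_patches = normal_patches.copy()
anomalous_patches[0] += 8.0                  # one strongly deviating patch

assert image_score(anomalous_patches, memory_bank) > image_score(normal_patches, memory_bank)
```

In this sketch, thresholding `image_score` gives image-level detection, while the per-patch values from `patch_scores`, reshaped to the patch grid and upsampled, correspond to the segmentation maps the text describes.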
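The student-teacher idea behind EfficientAD can likewise be sketched with a minimal stand-in model. This is an illustrative assumption-laden toy, not the paper's PDN: `teacher` is a fixed nonlinear function, and `student` is a function that matches it only on "normal" feature values near zero, so the squared discrepancy flags out-of-distribution regions.

```python
import numpy as np

def teacher(x):
    # Stand-in for a pretrained Patch Description Network (PDN).
    return np.tanh(x)

def student(x):
    # Toy student that mimics the teacher only on "normal" values near zero
    # (tanh(x) ~ x there) and diverges on inputs it was never trained on.
    return x

def anomaly_map(features):
    # Squared teacher-student discrepancy per spatial position.
    return (teacher(features) - student(features)) ** 2

normal = np.full((8, 8), 0.1)   # uniform "normal" feature map
anomalous = normal.copy()
anomalous[4, 4] = 3.0           # one strongly deviating position

assert anomaly_map(anomalous).max() > anomaly_map(normal).max()
```

The real method trains the student on image data with a loss that restricts generalization, and adds the autoencoder branch for logical anomalies; this sketch only shows why a large teacher-student discrepancy localizes an anomaly.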