Machine learning has helped the field advance considerably in the past.
Most of the time, the error rate is below $0.1%$, so plenty of good data and almost no faulty data is available.
The training data is therefore heavily imbalanced.~#cite(<parnami2022learningexamplessummaryapproaches>)
PatchCore@patchcorepaper and EfficientAD@efficientADpaper are state-of-the-art algorithms that are trained only on good data and then detect anomalies in unseen (but similar) data.
One of their drawbacks is that they require large amounts of training data and long training times.
Moreover, even a slight change in camera position or lighting conditions can make a complete retraining of the model necessary.
Few-Shot learning might be a suitable alternative, offering greatly reduced training times and fast adaptation to new conditions.~#cite(<efficientADpaper>)#cite(<patchcorepaper>)#cite(<parnami2022learningexamplessummaryapproaches>)
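To illustrate why Few-Shot learning adapts so cheaply, the following sketch (not from the thesis; toy 2-D vectors stand in for image embeddings from a frozen backbone such as ResNet50) classifies a query by its distance to class prototypes, i.e. the mean of each class's support embeddings. Adapting to new conditions then only means swapping the support set; no retraining is involved.

```python
# Illustrative sketch of prototype-based few-shot classification.
# Toy 2-D vectors stand in for embeddings from a frozen backbone.
from math import dist

def prototypes(support):
    """Mean embedding per class from a support set of (vector, label) pairs."""
    sums, counts = {}, {}
    for vec, label in support:
        acc = sums.setdefault(label, [0.0] * len(vec))
        sums[label] = [a + v for a, v in zip(acc, vec)]
        counts[label] = counts.get(label, 0) + 1
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def classify(query, protos):
    """Assign the query to the nearest class prototype (Euclidean distance)."""
    return min(protos, key=lambda c: dist(query, protos[c]))

support = [([0.0, 0.1], "good"), ([0.1, 0.0], "good"),
           ([1.0, 0.9], "defect"), ([0.9, 1.1], "defect")]
protos = prototypes(support)
print(classify([0.05, 0.05], protos))  # query near the "good" cluster
```

Replacing the four support samples with images from a new camera position updates the prototypes immediately, which is the adaptation property the paragraph above refers to.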
In this thesis, the performance of three Few-Shot learning methods (ResNet50@resnet, P>M>F@pmfpaper, CAML@caml_paper) is compared in the field of anomaly detection.
Moreover, Few-Shot learning might be able not only to detect anomalies but also to identify the anomaly class.
== Research Questions <sectionresearchquestions>
_Should Few-Shot learning be used for anomaly detection tasks?
How does it compare to well-established algorithms such as PatchCore or EfficientAD?_
=== How does an imbalanced shot count affect performance?
_Does giving the Few-Shot learner a higher proportion of normal (non-anomalous) samples than anomalous samples improve the model's performance?_
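To make this question concrete, one hypothetical way to construct such an imbalanced episode is sketched below (illustrative Python; the file names and the 5-vs-1 split are made up and are not the thesis's actual evaluation protocol): more normal than anomalous shots are sampled into the support set.

```python
# Illustrative sketch: sampling an imbalanced few-shot support set,
# e.g. 5 normal shots vs. 1 anomalous shot (counts are arbitrary here).
import random

def build_support_set(normal_pool, anomalous_pool, n_normal, n_anomalous, seed=0):
    """Return (sample, label) pairs; label 0 = normal/good, 1 = anomalous."""
    rng = random.Random(seed)
    support = [(s, 0) for s in rng.sample(normal_pool, n_normal)]
    support += [(s, 1) for s in rng.sample(anomalous_pool, n_anomalous)]
    rng.shuffle(support)
    return support

normal = [f"good_{i:03d}.png" for i in range(50)]       # hypothetical file names
anomalous = [f"defect_{i:03d}.png" for i in range(10)]  # hypothetical file names
support = build_support_set(normal, anomalous, n_normal=5, n_anomalous=1)
```

Varying `n_normal` against `n_anomalous` while keeping the query set fixed is one way the question's "higher proportion of normal samples" could be operationalized.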
=== How do the three methods (ResNet, CAML, P>M>F) perform in distinguishing between different anomaly types?

_How much does the performance improve by only detecting the presence of an anomaly?
How does it compare to PatchCore and EfficientAD?_
/*#if inwriting [
This thesis is structured to provide a comprehensive exploration of Few-Shot Learning in anomaly detection.
@sectionmaterialandmethods introduces the datasets and methodologies used in this research.
The MVTec AD dataset is discussed in detail as the primary source for benchmarking, along with an overview of the Few-Shot Learning paradigm.
The section elaborates on the three selected methods—ResNet50@resnet, P>M>F@pmfpaper, and CAML@caml_paper—while also touching upon well-established anomaly detection algorithms such as PatchCore and EfficientAD.
@sectionimplementation focuses on the practical realization of the methods described in the previous chapter.
It outlines the experimental setup, including the use of Jupyter Notebook for prototyping and testing, and provides a detailed account of how each method was implemented and evaluated.