finish results section and add most of conclusion and outlook stuff
@@ -1,5 +1,19 @@
= Conclusion and Outlook

== Conclusion

In conclusion, Few-Shot learning is not the best choice for anomaly detection tasks.
It is hugely outperformed by state-of-the-art algorithms such as PatchCore or EfficientAD.
The main benefit of Few-Shot learning is that it can be used in environments where only a limited number of good samples is available.
However, this is rarely the case: most of the time plenty of good samples are available, and then PatchCore or EfficientAD can be expected to perform very well.

The one case where Few-Shot learning could be used is when the anomaly class itself has to be identified.
PatchCore and EfficientAD can only detect whether an anomaly is present, not which kind of anomaly it is.
Chaining a Few-Shot learner after PatchCore or EfficientAD could therefore combine the best of both worlds.
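
A minimal sketch of how such a chained setup could look is given below. It is only an illustration: `detector` stands for any pretrained anomaly detection model such as PatchCore or EfficientAD, `few_shot_classifier` for a Few-Shot learner such as P>M>F, and the method names, the threshold and the support set handling are assumptions made for this sketch, not the interfaces of the actual implementations.

```python
# Hypothetical two-stage pipeline: a conventional anomaly detector decides
# whether an image is anomalous at all, and only then a Few-Shot learner
# assigns the concrete anomaly class. All names are placeholders.

def classify_image(image, detector, few_shot_classifier, support_set, threshold=0.5):
    """Return 'good' or the predicted anomaly class for a single image."""
    # Stage 1: binary anomaly detection (e.g. PatchCore or EfficientAD);
    # `detector.score` is assumed to return a scalar anomaly score.
    if detector.score(image) < threshold:
        return "good"

    # Stage 2: Few-Shot classification of the anomaly type (e.g. P>M>F);
    # `support_set` holds a few labelled shots per anomaly class and
    # `few_shot_classifier.predict` is assumed to return a class label.
    return few_shot_classifier.predict(image, support_set)
```
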

In most of the performed tests P>M>F performed best.
The simple ResNet50 method also performed better than expected in most cases and can be considered when computational resources are limited and a simple architecture is sufficient.

== Outlook

When new Few-Shot learning methods evolve in the future, it would be interesting to test again how they perform on anomaly detection tasks.
There might be a lack of research in settings where the classes to detect are very similar to each other,
and a Few-Shot learning algorithm tailored specifically to such similar classes could boost the performance by a large margin.

@@ -3,7 +3,7 @@

= Experimental Results

== Is Few-Shot learning a suitable fit for anomaly detection? <expresults2way>
_Should Few-Shot learning be used for anomaly detection tasks?
How does it compare to well-established algorithms such as PatchCore or EfficientAD?_

@@ -32,11 +32,74 @@ That means if the goal is just to detect anomalies, Few-Shot learning is not the
_Does giving the Few-Shot learner more good than bad samples improve the model performance?_

As all three method results in @experiments show, the performance of the Few-Shot learner decreases with an increasing number of good samples.
This result is unexpected (one might assume that more samples always help), but it aligns with the idea that all classes should be kept as balanced as possible.
#todo[Image of imbalanced shots]
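
To make the setup explicit, the sketch below shows how such an imbalanced support set could be assembled: the number of good shots is increased while the number of shots per anomaly class stays fixed. The function, shot counts and data layout are illustrative assumptions and not the exact episode construction used in @experiments.

```python
import random

def build_support_set(samples_by_class, shots_per_class, good_shots):
    """Sample a (possibly imbalanced) support set: `good_shots` images of the
    good class and `shots_per_class` images of every anomaly class (sketch)."""
    support = []
    for label, paths in samples_by_class.items():
        k = good_shots if label == "good" else shots_per_class
        support += [(path, label) for path in random.sample(paths, k)]
    return support

# Increasing `good_shots` (e.g. 5, 10, 15, ...) while keeping
# `shots_per_class` fixed produces increasingly imbalanced support sets.
```
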
@comparisoninbalanced shows the performance of the imbalanced classification on the bottle and cable classes for all anomaly classes.

#subpar.grid(
  figure(image("rsc/inbalanced-bottle.png"), caption: [
    Bottle class
  ]), <comparisoninbalancedbottle>,
  figure(image("rsc/inbalanced-cable.png"), caption: [
    Cable class
  ]), <comparisoninbalancedcable>,
  columns: (1fr, 1fr),
  caption: [Imbalanced classification performance],
  label: <comparisoninbalanced>,
)

@comparisoninbalanced2way shows the performance of the imbalanced 2-way classification (anomaly or not) on the bottle and cable classes.

#subpar.grid(
  figure(image("rsc/inbalanced-2way-bottle.png"), caption: [
    Bottle class
  ]), <comparisoninbalanced2waybottle>,
  figure(image("rsc/inbalanced-2way-cable.png"), caption: [
    Cable class
  ]), <comparisoninbalanced2waycable>,
  columns: (1fr, 1fr),
  caption: [Imbalanced 2-Way classification performance],
  label: <comparisoninbalanced2way>,
)

All four graphs clearly show that the performance decreases with an increasing number of good samples.
The conclusion is therefore that the Few-Shot learner should always be trained with classes that are as balanced as possible.

== How do the three methods (ResNet, CAML, P>M>F) perform in detecting only the anomaly classes?
_How much does the performance improve when only detecting whether an anomaly is present or not?
How does it compare to PatchCore and EfficientAD#todo[Maybe remove comparison?]?_

@comparisonnormal shows graphs comparing the performance of the ResNet, CAML and P>M>F methods in detecting the anomaly classes, both including and excluding the good class.
P>M>F performs better than ResNet and CAML in almost all cases.
P>M>F reaches up to 78% accuracy on the bottle class (@comparisonnormalbottle) and 46% on the cable class (@comparisonnormalcable) when detecting all classes including the good one,
and 84% on the bottle class (@comparisonfaultyonlybottle) and 51% on the cable class (@comparisonfaultyonlycable) when the good class is excluded.
These results are quite good considering the small number of samples and how similar the anomaly classes actually are.
CAML performs the worst in all cases except for the cable class when detecting all classes except the good one.
This might be because it is not fine-tuned on the shots and not really built for such similar classes.

These results are not directly comparable with PatchCore and EfficientAD, as those are trained on the good class only
and are built to detect anomalies in general rather than to distinguish anomaly classes.
See @expresults2way for a comparison of the 2-way classification performance.

In conclusion, P>M>F is a good choice for classifying the anomaly classes,
especially when only few samples of the anomaly classes are available, as in most anomaly detection scenarios.
One could use a well-established algorithm like PatchCore or EfficientAD to detect anomalies in general and P>M>F to classify the anomaly class afterwards.

#subpar.grid(
  figure(image("rsc/normal-bottle.png"), caption: [
    5-Way - Bottle class
  ]), <comparisonnormalbottle>,
  figure(image("rsc/normal-cable.png"), caption: [
    9-Way - Cable class
  ]), <comparisonnormalcable>,
  figure(image("rsc/faultclasses-bottle.png"), caption: [
    4-Way - Bottle class
  ]), <comparisonfaultyonlybottle>,
  figure(image("rsc/faultclasses-cable.png"), caption: [
    8-Way - Cable class
  ]), <comparisonfaultyonlycable>,
  columns: (1fr, 1fr),
  caption: [Anomaly class only classification performance],
  label: <comparisonnormal>,
)

== Extra: How does Euclidean distance compare to cosine similarity when using ResNet as a feature extractor?

#todo[Maybe don't do this]
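
Should this comparison be carried out, a minimal sketch of the setup could look as follows. It assumes a torchvision ResNet-50 backbone with the classification head replaced by an identity layer and nearest-prototype matching in feature space; the helper names are assumptions for illustration. Note that on L2-normalized feature vectors the two metrics induce the same ranking, so a measurable difference can only appear when the raw, unnormalized features are compared.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# ResNet-50 backbone used purely as a feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classification head
backbone.eval()

@torch.no_grad()
def embed(images):
    """Map a batch of images (N, 3, 224, 224) to feature vectors (N, 2048)."""
    return backbone(images)

def euclidean_logits(queries, prototypes):
    # Negative Euclidean distance: larger value = more similar.
    return -torch.cdist(queries, prototypes)

def cosine_logits(queries, prototypes):
    # Cosine similarity as dot product of L2-normalized vectors.
    return F.normalize(queries, dim=-1) @ F.normalize(prototypes, dim=-1).T

# Prototypes would be the mean embedding of each class's shots; a query is
# assigned to the class whose prototype yields the highest logit.
```
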
main.typ
@@ -54,20 +54,7 @@
abstract-en: [//max. 250 words
#lorem(200) ],
abstract-de: none,// or specify the abstract-de in a container []
acknowledgements: none,//acknowledgements: none // if you are self-made
show-title-in-header: false,
draft: draft,
)
BIN  rsc/faultclasses-bottle.png    (new file, 32 KiB)
BIN  rsc/faultclasses-cable.png     (new file, 36 KiB)
BIN  rsc/inbalanced-2way-bottle.png (new file, 31 KiB)
BIN  rsc/inbalanced-2way-cable.png  (new file, 36 KiB)
BIN  rsc/inbalanced-bottle.png      (new file, 28 KiB)
BIN  rsc/inbalanced-cable.png       (new file, 31 KiB)
BIN  rsc/normal-bottle.png          (new file, 30 KiB)
BIN  rsc/normal-cable.png           (new file, 36 KiB)