#import "utils.typ": todo
#import "@preview/subpar:0.1.1"
= Experimental Results
== Is Few-Shot learning a suitable fit for anomaly detection?
_Should Few-Shot learning be used for anomaly detection tasks?
How does it compare to well-established algorithms such as PatchCore or EfficientAD?_
@comparison2waybottle shows the performance of the 2-way classification (anomalous or not) on the bottle class, and @comparison2waycable shows the same for the cable class.
The performance values are the same as in @experiments, merged into a single graph.
As a reference, PatchCore reaches an AUROC score of 99.6% and EfficientAD reaches 99.8%, averaged over all classes of the MVTec AD dataset.
Both are trained with samples from the 'good' class only.
So there is a clear performance gap between Few-Shot learning and state-of-the-art anomaly detection algorithms.
PatchCore and EfficientAD are not included in @comparison2way because they are not directly comparable in the same fashion.
That means that if the goal is merely to detect anomalies, Few-Shot learning is not the best choice, and PatchCore or EfficientAD should be used instead.
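For reference, the AUROC metric used above measures how well a model's anomaly scores separate good from anomalous samples. A minimal sketch of how such a score can be computed with scikit-learn (the labels and scores below are made up for illustration, not values from the experiments):

```python
from sklearn.metrics import roc_auc_score

# ground-truth test labels: 0 = good, 1 = anomalous (illustrative only)
labels = [0, 0, 0, 1, 1, 0, 1, 1]
# model outputs: higher score = more likely anomalous (illustrative only)
anomaly_scores = [0.1, 0.3, 0.2, 0.8, 0.7, 0.4, 0.9, 0.6]

# AUROC = probability that a random anomalous sample is ranked
# above a random good one; 1.0 means perfect separation
auroc = roc_auc_score(labels, anomaly_scores)
print(f"AUROC: {auroc:.3f}")  # prints 1.000 for this toy data
```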
#subpar.grid(
  figure(image("rsc/comparison-2way-bottle.png"), caption: [
    Bottle class
  ]), <comparison2waybottle>,
  figure(image("rsc/comparison-2way-cable.png"), caption: [
    Cable class
  ]), <comparison2waycable>,
  columns: (1fr, 1fr),
  caption: [2-Way classification performance],
  label: <comparison2way>,
)
== How does unbalancing the shot count affect performance?
_Does giving the Few-Shot learner more good than bad samples improve the model performance?_
As the results of all three methods in @experiments show, the performance of the Few-Shot learner decreases with an increasing number of good samples.
This result is unexpected.
#todo[Image of unbalanced shots]
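To make the unbalanced setting concrete, the following minimal sketch shows how such a support set could be sampled for one few-shot episode (function and variable names are hypothetical, not the actual experiment code):

```python
import random

def sample_support_set(good_pool, bad_pool, n_good, n_bad):
    """Draw a support set with n_good 'good' and n_bad 'bad' shots.

    good_pool / bad_pool are lists of sample identifiers per class;
    choosing n_good > n_bad yields the unbalanced setting discussed above.
    """
    support = [(s, 0) for s in random.sample(good_pool, n_good)]
    support += [(s, 1) for s in random.sample(bad_pool, n_bad)]
    random.shuffle(support)  # avoid any ordering bias within the episode
    return support

# e.g. 10 good shots vs. 5 bad shots:
# support = sample_support_set(good_samples, bad_samples, n_good=10, n_bad=5)
```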
== How do the three methods (ResNet, CAML, pmf) perform in detecting only the anomaly class?
_How much does the performance improve when only detecting whether a sample is anomalous or not?
How does it compare to PatchCore and EfficientAD?_
== Extra: How does Euclidean distance compare to cosine similarity when using ResNet as a feature extractor?
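As background for this comparison, the two measures differ mainly in that cosine similarity ignores the magnitude of the feature vectors. A minimal sketch of both measures on generic embedding vectors (assuming flattened ResNet features; not the thesis code):

```python
import numpy as np

def euclidean_distance(a, b):
    # plain L2 distance between two feature vectors
    return np.linalg.norm(a - b)

def cosine_similarity(a, b):
    # cosine of the angle between the vectors, invariant to their magnitude
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# toy example with two random 512-dimensional "ResNet-like" embeddings
rng = np.random.default_rng(0)
a, b = rng.normal(size=512), rng.normal(size=512)
print(euclidean_distance(a, b), cosine_similarity(a, b))
```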