#import "utils.typ": todo
#import "@preview/subpar:0.1.1"
= Experimental Results
== Is Few-Shot learning a suitable fit for anomaly detection?
_Should Few-Shot learning be used for anomaly detection tasks?
How does it compare to well-established algorithms such as PatchCore or EfficientAD?_
@comparison2waybottle shows the performance of the 2-way classification (anomalous or not) on the bottle class, and @comparison2waycable shows the same for the cable class.
The performance values are the same as in @experiments, merged into a single graph.
As a reference, PatchCore reaches an AUROC score of 99.6% and EfficientAD one of 99.8%, averaged over all classes of the MVTec AD dataset.
Both are trained on samples from the 'good' class only.
So there is a clear performance gap between Few-Shot learning and state-of-the-art anomaly detection algorithms.
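
The AUROC values compared here can be understood as the probability that a randomly chosen anomalous sample receives a higher anomaly score than a randomly chosen good one. A minimal, self-contained sketch of that computation (the labels and scores below are illustrative, not taken from the experiments):

```python
# Sketch of the image-level AUROC metric: the probability that a random
# anomalous sample scores higher than a random good one (ties count half).
def auroc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative labels (1 = anomalous) and anomaly scores, not real results.
labels = [0, 0, 0, 0, 1, 1, 1, 1]
scores = [0.1, 0.2, 0.15, 0.4, 0.8, 0.9, 0.35, 0.7]
print(f"AUROC: {auroc(labels, scores):.4f}")  # AUROC: 0.9375
```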
That means that if the goal is just to detect anomalies, Few-Shot learning is not the best choice; PatchCore or EfficientAD should be used instead.
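
For illustration, the 2-way Few-Shot setting can be sketched as nearest-prototype classification over extracted features. The feature vectors below are toy stand-ins, not the actual embeddings produced by the evaluated methods:

```python
# Illustrative sketch of 2-way nearest-prototype classification
# ("good" vs "anomalous"); the 2-D vectors are toy stand-ins for
# embeddings from a frozen backbone such as ResNet.
def prototype(features):
    """Mean of the support features for one class."""
    dim = len(features[0])
    return [sum(f[i] for f in features) / len(features) for i in range(dim)]

def classify(query, protos):
    """Assign the query to the class with the nearest (L2) prototype."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda label: sq_dist(query, protos[label]))

support = {
    "good":      [[0.9, 0.1], [1.0, 0.0]],
    "anomalous": [[0.1, 0.9], [0.0, 1.0]],
}
protos = {label: feats for label, feats in
          ((l, prototype(f)) for l, f in support.items())}
print(classify([0.8, 0.2], protos))  # closer to the "good" prototype
```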
#subpar.grid(
  figure(image("rsc/comparison-2way-bottle.png"), caption: [
    Bottle class
  ]), <comparison2waybottle>,
  figure(image("rsc/comparison-2way-cable.png"), caption: [
    Cable class
  ]), <comparison2waycable>,
  columns: (1fr, 1fr),
  caption: [2-Way classification performance],
  label: <comparison2way>,
)
== How does unbalancing the shot number affect performance?
_Does giving the Few-Shot learner more good than bad samples improve the model performance?_
As the results of all three methods in @experiments show, the performance of the Few-Shot learner decreases as the number of good samples increases.
This result is unexpected.
#todo[Image of unbalanced shots]
== How do the three methods (ResNet, CAML, \pmf) perform in detecting only the anomaly class?
_How much does the performance improve when detecting only whether a sample is anomalous or not?
How does it compare to PatchCore and EfficientAD?_
== Extra: How does Euclidean distance compare to cosine similarity when using ResNet as a feature extractor?
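
As background for this comparison, a minimal sketch of the two metrics computed on plain feature vectors (toy values, purely illustrative):

```python
import math

def euclidean_distance(a, b):
    """L2 distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors pointing in the same direction but with different magnitude:
a = [1.0, 2.0, 3.0]
b = [2.0, 4.0, 6.0]
print(f"Euclidean: {euclidean_distance(a, b):.3f}")  # Euclidean: 3.742
print(f"Cosine:    {cosine_similarity(a, b):.3f}")   # Cosine:    1.000
```

Note that cosine similarity is invariant to the magnitude of the feature vectors while Euclidean distance is not, which can matter for unnormalized ResNet features.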