add images for final results
#import "utils.typ": todo
#import "@preview/subpar:0.1.1"
= Experimental Results
== Is Few-Shot learning a suitable fit for anomaly detection?
_Should Few-Shot learning be used for anomaly detection tasks?
How does it compare to well-established algorithms such as PatchCore or EfficientAD?_

@comparison2waybottle shows the performance of the 2-way classification (anomaly or not) on the bottle class, and @comparison2waycable shows the same for the cable class.
The performance values are the same as in @experiments, merged into a single graph.
As a reference, PatchCore reaches an AUROC score of 99.6% and EfficientAD reaches 99.8%, averaged over all classes of the MVTec AD dataset.
Both are trained on samples from the 'good' class only.
There is thus a clear performance gap between Few-Shot learning and state-of-the-art anomaly detection algorithms.

If the goal is only to detect anomalies, Few-Shot learning is therefore not the best choice; PatchCore or EfficientAD should be used instead.
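The AUROC values are computed from per-image anomaly scores. As a minimal sketch (with hypothetical placeholder scores rather than the actual outputs of the experiments), the metric can be obtained with scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-image ground truth: 1 = anomalous, 0 = good.
labels = np.array([0, 0, 0, 1, 1, 1])
# Anomaly scores produced by a detector (higher = more anomalous);
# for a Few-Shot classifier this could be the predicted probability of the 'bad' class.
scores = np.array([0.05, 0.20, 0.10, 0.80, 0.65, 0.90])

auroc = roc_auc_score(labels, scores)
print(f"AUROC: {auroc:.3f}")  # 1.000 here, since every anomaly scores higher than every good sample
```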
#subpar.grid(
  figure(image("rsc/comparison-2way-bottle.png"), caption: [
    Bottle class
  ]), <comparison2waybottle>,
  figure(image("rsc/comparison-2way-cable.png"), caption: [
    Cable class
  ]), <comparison2waycable>,
  columns: (1fr, 1fr),
  caption: [2-Way classification performance],
  label: <comparison2way>,
)
== How does unbalancing the shot count affect performance?
_Does giving the Few-Shot learner more good than bad samples improve the model performance?_

As the results of all three methods in @experiments show, the performance of the Few-Shot learner decreases with an increasing number of good samples.
This result is unexpected.
#todo[Image of unbalanced shots]
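A minimal sketch of how such an unbalanced support set could be assembled (the helper and shot counts below are hypothetical and not the exact setup used in @experiments):

```python
import random

def build_support_set(good_paths, bad_paths, good_shots=10, bad_shots=5):
    """Sample an unbalanced 2-way support set with more 'good' than 'bad' shots."""
    support = [(path, 0) for path in random.sample(good_paths, good_shots)]  # label 0 = good
    support += [(path, 1) for path in random.sample(bad_paths, bad_shots)]   # label 1 = anomalous
    random.shuffle(support)
    return support

# Example: 10 good vs. 5 anomalous support images for one MVTec AD class.
# support = build_support_set(good_image_paths, bad_image_paths)
```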
== How do the three methods (ResNet, CAML, \pmf) perform when detecting only the anomaly class?
_How much does the performance improve when only detecting whether an anomaly is present or not?
How does it compare to PatchCore and EfficientAD?_
== Extra: How does Euclidean distance compare to cosine similarity when using ResNet as a feature extractor?
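The two measures can be sketched as follows (a minimal illustration using torchvision's pretrained ResNet-18 as the feature extractor and random placeholder tensors instead of real MVTec AD images):

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

# Pretrained ResNet as a feature extractor: replace the classification head with identity.
backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

# Placeholder inputs instead of real images.
query = torch.randn(1, 3, 224, 224)
prototype = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    q = backbone(query)      # shape (1, 512)
    p = backbone(prototype)  # shape (1, 512)

euclidean = torch.cdist(q, p).item()       # distance: lower means more similar
cosine = F.cosine_similarity(q, p).item()  # similarity in [-1, 1]: higher means more similar
```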