_How does it compare to well-established algorithms such as PatchCore or EfficientAD?_

@comparison2waybottle shows the performance of the 2-way classification (anomaly or not) on the bottle class and @comparison2waycable shows the same for the cable class.
The performance values are the same as in @experiments, just merged into one graph.
As a reference, PatchCore@patchcorepaper reaches an AUROC score of 99.6% and EfficientAD@efficientADpaper reaches 99.8%, averaged over all classes provided by the MVTec AD dataset.
Both are trained with samples from the 'good' class only.
So there is a clear performance gap between Few-Shot learning and state-of-the-art anomaly detection algorithms.
In @comparison2way, PatchCore and EfficientAD are not included as they aren't directly comparable in the same fashion.
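
The AUROC scores quoted above come from ranking continuous anomaly scores rather than from an N-way classification, which is one reason the comparison is not one-to-one. Below is a minimal sketch of how a binary anomaly-detection AUROC is computed; the labels and scores are made-up placeholders, not results or evaluation code from this work.

```python
# Minimal sketch: AUROC for binary anomaly detection.
# The labels and scores below are placeholders, not real results.
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([0, 0, 0, 1, 1, 1, 0, 1])                     # 0 = good, 1 = anomalous
scores = np.array([0.05, 0.2, 0.1, 0.8, 0.65, 0.9, 0.3, 0.55])  # detector anomaly scores

# AUROC is threshold-free: it measures how well the scores rank
# anomalous samples above good ones (1.0 = perfect ranking).
print(f"AUROC: {roc_auc_score(labels, scores):.3f}")
```
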
== How does unbalancing the shot number affect performance?
_Does giving the Few-Shot learner a higher proportion of normal (non-anomalous) samples compared to anomalous samples improve the model's performance?_

As the results of all three methods in @experiments show, the performance of the Few-Shot learner decreases with an increasing number of good samples.
This result is unexpected (one might assume that more samples always perform better), but it aligns with the idea that all classes should be kept as balanced as possible.
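
To make the notion of shot balance concrete, the sketch below samples a Few-Shot support set with a configurable number of shots per class; the function, names and data are illustrative assumptions, not the sampling code used for the experiments in this work.

```python
# Illustrative sketch of sampling a (possibly unbalanced) support set.
# All names and data here are assumptions, not the experiment code.
import random
from collections import defaultdict

def build_support_set(samples, labels, shots_per_class, seed=0):
    """Draw `shots_per_class[c]` random samples for every class c."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for sample, label in zip(samples, labels):
        by_class[label].append(sample)
    return [(s, cls)
            for cls, n in shots_per_class.items()
            for s in rng.sample(by_class[cls], n)]

samples = [f"img_{i:02d}.png" for i in range(30)]
labels = ["good"] * 20 + ["faulty"] * 10

balanced = build_support_set(samples, labels, {"good": 5, "faulty": 5})
unbalanced = build_support_set(samples, labels, {"good": 10, "faulty": 5})  # more good shots
print(len(balanced), len(unbalanced))  # 10 vs. 15 support samples
```
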
Clearly, all four graphs show that the performance decreases with an increasing number of good samples.
So the conclusion is that the Few-Shot learner should always be trained with classes that are as balanced as possible.

== How do the 3 (ResNet, CAML, P>M>F) methods perform in distinguishing between different anomaly types?
_And how much does the performance improve by only detecting the presence of an anomaly?
How does it compare to PatchCore and EfficientAD#todo[Maybe remove comparison?]?_

@comparisonnormal shows graphs comparing the performance of the ResNet@resnet, CAML@caml_paper and P>M>F@pmfpaper methods in detecting the anomaly class, both including and excluding the good class.
P>M>F performs better than ResNet and CAML in almost all cases.
P>M>F reaches up to 78% accuracy in the bottle class (@comparisonnormalbottle) and 46% in the cable class (@comparisonnormalcable) when detecting all classes including the good class,
and 84% in the bottle class (@comparisonfaultyonlybottle) and 51% in the cable class (@comparisonfaultyonlycable) when excluding the good class.
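
As a rough illustration of the difference between the two evaluation settings, the snippet below computes accuracy once over all samples and once restricted to anomalous samples only (one simple reading of "excluding the good class"); the class labels are placeholders loosely modeled on MVTec AD bottle defect types, not predictions from the experiments.

```python
# Sketch of the two accuracy settings; labels are placeholders only.
import numpy as np

y_true = np.array(["good", "broken_large", "good", "contamination", "broken_small"])
y_pred = np.array(["good", "broken_large", "good", "broken_small", "broken_small"])

# Setting 1: accuracy over all classes, the good class included.
acc_all = np.mean(y_true == y_pred)

# Setting 2: accuracy over anomalous samples only, the good class excluded.
faulty = y_true != "good"
acc_faulty_only = np.mean(y_true[faulty] == y_pred[faulty])

print(f"including good: {acc_all:.2f}, excluding good: {acc_faulty_only:.2f}")
```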