diff --git a/experimentalresults.typ b/experimentalresults.typ
index 9647729..5a9d9c9 100644
--- a/experimentalresults.typ
+++ b/experimentalresults.typ
@@ -12,6 +12,7 @@ The performance values are the same as in @experiments but just merged together
 As a reference Patchcore reaches an AUROC score of 99.6% and EfficientAD reaches 99.8% averaged over all classes provided by the MVTec AD dataset.
 Both are trained with samples from the 'good' class only.
 So there is a clear performance gap between Few-Shot learning and the state of the art anomaly detection algorithms.
+In @comparison2way, Patchcore and EfficientAD are not included, as they are not directly comparable in the same fashion.
 That means if the goal is just to detect anomalies, Few-Shot learning is not the best choice and Patchcore or EfficientAD should be used.
@@ -34,7 +35,7 @@ As all three method results in @experiments show, the performance of the Few-Sho
 Which is an result that is unexpected.
 #todo[Image of disbalanced shots]
-== How does the 3 (ResNet, CAML, \pmf) methods perform in only detecting the anomaly class?
+== How do the three methods (ResNet, CAML, pmf) perform in detecting only the anomaly class?
 _How much does the performance improve if only detecting an anomaly or not? How does it compare to PatchCore and EfficientAD?_
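
For context on the class-averaged AUROC figures quoted in the first hunk, a minimal sketch of how such a score is typically computed for the 2-way (good vs. anomalous) evaluation follows. It is not part of the patch: the function name `class_averaged_auroc`, the per-class score/label containers, and the dummy class names are hypothetical placeholders, assuming scikit-learn's `roc_auc_score` as the per-class binary metric.

```python
# Illustration only (not part of the patch): one way a class-averaged AUROC
# over the MVTec AD classes could be computed for a 2-way evaluation.
# All names below are hypothetical placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

def class_averaged_auroc(scores_by_class, labels_by_class):
    """Mean per-class AUROC over the dataset's object classes.

    scores_by_class: dict of class name -> anomaly scores (higher = more anomalous)
    labels_by_class: dict of class name -> binary labels (0 = 'good', 1 = anomalous)
    """
    per_class = [
        roc_auc_score(labels_by_class[name], scores_by_class[name])
        for name in scores_by_class
    ]
    return float(np.mean(per_class))

# Usage sketch with dummy data for two classes:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = {"bottle": rng.random(100), "cable": rng.random(100)}
    labels = {c: rng.integers(0, 2, 100) for c in scores}
    print(f"class-averaged AUROC: {class_averaged_auroc(scores, labels):.3f}")
```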