finish results section and add most of conclusion and outlook stuff
All checks were successful
Build Typst document / build_typst_documents (push) Successful in 12s
= Conclusion and Outlook

== Conclusion
In conclusion, few-shot learning is not the best choice for anomaly detection tasks.
It is clearly outperformed by state-of-the-art algorithms such as Patchcore and EfficientAD.
The main benefit of few-shot learning is that it can be applied in environments where only a limited number of good samples is available.
However, this is rarely the case in practice: most of the time plenty of good samples are available, and then Patchcore or EfficientAD should perform well.

One scenario where few-shot learning could still be useful is when the anomaly class itself has to be identified.
Patchcore and EfficientAD can only detect whether an anomaly is present, not which kind of anomaly it is.
Chaining a few-shot classifier after Patchcore or EfficientAD could therefore combine the best of both worlds.

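The proposed chaining could be sketched as a simple two-stage pipeline: the anomaly detector screens every sample, and only flagged samples are passed to the few-shot classifier, which names the anomaly class. The following is a minimal illustrative sketch; all functions, thresholds, and class names here are hypothetical stand-ins, not the real Patchcore, EfficientAD, or P>M>F APIs.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Result:
    is_anomaly: bool
    anomaly_class: Optional[str]  # None for normal samples


def detect_anomaly(sample: dict) -> bool:
    # Stand-in for Patchcore/EfficientAD: flag the sample if its
    # anomaly score exceeds a (hypothetical) threshold.
    return sample["score"] > 0.5


def classify_anomaly(sample: dict) -> str:
    # Stand-in for a few-shot classifier (e.g. P>M>F) trained on a
    # handful of labelled examples per anomaly class.
    return "scratch" if sample["score"] > 0.8 else "dent"


def chained_inference(sample: dict) -> Result:
    # Stage 1: binary anomaly detection on every sample.
    if not detect_anomaly(sample):
        return Result(is_anomaly=False, anomaly_class=None)
    # Stage 2: only flagged samples reach the few-shot classifier,
    # which identifies the anomaly class.
    return Result(is_anomaly=True, anomaly_class=classify_anomaly(sample))


print(chained_inference({"score": 0.9}))
print(chained_inference({"score": 0.2}))
```

The design keeps the two stages independent: the detector can stay a strong off-the-shelf model trained only on good samples, while the classifier needs just a few labelled anomaly examples.
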
In most of the tests performed, P>M>F achieved the best results.
The simple ResNet50 method also performed better than expected in most cases and can be considered when computational resources are limited and a simple architecture is sufficient.

== Outlook
As new few-shot learning methods emerge, it would be interesting to re-evaluate how they perform on anomaly detection tasks.
There appears to be a lack of research on settings where the classes to detect are very similar to each other;
a few-shot learning method tailored specifically to such fine-grained classes could boost performance by a large margin.