fix more comma errors
All checks were successful
Build Typst document / build_typst_documents (push) Successful in 17s
@@ -92,7 +92,7 @@ After creating the embeddings for the support and query set the euclidean distan
The class with the smallest distance is chosen as the predicted class.
=== Results <resnet50perf>
-This method performed better than expected wich such a simple method.
+This method performed better than expected with such a simple method.
As shown in @resnet50bottleperfa, with a normal 5 shot / 4 way classification the model achieved an accuracy of 75%.
When only detecting whether an anomaly occurred or not, the performance is significantly better and peaks at 81% with 5 shots / 2 ways.
Interestingly, the model performed slightly better with fewer shots in this case.
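The distance-based prediction described in this hunk can be sketched as follows. This is a hypothetical NumPy illustration, not the report's actual code: the function name, the toy data, and the use of per-class mean embeddings as prototypes are assumptions.

```python
import numpy as np

def predict_nearest_class(support_emb, support_labels, query_emb):
    """Assign each query to the class whose prototype (mean support
    embedding, an assumed choice) has the smallest Euclidean distance."""
    classes = np.unique(support_labels)
    # One prototype per class: the mean of that class's support embeddings.
    prototypes = np.stack([support_emb[support_labels == c].mean(axis=0)
                           for c in classes])
    # Pairwise Euclidean distances, shape (n_query, n_classes).
    dists = np.linalg.norm(query_emb[:, None, :] - prototypes[None, :, :],
                           axis=-1)
    # The class with the smallest distance is the predicted class.
    return classes[dists.argmin(axis=1)]

# Toy example: 2 classes with 2-D embeddings.
support = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.2, 0.1], [4.8, 5.2]])
print(predict_nearest_class(support, labels, queries))  # → [0 1]
```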
@@ -136,7 +136,7 @@ but this is expected as the cable class consists of 8 faulty classes.
== P>M>F
=== Approach
-For P>M>F I used the pretrained model weights from the original paper.
+For P>M>F, I used the pretrained model weights from the original paper.
As the backbone feature extractor, a DINO model is used, which was pre-trained by Facebook.
This is a vision transformer with a patch size of 16 and 12 attention heads, trained in a self-supervised fashion.
This feature extractor was meta-trained with 10 public image datasets #footnote[ImageNet-1k, Omniglot, FGVC-
@@ -144,7 +144,7 @@ Aircraft, CUB-200-2011, Describable Textures, QuickDraw,
FGVCx Fungi, VGG Flower, Traffic Signs and MSCOCO~@pmfpaper]
of diverse domains by the authors of the original paper.~@pmfpaper
-Finally, this model is finetuned with the support set of every test iteration.
+Finally, this model is fine-tuned with the support set of every test iteration.
Every time the support set changes, we need to fine-tune the model again.
In a real-world scenario this should not be the case, because the support set is fixed and only the query set changes.
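The per-episode fine-tuning described here can be sketched minimally. The sketch below is an assumption-laden simplification, not the paper's procedure: P>M>F fine-tunes the whole backbone on the support set, while this stand-in only re-fits a hypothetical linear softmax head on fixed support features whenever the support set changes.

```python
import numpy as np

def finetune_head(support_feats, support_labels, n_classes, lr=0.5, steps=200):
    """Re-fit a linear softmax head on the current support set.

    Illustrative stand-in for per-episode fine-tuning: P>M>F updates
    the whole backbone, but here only a linear head on fixed features
    is trained by plain gradient descent on the cross-entropy loss."""
    n, d = support_feats.shape
    W = np.zeros((d, n_classes))
    onehot = np.eye(n_classes)[support_labels]
    for _ in range(steps):
        logits = support_feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        # Gradient of the mean cross-entropy w.r.t. W.
        W -= lr * support_feats.T @ (p - onehot) / n
    return W

# Every time the support set changes, the head must be re-fitted:
rng = np.random.default_rng(0)
support = rng.normal(size=(10, 4))
labels = np.array([0] * 5 + [1] * 5)
support[labels == 1] += 3.0           # make the two toy classes separable
W = finetune_head(support, labels, n_classes=2)
preds = (support @ W).argmax(axis=1)  # predictions on the support set itself
```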
@@ -196,12 +196,12 @@ This transformer was trained on a huge number of images as described in @CAML.
=== Results
The results were not as good as expected.
-This might be caused by the fact that the model was not fine-tuned for any industrial dataset domain.
+This might be because the model was not fine-tuned for any industrial dataset domain.
The model was trained on a large number of general purpose images and is not fine-tuned at all.
Moreover, unlike the P>M>F method, it was not fine-tuned on the support set, which could have a huge impact on performance.
It might also not handle very similar images well.
-Compared the the other two methods, CAML performed poorly in almost all experiments.
+Compared to the other two methods, CAML performed poorly in almost all experiments.
The normal few-shot classification reached only 40% accuracy in @camlperfa at best.
The only test in which it did surprisingly well was the detection of the anomaly class for the cable class in @camlperfb, where it reached almost 60% accuracy.