fix some errors

lukas-heiligenbrunner 2025-01-24 19:51:55 +01:00
parent 8f28a8c387
commit 71bdb0a207
3 changed files with 11 additions and 11 deletions

View File

@@ -64,8 +64,8 @@ Which is a result that is unexpected (since one can think more samples perform
Clearly, all four graphs show that the performance decreases with an increasing number of good samples.
The conclusion is therefore that the Few-Shot learner should always be trained with classes that are as balanced as possible.
== How does the 3 (ResNet, CAML, P>M>F) methods perform in only detecting the anomaly class?
_How much does the performance improve if only detecting an anomaly or not?
== How do the 3 (ResNet, CAML, P>M>F) methods perform in only detecting the anomaly class?
_How much does the performance improve by only detecting the presence of an anomaly?
How does it compare to PatchCore and EfficientAD#todo[Maybe remove comparison?]?_
@comparisonnormal shows graphs comparing the performance of the ResNet, CAML and P>M>F methods in detecting only the anomaly classes, both including and excluding the good class.

View File

@@ -6,7 +6,7 @@ Anomaly detection has especially in the industrial and automotive field essentia
Lots of assembly lines need visual inspection to find errors, often with the help of camera systems.
Machine learning has helped the field advance a lot in the past.
Most of the time the error rate is below $0.1%$, so plenty of good data and almost no faulty data is available.
So the train data is heavily unbalaned.~#cite(<parnami2022learningexamplessummaryapproaches>)
So the train data is heavily unbalanced.~#cite(<parnami2022learningexamplessummaryapproaches>)
PatchCore and EfficientAD are state-of-the-art algorithms trained only on good data, which then detect anomalies within unseen (but similar) data.
One of their problems is the need for large amounts of training data and long training times.
@@ -25,8 +25,8 @@ How does it compare to well established algorithms such as Patchcore or Efficien
=== How does unbalancing the shot number affect performance?
_Does giving the Few-Shot learner more good than bad samples improve the model performance?_
=== How does the 3 (ResNet, CAML, \pmf) methods perform in only detecting the anomaly class?
_How much does the performance improve if only detecting an anomaly or not?
=== How do the 3 (ResNet, CAML, \pmf) methods perform in only detecting the anomaly class?
_How much does the performance improve by only detecting the presence of an anomaly?
How does it compare to PatchCore and EfficientAD?_
#if inwriting [

View File

@@ -36,7 +36,7 @@ The bottle category contains 3 different defect classes: _broken_large_, _broken
Whereas cable has a lot more defect classes: _bent_wire_, _cable_swap_, _combined_, _cut_inner_insulation_,
_cut_outer_insulation_, _missing_cable_, _missing_wire_, _poke_insulation_.
So many more defect classes are already an indication that a classification task might be more difficult for the cable category.
More defect classes are already an indication that a classification task might be more difficult for the cable category.
#subpar.grid(
figure(image("rsc/mvtec/cable/bent_wire_example.png"), caption: [
@@ -79,7 +79,7 @@ So the model is prone to overfitting to the few training samples and this means
Typically, a few-shot learning task consists of a support set and a query set,
where the support set contains the training data and the query set the evaluation data for real-world evaluation.
A common way to express a few-shot learning problem is the n-way k-shot notation.
For Example 3 target classes and 5 samples per class for training might be a 3-way 5-shot few-shot classification problem.~@snell2017prototypicalnetworksfewshotlearning @patchcorepaper
For example, 3 target classes and 5 samples per class for training might be a 3-way 5-shot few-shot classification problem.~@snell2017prototypicalnetworksfewshotlearning @patchcorepaper
A classical example of how such a model might work is a prototypical network.
These models learn a representation of each class in a reduced dimensionality and classify new examples based on proximity to these representations in an embedding space.~@snell2017prototypicalnetworksfewshotlearning
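To make this concrete, here is a minimal sketch of such a classification step for a 3-way 5-shot episode. It is a hypothetical numpy example, not code from any of the methods evaluated in this work; the random embeddings merely stand in for the output of a trained backbone.
```python
import numpy as np

# Hypothetical 3-way 5-shot episode with 64-dimensional embeddings.
n_way, k_shot, emb_dim = 3, 5, 64
rng = np.random.default_rng(0)

# Support set: k_shot embedded samples for each of the n_way classes.
# In practice these would come from a trained backbone, not random numbers.
support = rng.normal(size=(n_way, k_shot, emb_dim))
query = rng.normal(size=emb_dim)  # one embedded query sample

# Each class prototype is the mean of its support embeddings.
prototypes = support.mean(axis=1)  # shape: (n_way, emb_dim)

# Assign the query to the class with the nearest prototype.
distances = np.linalg.norm(prototypes - query, axis=1)
predicted_class = int(np.argmin(distances))
print(predicted_class)
```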
@@ -127,10 +127,10 @@ $ <crel>
Equation~$cal(L)(p,q)$ @crelbatched #cite(<handsonaiI>) is the Binary Cross Entropy Loss for a batch of size $cal(B)$ and is used for model training in this Practical Work.
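As a rough illustration, the following is a minimal numpy sketch of a batched Binary Cross Entropy; the function name, the arguments `p` and `q`, and the mean over the batch follow the standard definition and are assumptions, not necessarily the exact form of @crelbatched.
```python
import numpy as np

def bce_loss(p, q, eps=1e-7):
    """Binary cross entropy averaged over a batch of size B.

    p -- true labels in {0, 1}, shape (B,)
    q -- predicted probabilities in (0, 1), shape (B,)
    """
    q = np.clip(q, eps, 1 - eps)  # guard against log(0)
    return -np.mean(p * np.log(q) + (1 - p) * np.log(1 - q))

# Example batch of size B = 4:
p = np.array([1.0, 0.0, 1.0, 0.0])
q = np.array([0.9, 0.2, 0.6, 0.1])
print(bce_loss(p, q))  # ~0.236
```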
=== Cosine Similarity
To measure the distance between two vectors some common distance measures are used.
One popular of them is the Cosine Similarity (@cosinesimilarity).
It measures the cosine of the angle between two vectors.
The Cosine Similarity is especially useful when the magnitude of the vectors is not important.
Cosine similarity is a widely used metric for measuring the similarity between two vectors (@cosinesimilarity).
It computes the cosine of the angle between the vectors, offering a measure of their alignment.
This property makes cosine similarity particularly effective in scenarios where the
direction of the vectors carries more information than their magnitude.
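A minimal numpy sketch of the formula below (the helper name is an assumption for illustration; nonzero vectors are assumed):
```python
import numpy as np

def cosine_similarity(a, b):
    """cos(theta) = (A . B) / (||A|| * ||B||); assumes nonzero vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])  # same direction, twice the magnitude
print(cosine_similarity(a, b))  # ~1.0 -- magnitude has no effect
```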
$
cos(theta) &:= (A dot B) / (||A|| dot ||B||)\