\section{Material and Methods}\label{sec:material-and-methods}
\subsection{Material}\label{subsec:material}
\subsubsection{Dagster}
\subsubsection{Label Studio}
\subsubsection{PyTorch}
\subsubsection{MVTec}
\subsubsection{ImageNet}
\subsubsection{Anomalib}
% todo maybe remove?
\subsection{Methods}\label{subsec:methods}
\subsubsection{Active Learning}
\subsubsection{Semi-Supervised Learning}
In traditional supervised learning we have a labeled dataset.
Each datapoint is associated with a corresponding target label.
The goal is to fit a model that predicts the labels from the datapoints.
In traditional unsupervised learning there are also datapoints, but no labels are known.
The goal is to find patterns or structures in the data, for example for clustering or dimensionality reduction.
Semi-supervised learning combines these two settings.
Labels are known for some of the data, but for most of it only the raw datapoints are available.
The basic idea is that the unlabeled data, used in combination with the labeled data, can significantly improve model performance.
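A common and simple way to make use of the unlabeled data is pseudo-labeling: confident model predictions on unlabeled datapoints are treated as labels in subsequent training steps.
The following minimal PyTorch sketch illustrates this idea; the function, the confidence threshold and the loss weighting are illustrative assumptions and not the exact setup used in this work.
\begin{verbatim}
import torch
import torch.nn.functional as F

def pseudo_label_step(model, optimizer, labeled_batch,
                      unlabeled_batch, threshold=0.95, weight=0.5):
    x_l, y_l = labeled_batch          # labeled images and targets
    x_u = unlabeled_batch             # unlabeled images

    # Supervised loss on the labeled part of the batch.
    loss = F.cross_entropy(model(x_l), y_l)

    # Derive pseudo-labels from confident predictions.
    with torch.no_grad():
        probs = F.softmax(model(x_u), dim=1)
        confidence, pseudo = probs.max(dim=1)
        mask = confidence > threshold

    # Add a weighted loss term for the confidently pseudo-labeled samples.
    if mask.any():
        loss = loss + weight * F.cross_entropy(model(x_u[mask]), pseudo[mask])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}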
\subsubsection{ROC and AUC}
A receiver operating characteristic (ROC) curve can be used to measure the performance of a classifier on a binary classification task.
Accuracy alone does not reveal much about the balance of the predictions: on an imbalanced dataset a classifier can reach high accuracy while one of the two classes is rarely predicted correctly.
The ROC curve addresses this problem by plotting the true-positive rate against the false-positive rate over all decision thresholds, as shown in \ref{fig:roc-example}.
The closer the curve approaches the upper-left corner, the better the classifier.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{../rsc/Roc_curve.svg}
\caption{Example of a receiver operating characteristic (ROC) curve.}
\label{fig:roc-example}
\end{figure}
Furthermore, the area under this curve (AUC) is a useful single-number metric for the performance of a binary classifier.
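As a minimal sketch of how both quantities can be computed with scikit-learn (the labels and scores below are made-up toy values, not results from this work):
\begin{verbatim}
from sklearn.metrics import roc_curve, roc_auc_score

y_true  = [0, 0, 1, 1, 1, 0, 1, 0]                    # ground-truth labels
y_score = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.6]   # predicted scores

# ROC curve: false-positive rate and true-positive rate per threshold.
fpr, tpr, thresholds = roc_curve(y_true, y_score)

# AUC: area under that curve; 1.0 is perfect, 0.5 is random guessing.
print(roc_auc_score(y_true, y_score))
\end{verbatim}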
\subsubsection{ResNet}
\subsubsection{CNN}
Convolutional neural networks (CNNs) are model architectures especially well suited to processing images, speech and audio signals.
A CNN typically consists of convolutional layers, pooling layers and fully connected layers.
A convolutional layer contains a set of learnable kernels (filters).
Each filter is slid across the input image, and at every position the dot product between the filter and the underlying image patch is computed; the resulting values form a feature map.
Convolutional layers capture features such as edges, textures or shapes.
Pooling layers downsample the feature maps created by the convolutional layers.
This reduces the computational complexity of the overall network and helps against overfitting.
Common pooling operations include average and max pooling.
Finally, after several convolutional layers the feature maps are flattened and passed to a network of fully connected layers that performs the classification or regression task.
\ref{fig:cnn-architecture} shows a typical CNN architecture for a binary classification task.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{../rsc/cnn_architecture}
\caption{Architecture of a convolutional neural network. Image by \href{https://cointelegraph.com/explained/what-are-convolutional-neural-networks}{SKY ENGINE AI}}
\label{fig:cnn-architecture}
\end{figure}
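To make the layer types described above concrete, the following minimal PyTorch sketch builds a small CNN for binary classification; the number of layers, the channel counts and the assumed input size of $224 \times 224$ pixels are illustrative choices, not the architecture used in this work.
\begin{verbatim}
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learnable filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample feature maps
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                           # flatten final feature maps
            nn.Linear(32 * 56 * 56, num_classes),   # fully connected classifier
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Usage: class logits for one 3x224x224 input image.
logits = SmallCNN()(torch.randn(1, 3, 224, 224))
\end{verbatim}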
\subsubsection{Softmax}
The Softmax function converts a vector of $n$ real numbers into a probability distribution.
It is a generalization of the Sigmoid function and is often used as an activation layer in neural networks.
\begin{equation}\label{eq:softmax}
    \sigma(\mathbf{z})_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}} \quad \textrm{for } j \in \{1,\dots,K\}
\end{equation}
The Softmax function is closely related to the Boltzmann distribution, which was first introduced in the 19$^{\textrm{th}}$ century~\cite{Boltzmann}.
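As a small worked example, applying \ref{eq:softmax} to $\mathbf{z} = (1, 2, 3)$ gives
\begin{equation*}
    \sigma(\mathbf{z}) = \left(\frac{e^{1}}{e^{1} + e^{2} + e^{3}},\; \frac{e^{2}}{e^{1} + e^{2} + e^{3}},\; \frac{e^{3}}{e^{1} + e^{2} + e^{3}}\right) \approx (0.09,\, 0.24,\, 0.67),
\end{equation*}
so the entries sum to one and the largest input receives the largest probability mass.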
\subsubsection{Cross Entropy Loss}
% todo maybe remove this
\subsubsection{Adam}