\section{Material and Methods}\label{sec:material-and-methods}
\subsection{Material}\label{subsec:material}
\subsubsection{Dagster}
\subsubsection{Label Studio}
\subsubsection{PyTorch}
\subsubsection{MVTec}
\subsubsection{ImageNet}
\subsubsection{Anomalib}
% todo maybe remove?
\subsection{Methods}\label{subsec:methods}
\subsubsection{Active Learning}
\subsubsection{ROC and AUC}
\subsubsection{ResNet}
\subsubsection{CNN}
Convolutional neural networks (CNNs) are model architectures that are particularly well suited to processing images, speech and audio signals.
A CNN typically consists of convolutional layers, pooling layers and fully connected layers.
A convolutional layer consists of a set of learnable kernels (filters).
Each filter performs a convolution by sliding over the input image; at each position, the dot product between the filter weights and the underlying image patch produces one entry of a feature map.
In this way, convolutional layers capture features such as edges, textures or shapes.
Pooling layers downsample the feature maps produced by the convolutional layers, which reduces the computational complexity of the network and helps to mitigate overfitting.
Common pooling operations include average pooling and max pooling.
Finally, after several convolutional layers, the resulting feature maps are flattened and passed to a stack of fully connected layers that performs the classification or regression task.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{../rsc/cnn_architecture}
\caption{Architecture of a convolutional neural network. Image by \href{https://cointelegraph.com/explained/what-are-convolutional-neural-networks}{SKY ENGINE AI}}
\label{fig:cnn-architecture}
\end{figure}
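
To make the layer types described above concrete, the following sketch defines a minimal CNN in PyTorch with two convolutional layers, max pooling and a fully connected classifier head. The layer sizes, input resolution and number of classes are illustrative assumptions, not the architecture used in this work.

\begin{verbatim}
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):  # num_classes is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn edge/texture filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample by factor 2
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # flatten the final feature maps and classify
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):            # x: (batch, 3, 32, 32)
        x = self.features(x)         # -> (batch, 32, 8, 8)
        x = torch.flatten(x, 1)      # -> (batch, 2048)
        return self.classifier(x)    # -> (batch, num_classes)
\end{verbatim}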
\subsubsection{Softmax}
The softmax function converts a vector of $K$ real numbers into a probability distribution.
It is a generalization of the sigmoid function and is often used as the final activation layer of a neural network.
\begin{equation}\label{eq:softmax}
\sigma(\mathbf{z})_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}} \quad \text{for } j \in \{1,\dots,K\}
\end{equation}
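
For example, applying Equation~\ref{eq:softmax} to $\mathbf{z} = (1, 2, 3)$ gives
\[
\sigma(\mathbf{z}) = \frac{(e^{1}, e^{2}, e^{3})}{e^{1} + e^{2} + e^{3}} \approx (0.09,\, 0.24,\, 0.67),
\]
a vector of positive components that sum to one.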
The softmax function is closely related to the Boltzmann distribution and was first introduced in the 19$^{\textrm{th}}$ century~\cite{Boltzmann}.
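
In practice, the exponentials in Equation~\ref{eq:softmax} can overflow for large inputs, so implementations usually subtract the maximum component first; this leaves the result unchanged because softmax is invariant to adding a constant to every input. A minimal NumPy sketch (illustrative, not the implementation used in this work):

\begin{verbatim}
import numpy as np

def softmax(z):
    # subtracting max(z) avoids overflow in exp() and does not
    # change the result (shift invariance of softmax)
    e = np.exp(z - np.max(z))
    return e / e.sum()

print(softmax(np.array([1.0, 2.0, 3.0])))  # approx. [0.09, 0.24, 0.67]
\end{verbatim}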
\subsubsection{Cross Entropy Loss}
% todo maybe remove this
\subsubsection{Adam}