From a358401ffbe0a1a2e7ca6d8e2746403622344f5a Mon Sep 17 00:00:00 2001
From: lukas-heilgenbrunner
Date: Fri, 20 Dec 2024 12:33:54 +0100
Subject: [PATCH] add new sections and some todos

---
 materialandmethods.typ | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/materialandmethods.typ b/materialandmethods.typ
index 06e5744..e8ec933 100644
--- a/materialandmethods.typ
+++ b/materialandmethods.typ
@@ -210,7 +210,7 @@ CAML (Context aware meta learning) is one of the state-of-the-art methods for fe
 Todo
 
 === Softmax
-
+#todo[Maybe remove this section]
 The Softmax function @softmax #cite() converts $n$ numbers of a vector into a probability distribution.
 Its a generalization of the Sigmoid function and often used as an Activation Layer in neural networks.
 
@@ -222,6 +222,7 @@ The softmax function has high similarities with the Boltzmann distribution and w
 
 === Cross Entropy Loss
+#todo[Maybe remove this section]
 Cross Entropy Loss is a well established loss function in machine learning.
 Equation~\eqref{eq:crelformal}\cite{crossentropy} shows the formal general definition of the Cross Entropy Loss.
 And equation~\eqref{eq:crelbinary} is the special case of the general Cross Entropy Loss for binary classification tasks.
 
@@ -234,7 +235,9 @@
 $
 
 Equation~$cal(L)(p,q)$~\eqref{eq:crelbinarybatch}\cite{handsonaiI} is the Binary Cross Entropy Loss for a batch of size $cal(B)$ and used for model training in this Practical Work.
 
-=== Mathematical modeling of problem
+=== Cosine Similarity
+
+=== Euclidean Distance
 
 == Alternative Methods
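Editor's note on the softmax prose touched by the first hunk: the definition it references (mapping $n$ numbers to a probability distribution) can be sketched in a few lines of NumPy. This is an illustrative sketch only, not code from the thesis; the function name and the max-subtraction stability trick are my additions.

```python
import numpy as np

def softmax(x):
    # Subtract the max before exponentiating for numerical stability;
    # softmax is invariant to adding a constant to every input.
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

probs = softmax(np.array([1.0, 2.0, 3.0]))
# probs sums to 1 and preserves the ordering of the inputs
```

The result is a valid probability distribution: entries are positive, sum to one, and larger inputs receive larger probabilities, which is why softmax is commonly used as the final activation layer of a classifier.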
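Editor's note on the Cross Entropy Loss prose in the second and third hunks: the binary batch form referenced there (`eq:crelbinarybatch`, batch size $cal(B)$) can be sketched as the mean of $-[y log(p) + (1-y) log(1-p)]$ over the batch. A minimal NumPy sketch, not the thesis implementation; the function name and the `eps` clipping guard are my assumptions.

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Mean over the batch of -[y*log(p) + (1-y)*log(1-p)];
    # clipping with eps guards against log(0) for p in {0, 1}.
    p = np.clip(y_pred, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p)))

# Two-sample batch: a confident correct positive and a confident correct negative.
loss = binary_cross_entropy(np.array([1.0, 0.0]), np.array([0.9, 0.1]))
# loss = -log(0.9), approximately 0.105
```

Confident correct predictions drive the loss toward zero, while confident wrong predictions are penalized heavily, which is the behavior that makes this loss suitable for training binary classifiers.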