\documentclass[sigconf]{acmart}
\usepackage{amsmath}
\usepackage{bbm}
\usepackage{mathtools}
\usepackage[inline]{enumitem}
%%
%% \BibTeX command to typeset BibTeX logo in the docs
\AtBeginDocument{%
\providecommand\BibTeX{{%
\normalfont B\kern-0.5em{\scshape i\kern-0.25em b}\kern-0.8em\TeX}}}
%% Rights management information. This information is sent to you
%% when you complete the rights form. These commands have SAMPLE
%% values in them; it is your responsibility as an author to replace
%% the commands and values with those provided to you when you
%% complete the rights form.
\setcopyright{acmcopyright}
\copyrightyear{2018}
\acmYear{2018}
\acmDOI{XXXXXXX.XXXXXXX}
%% These commands are for a PROCEEDINGS abstract or paper.
\acmConference[Conference acronym 'XX]{Make sure to enter the correct
conference title from your rights confirmation email}{June 03--05,
2018}{Woodstock, NY}
%
% Uncomment \acmBooktitle if the title of the proceedings is different
% from ``Proceedings of ...''!
%
%\acmBooktitle{Woodstock '18: ACM Symposium on Neural Gaze Detection,
% June 03--05, 2018, Woodstock, NY}
\acmPrice{15.00}
\acmISBN{978-1-4503-XXXX-X/18/06}
%%
%% end of the preamble, start of the body of the document source.
\begin{document}
%%
%% The "title" command has an optional parameter,
%% allowing the author to define a "short title" to be used in page headers.
\title{Cross-Model Pseudo-Labeling for Semi-Supervised Action Recognition}
%%
%% The "author" command and its associated commands are used to define
%% the authors and their affiliations.
%% Of note is the shared affiliation of the first two authors, and the
%% "authornote" and "authornotemark" commands
%% used to denote shared contribution to the research.
\author{Lukas Heiligenbrunner}
\email{k12104785@students.jku.at}
\affiliation{%
\institution{Johannes Kepler University Linz}
\city{Linz}
\state{Upper Austria}
\country{Austria}
\postcode{4020}
}
%%
%% By default, the full list of authors will be used in the page
%% headers. Often, this list is too long, and will overlap
%% other information printed in the page headers. This command allows
%% the author to define a more concise list
%% of authors' names for this purpose.
\renewcommand{\shortauthors}{Heiligenbrunner}
%%
%% The abstract is a short summary of the work to be presented in the
%% article.
\begin{abstract}
Cross-Model Pseudo-Labeling is a new framework for generating pseudo-labels
in semi-supervised learning tasks where only a subset of the true labels is known.
It builds upon the existing FixMatch approach and improves it further by
using two differently sized models that complement each other.
\end{abstract}
%%
%% Keywords. The author(s) should pick words that accurately describe
%% the work being presented. Separate the keywords with commas.
\keywords{neural networks, videos, pseudo-labeling, action recognition}
%\received{20 February 2007}
%\received[revised]{12 March 2009}
%\received[accepted]{5 June 2009}
%%
%% This command processes the author and affiliation and title
%% information and builds the first part of the formatted document.
\maketitle
\section{Introduction}\label{sec:introduction}
Most supervised learning tasks require a large number of training samples.
With too little training data, the model generalizes poorly and does not transfer to real-world tasks.
Labeling datasets is commonly seen as an expensive task, and one wants to avoid it as much as possible.
This is why there is a machine-learning field called semi-supervised learning.
The general approach is to train a model that predicts pseudo-labels, which can then be used to train the main model.
\section{Semi-Supervised Learning}\label{sec:semi-supervised-learning}
In traditional supervised learning we have a labeled dataset, where each datapoint is associated with a corresponding target label.
The goal is to fit a model that predicts the labels from the datapoints.
In traditional unsupervised learning no labels are known; the goal is to find patterns and structures in the data, which can be used, for example, for clustering or dimensionality reduction.
The two techniques combined yield semi-supervised learning: some of the labels are known, but for most of the data we have only the raw datapoints.
The basic idea is that the unlabeled data can significantly improve model performance when used in combination with the labeled data.
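To make this setting concrete, the following toy sketch (Python with synthetic data; all names are illustrative) splits a dataset into a small labeled portion and a large unlabeled portion:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(10_000, 32))      # raw datapoints
y = rng.integers(0, 10, size=10_000)   # ground-truth labels

labeled_frac = 0.01                    # e.g. only 1% labeled
idx = rng.permutation(len(x))
n_labeled = int(labeled_frac * len(x))

x_lab, y_lab = x[idx[:n_labeled]], y[idx[:n_labeled]]
x_unlab = x[idx[n_labeled:]]           # labels unknown/discarded
\end{verbatim}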
\section{FixMatch}\label{sec:fixmatch}
FixMatch is an existing approach, introduced in a Google Research paper from 2020~\cite{fixmatch}.
The key idea of FixMatch is to leverage the unlabeled data by predicting pseudo-labels with a model trained on the known labels.
Both the known labels and the predicted ones are then used side by side to train the model.
The labeled samples guide the learning process, while the unlabeled samples provide additional information.
Not every pseudo prediction is kept to train the model further.
A confidence threshold is defined to evaluate how ``confident'' the model is of its prediction, and the prediction is dropped if the model is not confident enough.
The quantity and quality of the obtained labels are crucial, and they have a significant impact on the overall accuracy.
This means it is important to improve the pseudo-label framework as much as possible.
\subsection{Math of FixMatch}\label{subsec:math-of-fixmatch}
Equation~\ref{eq:fixmatch} defines the loss function that trains the model.
The sum over a batch of size $B_u$ takes the average loss of this batch and should be straightforward.
The input data is augmented in two different ways.
First, there is a weak augmentation $\mathcal{T}_{\text{weak}}(\cdot)$ which only applies basic transformations such as filtering and blurring.
Second, there is a strong augmentation $\mathcal{T}_{\text{strong}}(\cdot)$ which applies cutouts and random augmentations.
\begin{equation}
\label{eq:fixmatch}
\mathcal{L}_u = \frac{1}{B_u} \sum_{i=1}^{B_u} \mathbbm{1}(\max(p_i) \geq \tau) \mathcal{H}(\hat{y}_i,F(\mathcal{T}_{\text{strong}}(u_i)))
\end{equation}
The interesting part is the indicator function $\mathbbm{1}(\cdot)$, which applies a principle called ``confidence-based masking''.
It retains a label only if its largest probability is above a threshold $\tau$, where $p_i \coloneqq F(\mathcal{T}_{\text{weak}}(u_i))$ is a model evaluation on a weakly augmented input.
The second part, $\mathcal{H}(\cdot, \cdot)$, is a standard cross-entropy loss function which takes two inputs: $\hat{y}_i$, the obtained pseudo-label, and $F(\mathcal{T}_{\text{strong}}(u_i))$, a model evaluation on a strongly augmented input.
The indicator function evaluates to $0$ if the pseudo prediction is not confident enough, and the current loss term is dropped.
Otherwise it is kept and trains the model further.
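As a minimal sketch, Equation~\ref{eq:fixmatch} could be implemented as follows, assuming a PyTorch classifier \texttt{model} and augmentation callables \texttt{weak\_aug} and \texttt{strong\_aug} (all names are illustrative, not from the original implementation):
\begin{verbatim}
import torch
import torch.nn.functional as F

def fixmatch_unsupervised_loss(model, u, weak_aug, strong_aug,
                               tau=0.95):
    # Pseudo-label from the weakly augmented view; no gradient
    # flows through it (it acts as a fixed target).
    with torch.no_grad():
        p = torch.softmax(model(weak_aug(u)), dim=1)
        conf, y_hat = p.max(dim=1)
        mask = (conf >= tau).float()  # confidence-based masking

    # Cross-entropy against the strongly augmented view,
    # averaged over the batch; masked samples contribute zero.
    logits = model(strong_aug(u))
    loss = F.cross_entropy(logits, y_hat, reduction="none")
    return (mask * loss).mean()
\end{verbatim}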
\section{Cross-Model Pseudo-Labeling}\label{sec:cross-model-pseudo-labeling}
The approach newly introduced in this paper is called Cross-Model Pseudo-Labeling (CMPL)~\cite{Xu_2022_CVPR}.
Figure~\ref{fig:cmpl-structure} shows its structure.
Two different models are defined: a smaller auxiliary model and a larger primary model.
The SG label denotes a stop-gradient operation.
Each model's pseudo-label predictions are fed into the opposite model as a loss target, so the two models train each other.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{../presentation/rsc/structure}
\caption{Model structures of Cross-Model Pseudo-Labeling}
\label{fig:cmpl-structure}
\end{figure}
\subsection{Math of CMPL}\label{subsec:math}
The loss function of CMPL is similar to the one explained above.
However, we have to distinguish between the loss generated from the supervised samples, where the labels are known, and the unsupervised loss, where no labels are known.
The two equations~\ref{eq:cmpl-losses1} and~\ref{eq:cmpl-losses2} are normal cross-entropy loss functions computed with the supervised labels for the two separate models.
\begin{align}
\label{eq:cmpl-losses1}
\mathcal{L}_s^F &= \frac{1}{B_l} \sum_{i=1}^{B_l} \mathcal{H}(y_i,F(\mathcal{T}^F_{\text{standard}}(v_i)))\\
\label{eq:cmpl-losses2}
\mathcal{L}_s^A &= \frac{1}{B_l} \sum_{i=1}^{B_l} \mathcal{H}(y_i,A(\mathcal{T}^A_{\text{standard}}(v_i)))
\end{align}
Equations~\ref{eq:cmpl-loss3} and~\ref{eq:cmpl-loss4} are the unsupervised losses.
They are very similar to FixMatch, but the pseudo-label for each model is produced by the other model.
\begin{align}
\label{eq:cmpl-loss3}
\mathcal{L}_u^F &= \frac{1}{B_u} \sum_{i=1}^{B_u} \mathbbm{1}(\max(p_i^A) \geq \tau) \mathcal{H}(\hat{y}_i^A,F(\mathcal{T}_{\text{strong}}(u_i)))\\
\label{eq:cmpl-loss4}
\mathcal{L}_u^A &= \frac{1}{B_u} \sum_{i=1}^{B_u} \mathbbm{1}(\max(p_i^F) \geq \tau) \mathcal{H}(\hat{y}_i^F,A(\mathcal{T}_{\text{strong}}(u_i)))
\end{align}
Finally, to train the main objective, an overall loss is calculated by summing all the losses.
The unsupervised part is weighted by a hyperparameter $\lambda$ that balances it against the supervised loss.
\begin{equation}
\label{eq:loss-main-obj}
\mathcal{L} = (\mathcal{L}_s^F + \mathcal{L}_s^A) + \lambda(\mathcal{L}_u^F + \mathcal{L}_u^A)
\end{equation}
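As a minimal sketch, the overall objective in Equation~\ref{eq:loss-main-obj} could be assembled as follows, assuming two PyTorch classifiers \texttt{primary} ($F$) and \texttt{auxiliary} ($A$); the pseudo-labels come from weakly augmented inputs with the confidence masking described above (all helper names are illustrative, not from the paper):
\begin{verbatim}
import torch
import torch.nn.functional as F

def pseudo_label(model, u_weak, tau):
    # Stop gradient (SG): the pseudo-label is a fixed target.
    with torch.no_grad():
        p = torch.softmax(model(u_weak), dim=1)
        conf, label = p.max(dim=1)
        return label, (conf >= tau).float()

def masked_ce(logits, target, mask):
    # Per-sample cross-entropy, zeroed where the pseudo-label
    # was not confident, averaged over the batch.
    ce = F.cross_entropy(logits, target, reduction="none")
    return (mask * ce).mean()

def cmpl_loss(primary, auxiliary, v, y, u_weak, u_strong,
              tau=0.95, lam=1.0):
    # Supervised losses of both models on the labeled batch
    # (standard augmentation assumed applied upstream).
    loss_s = (F.cross_entropy(primary(v), y)
              + F.cross_entropy(auxiliary(v), y))

    # Cross-model unsupervised losses: each model learns from
    # the *other* model's confident pseudo-labels.
    y_a, m_a = pseudo_label(auxiliary, u_weak, tau)
    y_f, m_f = pseudo_label(primary, u_weak, tau)
    loss_u = (masked_ce(primary(u_strong), y_a, m_a)
              + masked_ce(auxiliary(u_strong), y_f, m_f))

    return loss_s + lam * loss_u
\end{verbatim}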
\section{Performance}\label{sec:performance}
Figure~\ref{fig:results} shows a performance comparison between training on the supervised samples only and several different pseudo-label frameworks.
One can clearly see that the performance gain with the new CMPL framework is quite significant.
For evaluation, the Kinetics-400 and UCF-101 datasets are used, with a 3D-ResNet18 and a 3D-ResNet50 as backbone models.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{../presentation/rsc/results}
\caption{Performance comparisons between CMPL, FixMatch and supervised learning only}
\Description{Performance comparison between CMPL, FixMatch and supervised-only training.}
\label{fig:results}
\end{figure}
\section{Further Schemes}\label{sec:further-schemes}
How the pseudo-labels are generated may impact the overall performance.
In this paper the pseudo-labels are obtained by the cross-model approach, but other strategies are conceivable, for example:
\begin{enumerate*}
\item Self-First: each network uses its own prediction if it is confident enough; otherwise it uses its sibling network's prediction.
\item Opposite-First: each network prioritizes the prediction of its sibling network.
\item Maximum: the most confident of the two predictions is used.
\item Average: the two predictions are averaged before deriving the pseudo-label.
\end{enumerate*}
These alternatives, sketched below, are merely worth keeping in mind; they are not necessarily better, and in fact they performed worse in this study.
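As a hedged illustration, the four schemes could operate on the two models' softmax outputs as follows (\texttt{p\_self} and \texttt{p\_other} are illustrative names, not from the paper):
\begin{verbatim}
import torch

def self_first(p_self, p_other, tau):
    # Use the own prediction if confident enough, otherwise
    # fall back to the sibling network's prediction.
    conf_s, label_s = p_self.max(dim=1)
    conf_o, label_o = p_other.max(dim=1)
    return torch.where(conf_s >= tau, label_s, label_o)

def opposite_first(p_self, p_other, tau):
    # Prioritize the sibling network's prediction.
    return self_first(p_other, p_self, tau)

def maximum(p_self, p_other):
    # Take the label of whichever model is more confident.
    conf_s, label_s = p_self.max(dim=1)
    conf_o, label_o = p_other.max(dim=1)
    return torch.where(conf_s >= conf_o, label_s, label_o)

def average(p_self, p_other):
    # Average the two distributions before the argmax.
    return ((p_self + p_other) / 2).argmax(dim=1)
\end{verbatim}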
%%
%% The next two lines define the bibliography style to be used, and
%% the bibliography file.
\bibliographystyle{ACM-Reference-Format}
\bibliography{sources}
%%
%% If your work has an appendix, this is the place to put it.
\appendix
% appendix
\end{document}
\endinput