fix some typos and add some stuff

lukas-heiligenbrunner 2023-05-27 11:40:13 +02:00
parent c0c51f8ecf
commit 6bdea981f4


@@ -3,6 +3,8 @@
\usepackage{bbm}
\usepackage{mathtools}
\usepackage[inline]{enumitem}
%%
%% \BibTeX command to typeset BibTeX logo in the docs
\AtBeginDocument{%
@@ -69,7 +71,7 @@
%% The abstract is a short summary of the work to be presented in the
%% article.
\begin{abstract}
Cross-Model Pseudo-Labeling is a new Framework for generating Pseudo-labels
Cross-Model Pseudo-Labeling is a new framework for generating pseudo-labels
for semi-supervised learning tasks where only a subset of the true labels is known.
It builds upon the existing approach of FixMatch and improves it further by
using two differently sized models that complement each other.
@@ -80,9 +82,9 @@
%% the work being presented. Separate the keywords with commas.
\keywords{neural networks, videos, pseudo-labeling, action recognition}
\received{20 February 2007}
\received[revised]{12 March 2009}
\received[accepted]{5 June 2009}
%\received{20 February 2007}
%\received[revised]{12 March 2009}
%\received[accepted]{5 June 2009}
%%
%% This command processes the author and affiliation and title
@@ -103,6 +105,7 @@ The goal is to fit a model to predict the labels from datapoints.
In traditional unsupervised learning, no labels are known.
The goal is to find patterns and structures in the data.
Moreover, it can be used for clustering or dimensionality reduction.
Combining these two settings yields semi-supervised learning.
Some of the labels are known, but for most of the data only the raw datapoints are available.
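For illustration, such a partially labeled dataset can be represented by explicitly marking the unlabeled datapoints; the following small sketch (purely hypothetical toy data, using the common convention of $-1$ for ``label unknown'') shows such a split:
\begin{verbatim}
import numpy as np

# Hypothetical toy dataset: 1000 datapoints, but only 50 true labels are known.
num_samples, num_labeled = 1000, 50
features = np.random.randn(num_samples, 32)   # raw datapoints
labels = np.full(num_samples, -1)             # -1 marks "label unknown"
labels[:num_labeled] = np.random.randint(0, 10, size=num_labeled)

labeled_mask = labels != -1     # these samples feed the supervised loss
unlabeled_mask = ~labeled_mask  # these samples feed the pseudo-label loss
\end{verbatim}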
@@ -126,7 +129,7 @@ Equation~\ref{eq:fixmatch} defines the loss function that trains the model.
The sum over a batch of size $B_u$ takes the average loss of this batch and should be straightforward.
The input data is augmented in two different ways.
First, there is a weak augmentation $\mathcal{T}_{\text{weak}}(\cdot)$ which only applies basic transformations such as filtering and blurring.
Moreover, there is the strong augmentation $\mathcal{T}_{\text{strong}}(\cdot)$ which does cropouts and edge-detections.
Moreover, there is the strong augmentation $\mathcal{T}_{\text{strong}}(\cdot)$ which applies cutouts and random augmentations.
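As a minimal sketch of this mechanism (assuming a generic PyTorch classifier \texttt{model} and hypothetical helpers \texttt{weak\_augment} and \texttt{strong\_augment}), the pseudo-label is derived from the weakly augmented view and only used when its confidence exceeds a threshold $\tau$:
\begin{verbatim}
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_unlabeled, tau=0.95):
    # The weakly augmented view produces the pseudo-label candidate.
    logits_weak = model(weak_augment(x_unlabeled))       # weak_augment: assumed helper
    probs = torch.softmax(logits_weak.detach(), dim=-1)
    confidence, pseudo_label = probs.max(dim=-1)

    # The strongly augmented view is trained to match the pseudo-label.
    logits_strong = model(strong_augment(x_unlabeled))   # strong_augment: assumed helper
    loss = F.cross_entropy(logits_strong, pseudo_label, reduction="none")

    # Keep only samples whose confidence exceeds the threshold tau.
    mask = (confidence >= tau).float()
    return (mask * loss).mean()
\end{verbatim}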
\begin{equation}
\label{eq:fixmatch}
@@ -144,7 +147,11 @@ Otherwise, it is kept and used to train the model further.
\section{Cross-Model Pseudo-Labeling}\label{sec:cross-model-pseudo-labeling}
The approach newly introduced in this paper is called Cross-Model Pseudo-Labeling (CMPL)\cite{Xu_2022_CVPR}.
Figure~\ref{fig:cmpl-structure} shows its structure.
We define two different models, a smaller and a larger one.
We define two different models: a smaller one, the auxiliary model, and a larger one, the primary model.
The label SG stands for stop gradient.
The pseudo-labels of each model are fed into the loss of the opposite model.
In this way the two models train each other.
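A minimal sketch of this cross-supervision (assuming two PyTorch classifiers \texttt{primary} and \texttt{auxiliary}, a weakly and a strongly augmented view of the same unlabeled clip, and a confidence threshold $\tau$; all names are hypothetical) could look as follows:
\begin{verbatim}
import torch
import torch.nn.functional as F

def cross_model_loss(primary, auxiliary, x_weak, x_strong, tau=0.95):
    # Each model predicts pseudo-labels on the weak view; detaching the
    # computation corresponds to the stop-gradient (SG) label in the figure.
    with torch.no_grad():
        conf_p, label_p = torch.softmax(primary(x_weak), dim=-1).max(dim=-1)
        conf_a, label_a = torch.softmax(auxiliary(x_weak), dim=-1).max(dim=-1)

    # Cross supervision: the primary model learns from the auxiliary model's
    # pseudo-labels and vice versa, each masked by the confidence threshold.
    loss_primary = (conf_a >= tau).float() * F.cross_entropy(
        primary(x_strong), label_a, reduction="none")
    loss_auxiliary = (conf_p >= tau).float() * F.cross_entropy(
        auxiliary(x_strong), label_p, reduction="none")

    return loss_primary.mean() + loss_auxiliary.mean()
\end{verbatim}
In the full objective, this unsupervised term would be combined with the supervised loss and weighted by the hyperparameter $\lambda$ mentioned below.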
\begin{figure}[h]
\centering
@@ -189,6 +196,8 @@ The loss is regulated by a hyperparameter $\lambda$ to enhance the importance of
In Figure~\ref{fig:results} a performance comparison is shown between training on the supervised samples only and several different pseudo-label frameworks.
One can clearly see that the performance gain with the new CMPL framework is quite significant.
For evaluation, the Kinetics-400 and UCF-101 datasets are used.
As backbone models, a 3D-ResNet18 and a 3D-ResNet50 are used.
\begin{figure}[h]
\centering
@@ -198,6 +207,21 @@ One can clearly see that the performance gain with the new CMPL framework is qui
\label{fig:results}
\end{figure}
\section{Further schemes}\label{sec:further-schemes}
How the pseudo-labels are generated may impact the overall performance.
In this paper, the pseudo-labels are obtained by the cross-model approach,
but there might be other strategies.
For example:
\begin{enumerate*}
\item Self-First: Each network uses just its own prediction if it is confident enough.
If not, it uses the prediction of its sibling network.
\item Opposite-First: Each network prioritizes the prediction of the sibling network.
\item Maximum: The most confident prediction is leveraged.
\item Average: The two predictions are averaged before deriving the pseudo-label
\end{enumerate*}.
These are just alternative approaches one can keep in mind; a short sketch contrasting the selection rules follows below.
This does not mean they are better; in fact, they performed even worse in this study.
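To make these alternatives concrete, the following sketch (a hypothetical helper operating on the per-sample class probabilities of the two networks) contrasts the four selection rules:
\begin{verbatim}
import torch

def select_pseudo_label(probs_self, probs_sibling, scheme="opposite-first", tau=0.95):
    conf_self, label_self = probs_self.max(dim=-1)
    conf_sib, label_sib = probs_sibling.max(dim=-1)

    if scheme == "self-first":
        # Use the own prediction when confident enough, else the sibling's.
        use_self = conf_self >= tau
        label = torch.where(use_self, label_self, label_sib)
        conf = torch.where(use_self, conf_self, conf_sib)
    elif scheme == "opposite-first":
        # Prefer the sibling's prediction, fall back to the own one.
        use_sib = conf_sib >= tau
        label = torch.where(use_sib, label_sib, label_self)
        conf = torch.where(use_sib, conf_sib, conf_self)
    elif scheme == "maximum":
        # Take whichever prediction is more confident.
        use_self = conf_self >= conf_sib
        label = torch.where(use_self, label_self, label_sib)
        conf = torch.maximum(conf_self, conf_sib)
    else:  # "average"
        # Average the distributions before deriving the pseudo-label.
        conf, label = ((probs_self + probs_sibling) / 2).max(dim=-1)

    mask = conf >= tau  # only sufficiently confident pseudo-labels are kept
    return label, mask
\end{verbatim}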
%%
%% The next two lines define the bibliography style to be used, and
%% the bibliography file.