Add remaining headings and GitHub Actions workflow
Some checks failed
Build LaTeX Document / build (push) Has been cancelled
Build Typst document / build_typst_documents (push) Successful in 20s

lukas-heilgenbrunner 2024-10-28 16:02:53 +01:00
parent 88368ddfbb
commit 2663f1814b
9 changed files with 102 additions and 52 deletions

.github/workflows/buildtypst.yml (new file)

@@ -0,0 +1,13 @@
name: Build Typst document
on: push
jobs:
  build_typst_documents:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Typst
        uses: lvignoli/typst-action@main
        with:
          source_file: typstalt/main.typ
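The workflow above compiles `typstalt/main.typ` on every push. The same build can be reproduced locally; this is a minimal sketch assuming the `typst` CLI is installed and on `PATH`:

```shell
# Compile the thesis source to PDF, matching what the CI action does.
# Output lands next to the source file as typstalt/main.pdf.
typst compile typstalt/main.typ
```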

typstalt/conclusionandoutlook.typ (new file)

@@ -0,0 +1,5 @@
= Conclusion and Outlook
== Conclusion
== Outlook

typstalt/experimentalresults.typ (new file)

@@ -0,0 +1,15 @@
= Experimental Results
== Is Few-Shot learning a suitable fit for anomaly detection?
Should Few-Shot learning be used for anomaly detection tasks?
How does it compare to well established algorithms such as Patchcore or EfficientAD?
== How does imbalancing the shot count affect performance?
Does giving the Few-Shot learner more good samples than bad ones improve the model performance?
== How do the three methods (ResNet, CAML, \pmf) perform in detecting only the anomaly class?
How much does performance improve when the task is reduced to detecting whether a sample is anomalous or not?
How does it compare to PatchCore and EfficientAD?
== Extra: How does Euclidean distance compare to cosine similarity when using ResNet as a feature extractor?

typstalt/implementation.typ (new file)

@@ -0,0 +1,16 @@
= Implementation
== Experiment Setup
// TODO: describe the setup of the experiments, which classes are used, the number of samples, and the kinds of experiments that lead to the graphs
== Jupyter
To get accurate performance measures, the active-learning process was first implemented in a Jupyter notebook.
This helps to choose which of the methods performs best and which one to use in the final Dagster pipeline.
A straightforward machine-learning pipeline was implemented with the help of PyTorch and ResNet-18.
Moreover, the dataset was imported manually with a custom torch dataloader and preprocessed with random augmentations.
After each loop iteration the Area Under the Curve (AUC) was calculated over the validation set to obtain a performance measure.
All those AUC values were visualized in a line plot; see @sec:experimental-results for the results.
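The per-iteration AUC described above can be computed without any ML framework. This is a minimal rank-based (Mann-Whitney U) sketch, illustrative only and not the notebook's actual code; the function name is hypothetical:

```python
def auc_score(labels, scores):
    """Area Under the ROC Curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive sample is scored
    higher than a randomly chosen negative one (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("AUC needs at least one positive and one negative label")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Example: two negatives scored low, two positives scored mostly high.
print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # -> 0.75
```

In the notebook one such value per active-learning iteration would be appended to a list and drawn as the line plot mentioned above.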


@@ -25,7 +25,7 @@ How much does the performance improve if only detecting an anomaly or not?
How does it compare to PatchCore and EfficientAD?
=== Extra: How does Euclidean distance compare to cosine similarity when using ResNet as a feature extractor?
-I've tried different distance measures $->$ but results are pretty much the same.
+// I've tried different distance measures $->$ but results are pretty much the same.
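The observation that the choice of distance measure barely matters has a simple explanation for L2-normalized embeddings: squared Euclidean distance is then an affine function of cosine similarity, d² = 2 − 2·cos, so both induce the same nearest-neighbor ranking. A small framework-free sketch (illustrative only, not the thesis code):

```python
import math

def euclidean(a, b):
    """Plain Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_sim(a, b):
    """Cosine similarity: dot product over the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def normalize(v):
    """Scale a vector to unit length (L2 normalization)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

a, b = normalize([1.0, 2.0, 3.0]), normalize([2.0, 1.0, 0.5])
# For unit vectors the identity d^2 == 2 - 2*cos holds exactly,
# so ranking neighbors by either measure gives the same order.
assert abs(euclidean(a, b) ** 2 - (2 - 2 * cosine_sim(a, b))) < 1e-9
```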
== Outline
todo

typstalt/main.typ

@@ -64,19 +64,15 @@
v(10mm)
},
indent: 2em,
-depth: 3
+depth: 2
)<outline>
#pagebreak(weak: false)
#include "introduction.typ"
#include "materialandmethods.typ"
-= Section Heading
-#cite(<efficientADpaper>)
-== Subsection Heading
-=== Subsubsection Heading
-==== Paragraph Heading
-===== Subparagraph Heading
+#include "implementation.typ"
+#include "experimentalresults.typ"
+#include "conclusionandoutlook.typ"
#set par(leading: 0.7em, first-line-indent: 0em, justify: true)
#bibliography("sources.bib", style: "apa")

typstalt/materialandmethods.typ

@@ -7,16 +7,13 @@ MVTec AD is a dataset for benchmarking anomaly detection methods with a focus on
It contains over 5000 high-resolution images divided into fifteen different object and texture categories.
Each category comprises a set of defect-free training images and a test set of images with various kinds of defects as well as images without defects.
// todo source for https://www.mvtec.com/company/research/datasets/mvtec-ad
// todo example image
//\begin{figure}
// \centering
// \includegraphics[width=\linewidth/2]{../rsc/muffin_chiauaua_poster}
// \caption{Sample images from dataset. \cite{muffinsvschiuahuakaggle_poster}}
// \label{fig:roc-example}
//\end{figure}
#figure(
image("rsc/dataset_overview_large.png", width: 80%),
caption: [Sample images from the MVTec AD dataset. #cite(<datasetsampleimg>)],
) <datasetoverview>
// todo
Todo: describe which categories are used in this bachelor thesis and how many samples there are.
== Methods
@@ -37,9 +34,9 @@ The first and easiest method of this bachelor thesis uses a simple ResNet to calculate
See // todo: link to this section
// todo: proper source
-=== Generalisation from few samples}
+=== Generalisation from few samples
-=== Patchcore}
+=== Patchcore
// todo: also show values for how they perform on MVTec AD
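A common way to turn a handful of ResNet support embeddings into a few-shot classifier, and one plausible reading of the ResNet-based method above, is nearest-prototype classification: each class is represented by the mean of its support embeddings, and a query is assigned to the closest prototype. The sketch below uses toy 2-D vectors and hypothetical function names; it is not the thesis implementation:

```python
import math

def prototype(embeddings):
    """Class prototype = element-wise mean of the support embeddings."""
    n = len(embeddings)
    return [sum(e[i] for e in embeddings) / n for i in range(len(embeddings[0]))]

def classify(query, prototypes):
    """Assign the query to the class whose prototype is nearest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda name: dist(query, prototypes[name]))

# Toy 2-shot setup: one "good" class and one "anomaly" class.
protos = {
    "good":    prototype([[0.0, 0.1], [0.2, 0.0]]),
    "anomaly": prototype([[1.0, 1.1], [0.9, 1.0]]),
}
print(classify([0.1, 0.2], protos))  # -> good
```

With real features the vectors would come from the penultimate layer of ResNet-18 instead of being hand-written, but the decision rule stays the same.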

Binary file added: typstalt/rsc/dataset_overview_large.png (1.4 MiB)

sources.bib

@@ -42,33 +42,41 @@
howpublished = "\url{https://docs.jupyter.org/en/latest/}",
year = {2024},
note = "[Online; accessed 13-May-2024]"
}
@misc{cnnintro,
title={An Introduction to Convolutional Neural Networks},
author={Keiron O'Shea and Ryan Nash},
year={2015},
eprint={1511.08458},
archivePrefix={arXiv},
primaryClass={cs.NE}
}
@misc{cnnarchitectureimg,
author = {},
title = {{What are convolutional neural networks?}},
howpublished = "\url{https://cointelegraph.com/explained/what-are-convolutional-neural-networks}",
year = {2024},
note = "[Online; accessed 12-April-2024]"
}
@inproceedings{liang2017soft,
title={Soft-margin softmax for deep classification},
author={Liang, Xuezhi and Wang, Xiaobo and Lei, Zhen and Liao, Shengcai and Li, Stan Z},
booktitle={International Conference on Neural Information Processing},
pages={413--421},
year={2017},
organization={Springer}
}
@misc{datasetsampleimg,
author = {},
title = {{The MVTec anomaly detection dataset (MVTec AD)}},
howpublished = "\url{https://www.mvtec.com/company/research/datasets/mvtec-ad}",
year = {2024},
note = "[Online; accessed 12-April-2024]"
}
@inbook{Boltzmann,
place = {Cambridge},
@@ -82,11 +90,11 @@
pages = {4996},
collection = {Cambridge Library Collection - Physical Sciences},
}
@misc{resnet,
title={Deep Residual Learning for Image Recognition},
author={Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
year={2015},
eprint={1512.03385},
archivePrefix={arXiv},
primaryClass={cs.CV}
}