Commit 9cc4aa18 authored by Alex

some text to real world experiments

parent 7cad717f
@@ -29,10 +29,11 @@ In these experiments, I would like to show the effect of noise in soft labels co
\input{figures/experiments/supervised_soft_noise_hw}
\input{figures/experiments/supervised_soft_noise_null}
In the next step, I want to observe whether smoothing could have a negative effect when correct labels are smoothed. Therefore, I repeat the previous experiment but do not flip the randomly selected labels and only apply the smoothing $s$ to them. Again, no major change in performance due to noise in the \textit{hw} labels is expected, which can also be seen in the left graph of \figref{fig:supervisedFalseSoftNoise}. In the case of wrongly smoothed \textit{null} labels, we can see a negative trend in the S score for higher smoothing values, as shown in the right graph. For a greater portion of smoothed labels, the smoothing value has a higher influence on the model's performance.
But for noise values $\leq 0.2\%$, all personalized models still achieve higher S scores than the general models. Therefore, it seems that the personalization benefits from using soft labels. To make sure that the performance increase from smoothing false labels outweighs the drawbacks of falsely smoothed correct labels, I combined both experiments. This is oriented towards what happens to the labels if one of the denoising filters were applied to a hand wash section. First, a certain ratio $n$ of \textit{null} labels is flipped. This expresses the case where the filter falsely classifies a \textit{null} label as hand washing. These false labels are smoothed to value $s$. After that, the same ratio $n$ of correct \textit{hw} labels is smoothed to value $s$. This is equivalent to smoothing the label boundaries of a hand wash action. The resulting performance of the personalizations can be seen in \figref{fig:supervisedSoftNoiseBoth}.
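The combined noise procedure (flip a ratio $n$ of \textit{null} labels and smooth them to $s$, then smooth the same ratio of correct \textit{hw} labels to $s$) can be sketched as follows. This is an illustrative assumption of the setup, not the code used in the experiments: the function name, the 0/1 label encoding, and the NumPy-based implementation are mine.

```python
import numpy as np

def apply_soft_label_noise(labels, n, s, seed=None):
    """Sketch of the combined soft-noise experiment (hypothetical helper).

    labels: 1-D array of hard labels, 0 = null, 1 = hw (assumed encoding).
    Flips a ratio n of null labels to hand washing and smooths them to s,
    then smooths the same ratio n of correct hw labels to s.
    Returns an array of soft labels.
    """
    rng = np.random.default_rng(seed)
    soft = labels.astype(float)  # astype returns a copy

    null_idx = np.flatnonzero(labels == 0)
    hw_idx = np.flatnonzero(labels == 1)

    # Null labels the filter would falsely classify as hand washing:
    # flipped and smoothed to s in one step.
    flipped = rng.choice(null_idx, size=int(round(n * null_idx.size)),
                         replace=False)
    soft[flipped] = s

    # The same ratio of correct hw labels smoothed to s,
    # mimicking smoothed label boundaries of a hand wash action.
    smoothed = rng.choice(hw_idx, size=int(round(n * hw_idx.size)),
                          replace=False)
    soft[smoothed] = s
    return soft
```

For example, with 10 \textit{null} and 10 \textit{hw} labels, $n = 0.2$ and $s = 0.5$, two null labels and two hw labels end up at $0.5$ while the rest keep their hard values.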
%\input{figures/experiments/supervised_false_soft_noise_hw}
%\input{figures/experiments/supervised_false_soft_noise_null}
\input{figures/experiments/supervised_false_soft_noise}
\input{figures/experiments/supervised_soft_noise_both}
\begin{itemize}
\item What impact do hardened labels have compared to soft labels
\item Flip labels and smooth them out
@@ -74,6 +75,4 @@ specificity, sensitivity, f1, S1
\end{itemize}
\section{Real world analysis}
In cooperation with the University of Basel, I evaluated my personalization approach on data collected in a study over multiple participants. In this study, the participants wore a smart watch running the base application almost every day for a month. Most participants showed indications of obsessive hand washing. The collected data covers XX participants with overall XXX hours of sensor data and XXX user feedback indicators. \tabref{arg1} shows the data for each participant in detail. Since no exact labeling for the sensor values exists, I used the quality estimation approach for evaluation.
\section{Results}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figures/experiments/supervised_both_soft.png}
\caption[Supervised soft noise combined]{\textbf{Supervised soft noise combined.} Multiple plots of the S score for personalized models trained with different noise values in the \textit{null} labels. False labels and the same proportion of correct labels are smoothed to value $s$.}
\label{fig:supervisedSoftNoiseBoth}
\end{figure}