\subsection{Evaluation over iteration steps}

In this section I compare the performance of the personalized models across iteration steps. For this, the base model is applied to one of the training data sets of a participant, which is refined by one of the filter configurations. The resulting personalized model is then evaluated. This step is repeated over all training sets, with the previous base model replaced by the new model in each iteration. Additionally, I evaluate the performance of a single iteration step by always training and evaluating the base model on the respective training data. I repeat that experiment with different numbers of training epochs and for the two regularization approaches of \secref{sec:approachRegularization}.
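The iterative procedure above can be sketched as a short loop. This is only an illustrative sketch with simple numeric stand-ins; the names \texttt{apply\_filter}, \texttt{fine\_tune}, and \texttt{evaluate}, as well as the scalar "model", are hypothetical placeholders and not the actual implementation.

\begin{verbatim}
# Sketch of the iterative personalization loop (placeholder code).

def apply_filter(data, config):
    # Placeholder: keep only the samples accepted by the filter configuration.
    return [x for x in data if config(x)]

def fine_tune(model, data, epochs):
    # Placeholder: nudge a scalar "model" toward the mean of the data.
    for _ in range(epochs):
        model += 0.1 * (sum(data) / len(data) - model)
    return model

def evaluate(model, target=1.0):
    # Placeholder score: higher is better, 1.0 is perfect.
    return 1.0 - abs(model - target)

def personalize(base_model, train_sets, filter_config, epochs):
    model = base_model
    scores = []
    for data in train_sets:                         # one iteration per training set
        filtered = apply_filter(data, filter_config)
        model = fine_tune(model, filtered, epochs)  # new model replaces the base
        scores.append(evaluate(model))              # evaluate after each iteration
    return model, scores

model, scores = personalize(0.0, [[0.8, 1.2], [0.9, 1.1]],
                            lambda x: x > 0, epochs=5)
\end{verbatim}

The single-iteration baseline mentioned above corresponds to always passing the original base model into \texttt{fine\_tune} instead of the model from the previous iteration.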

\begin{itemize}

\item How does the personalized model evolve over multiple training steps?

\end{itemize}

\subsubsection{Evolution}

First, we observe how the model performance evolves over the iteration steps. \figref{arg1} shows the S scores for each iteration step of the overall personalized model and the singly trained model. The training data is generated by the \texttt{all\_noise\_hwgt} filter configuration. The epochs and regularization are the same as in the previous experiments. We can see that the first iteration leads