Commit 8d8277f4 authored by Alexander Henkel's avatar Alexander Henkel
Browse files

minor changes

parent bc1811fa
...@@ -3,6 +3,4 @@ In this work, I have elaborated a personalization process for human activity rec
I evaluated personalization in general on a theoretical basis with supervised data. These experiments revealed the impact of noise in the highly imbalanced data and showed how soft labels can counter training errors. Based on these insights, several constellations and filter approaches for the training data were implemented to analyze the behavior of the resulting models under the different aspects. I found that using only the predictions of the base model leads to performance decreases, since they contain too much label noise. However, relying only on data covered by user feedback does not surpass the general model either, although such training data hardly contains false labels. Therefore, the training data must consist of a variety of samples containing as few incorrect labels as possible. The resulting denoising approaches all generate training data that leads to personalized models achieving higher F1 and S scores than the general model.
Some configurations even result in performance similar to supervised training.
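The noise-dampening role of soft labels mentioned above can be illustrated with a minimal sketch (not the thesis implementation; the probabilities and target values below are hypothetical): a soft target spreads probability mass over classes, so a possibly mislabeled sample incurs a smaller cross-entropy penalty than a hard one-hot label would.

```python
import numpy as np

def soft_label_cross_entropy(probs, soft_targets):
    """Cross-entropy against soft (non one-hot) targets.

    With soft labels, a single noisy sample contributes a smaller
    loss (and gradient) than a hard one-hot label, which dampens
    the effect of label noise in imbalanced training data.
    """
    eps = 1e-12  # guard against log(0)
    return float(-np.mean(np.sum(soft_targets * np.log(probs + eps), axis=1)))

# Hypothetical 2-class sample: the model is fairly confident of class 1.
probs = np.array([[0.2, 0.8]])
hard = np.array([[1.0, 0.0]])   # hard label claims class 0 (possibly noisy)
soft = np.array([[0.7, 0.3]])   # soft label hedges toward class 0

loss_hard = soft_label_cross_entropy(probs, hard)
loss_soft = soft_label_cross_entropy(probs, soft)
assert loss_soft < loss_hard  # the soft target penalizes the disagreement less
```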
I compared my personalization approach with an active learning implementation as a common personalization method. The sophisticated filter configurations achieve higher S scores, which confirms their robustness. The real-world experiment in cooperation with the University of Basel offered a great opportunity to evaluate my personalization approach on a large variety of users and their feedback behaviors. It confirms that in most cases personalized models outperform the general model.
\ No newline at end of file
<?xml version="1.0" encoding="UTF-8"?>
<module type="PYTHON_MODULE" version="4">
<component name="NewModuleRootManager">
<content url="file://$MODULE_DIR$">
<sourceFolder url="file://$MODULE_DIR$" isTestSource="false" />
<excludeFolder url="file://$MODULE_DIR$/data" />
</content>
<orderEntry type="inheritedJdk" />
<orderEntry type="sourceFolder" forTests="false" />
</component>
</module>
\ No newline at end of file