Preprint alert! We explore 3 exploration tasks, testing if they measure a stable construct & its link to real-world exploration. Findings suggest improved robustness of latent factors compared to single-task estimates. Work with @mirkothm & @cpilab 🔗osf.io/preprints/psya… 🧵⬇️
A growing number of studies use few-armed bandit tasks to test how people explore. Recently, an increasing body of research (including our own) has turned to individual differences in this behaviour, relating model parameters to questionnaire measures of psychiatric traits.
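For readers unfamiliar with the paradigm, here is a minimal sketch of a few-armed bandit task (all details hypothetical, not the tasks from the preprint): on each trial the participant picks one of several arms and receives a noisy reward drawn from that arm's distribution.

```python
import random

def play_bandit(arm_means, n_trials, choose, noise_sd=1.0, seed=0):
    """Simulate n_trials of a bandit: `choose` maps the history so far
    to an arm index; rewards are Gaussian around that arm's mean."""
    rng = random.Random(seed)
    history = []  # list of (chosen arm, observed reward) pairs
    for _ in range(n_trials):
        arm = choose(history)
        reward = rng.gauss(arm_means[arm], noise_sd)
        history.append((arm, reward))
    return history

# Example: a "participant" who chooses arms at random on a 3-armed bandit.
trials = play_bandit([0.0, 0.5, 1.0], n_trials=10,
                     choose=lambda history: random.randrange(3))
```

How people depart from random choice here (exploiting high-value arms vs. sampling uncertain ones) is what the computational models try to capture.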
This type of approach is very valuable. However, to treat individual differences in model parameters as cognitive traits, we need these parameters to be: 🕐stable over time 🔗convergent across tasks 📋related to real-world exploration
We tested these requirements by collecting data on three bandit tasks, five questionnaires and three working memory tasks. To assess temporal stability, we retested participants on all tasks after 6 weeks.
With standard modelling approaches, the test-retest reliability of all model parameters and most task measures was rather poor. We also found low correlations between model parameters from different tasks, even though these parameters are meant to capture the same exploration strategies.
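As a sketch of the kind of parameters involved (a common choice rule in this literature, with illustrative parameter names — not necessarily the preprint's exact models): a softmax over estimated value plus an uncertainty bonus, where the inverse temperature governs random exploration and the bonus weight governs directed exploration.

```python
import numpy as np

def choice_probs(values, uncertainties, beta=2.0, gamma=0.5):
    """Softmax choice probabilities over value + gamma * uncertainty.
    beta: inverse temperature (random exploration);
    gamma: uncertainty-bonus weight (directed exploration)."""
    utilities = values + gamma * uncertainties
    exp_u = np.exp(beta * (utilities - utilities.max()))  # numerically stable
    return exp_u / exp_u.sum()

# Example: arm 0 is less valuable but more uncertain than arm 1.
p = choice_probs(np.array([0.4, 0.6]), np.array([0.3, 0.1]))
```

It is per-participant estimates of parameters like beta and gamma, fitted separately to each task, whose test-retest reliability and cross-task correlations turned out to be low.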