Academic Journals Database
Disseminating quality controlled scientific knowledge

Explaining Human Behavior in Dynamic Tasks through Reinforcement Learning

Author(s): Varun Dutt

Journal: Journal of Advances in Information Technology
ISSN 1798-2340

Volume: 2
Issue: 3
Start page: 177
Date: 2011

Keywords: dynamic tasks | best performer | worst performer | model comparison | model generalization | reinforcement learning

ABSTRACT
Modeling human behavior in dynamic tasks can be challenging. Because human beings share a common set of cognitive processes, there should be robust cognitive mechanisms that capture human behavior in these tasks. This paper argues for a learning model of human behavior built on a reinforcement learning (RL) mechanism, which has been widely used in cognitive modeling and in judgment and decision making. The RL model has a generic decision-making structure that is well suited to explaining human behavior in dynamic tasks. It is used here to model human behavior in a popular dynamic control task, Dynamic Stocks and Flows (DSF), which featured in a recent Model Comparison Challenge (MCC). The RL model's performance is compared to that of the winning model of the MCC, which also uses an RL mechanism and is the best-known model of human behavior in the DSF task. The comparison reveals that the RL model generalizes to explain human behavior better than the winner model. Furthermore, the RL model generalizes to the human data of the best and worst performers better than the winner model does. These results highlight the potential of experience-based mechanisms such as reinforcement learning to explain human behavior in dynamic tasks.
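The generic RL decision-making structure the abstract refers to can be illustrated with a minimal sketch: each action keeps a propensity (value) that is nudged toward observed rewards, and choices are made with a softmax over those propensities. This is a generic textbook-style formulation, not the paper's actual model; the class name, parameters, and values below are illustrative assumptions.

```python
import math
import random


class SimpleRLAgent:
    """Minimal reinforcement-learning chooser (illustrative sketch).

    Each action keeps a propensity that moves a fraction ``alpha``
    toward each observed reward; actions are selected via a softmax
    (Boltzmann) rule with temperature ``tau``. The paper's actual
    model and parameter values are not reproduced here.
    """

    def __init__(self, actions, alpha=0.3, tau=1.0, seed=None):
        self.alpha = alpha                  # learning rate
        self.tau = tau                      # softmax temperature
        self.q = {a: 0.0 for a in actions}  # action propensities
        self.rng = random.Random(seed)

    def choose(self):
        # Softmax choice: higher-propensity actions are more likely,
        # but every action retains some probability (exploration).
        weights = [math.exp(self.q[a] / self.tau) for a in self.q]
        r = self.rng.random() * sum(weights)
        for action, w in zip(self.q, weights):
            r -= w
            if r <= 0:
                return action
        return next(reversed(self.q))  # guard against rounding

    def learn(self, action, reward):
        # Move the chosen action's propensity toward the reward.
        self.q[action] += self.alpha * (reward - self.q[action])
```

In a simple two-option task where one option pays off and the other does not, repeated choose/learn cycles drive the paying option's propensity up, so the agent increasingly prefers it; this experience-driven drift is the kind of mechanism the abstract credits with capturing human behavior in dynamic tasks.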