Abstract—Modeling human behavior in dynamic tasks can be challenging. Because human beings share a common set of cognitive processes, there should exist robust cognitive mechanisms that capture human behavior in these tasks. This paper argues for a learning model of human behavior that uses a reinforcement learning (RL) mechanism, which has been widely used in the fields of cognitive modeling and judgment and decision making. The RL model has a generic decision-making structure that is well suited to explaining human behavior in dynamic tasks. The RL model is used to model human behavior in a popular dynamic control task called Dynamic Stocks and Flows (DSF), which featured in a recent Model Comparison Challenge (MCC). The RL model's performance is compared with that of the winning model of the MCC, which also uses an RL mechanism and is the best-known model for explaining human behavior in the DSF task. The comparison reveals that the RL model generalizes to explain human behavior better than the winner model. Furthermore, the RL model generalizes to the human data of the best and worst performers better than the winner model does. These results highlight the potential of experience-based mechanisms like reinforcement learning for explaining human behavior in dynamic tasks.
Index Terms—dynamic tasks, best performer, worst performer, model comparison, model generalization, reinforcement learning
Cite: Varun Dutt, "Explaining Human Behavior in Dynamic Tasks through Reinforcement Learning," Journal of Advances in Information Technology, Vol. 2, No. 3, pp. 177-188, August 2011. doi: 10.4304/jait.2.3.177-188
Copyright © 2013-2020. JAIT. All Rights Reserved
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives License (CC BY-NC-ND 4.0)