Publications

* denotes equally contributing authors

Peer-reviewed

Song M., Baah P.A., Cai M.B., & Niv Y. (2022). Humans combine value learning and hypothesis testing strategically in multi-dimensional probabilistic reward learning. PLOS Computational Biology, 18(11), e1010699. [pdf] [code]

Song M., Jones C.E., Monfils M.H., & Niv Y. (2022). Explaining the effectiveness of fear extinction through latent-cause inference. Neurons, Behavior, Data analysis, and Theory, https://doi.org/10.51628/001c.35660 [pdf] [code]

Song M., Takahashi Y.K., Burton A.C., Roesch M.R., Schoenbaum G., Niv Y., & Langdon A.J. (2022). Minimal cross-trial generalization in learning the representation of an odor-guided choice task. PLOS Computational Biology, 18(3), e1009897. [pdf] [code]

Langdon A.J., Song M., & Niv Y. (2019). Uncovering the “state”: tracing the hidden state representations that structure learning and decision-making. Behavioural Processes, 103891. [pdf]

Song M.*, Bnaya Z.*, & Ma W.J. (2019). Sources of suboptimality in a minimalistic explore-exploit task. Nature Human Behaviour, 3, 361–368. [pdf] [code]

Song M.*, Wang X.*, Zhang H., & Li J. (2019). Proactive information sampling in value-based decision-making: deciding when and where to saccade. Frontiers in Human Neuroscience, 13, 35. [pdf]

Conference Proceedings

Song M., Niv Y., & Cai M.B. (2021). Using Recurrent Neural Networks to Understand Human Reward Learning. 43rd Annual Meeting of the Cognitive Science Society (CogSci). [poster] [video] [paper]

Song M., Niv Y., & Cai M.B. (2020). Learning multi-dimensional rules with probabilistic feedback via value-based serial hypothesis testing. Workshop on Biological and Artificial Reinforcement Learning, Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS). [talk] [paper]

Song M., Niv Y., & Cai M.B. (2020). Learning what is relevant for rewards via value-based serial hypothesis testing. 42nd Annual Virtual Meeting of the Cognitive Science Society (CogSci). [talk] [paper]

Song M., Cai M.B., & Niv Y. (2019). Learning what is relevant for rewards via value learning and hypothesis testing. Computational and Cognitive Neuroscience Conference (CCN), Berlin, Germany. [poster]

Song M., Langdon A.J., Takahashi Y.K., Schoenbaum G., & Niv Y. (2019). Not smart enough: most rats fail to learn a parsimonious task representation. The Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM), Montreal, QC, Canada. [poster & spotlight presentation]

Song M., Bnaya Z., & Ma W.J. (2017). History effects in a minimalistic explore-exploit task. Computational and Cognitive Neuroscience Conference (CCN), New York, NY. [talk]

Dissertation

Song M. (2022). Learning to Discover Structure in Animal and Human Decision Tasks. Doctoral dissertation, Princeton University. [link] [pdf]


The documents distributed here are provided as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.