2025
A-Izzeddin, E. J., Wallis, T. S. A., Mattingley, J. B., & Harrison, W. J. (2025). Low-level features predict perceived similarity for naturalistic images. Journal of Vision, 25(12), 11. doi: 10.1167/jov.25.12.11
Duan, Y., Eicke-Kanani, L., Tatai, F., & Wallis, T. S. A. (2025). Contact angle uncertainty influences perceived causality in launching events. In Proceedings of the Annual Meeting of the Cognitive Science Society, 47.
Mahncke, S., Eicke-Kanani, L., Fabritz, O., & Wallis, T. S. A. (2025, July). The visibility of Eidolon distortions in things and stuff. Journal of Vision, 25(8), 12. doi: 10.1167/jov.25.8.12
van Dam, L. C. J., Kernig, S., Lazarova, K., Ünal, M., Gappa, N., Straube, B., & Wallis, T. S. A. (2025). Delay adaptation does not transfer between discrete button press actions and continuous control. i-Perception, 16(4), 20416695251352067. doi: 10.1177/20416695251352067.
Roth, J., Duan, Y., Mahner, F. P., Kaniuth, P., Wallis, T. S. A., & Hebart, M. N. (2025, April). Ten principles for reliable, efficient, and adaptable coding in psychology and cognitive neuroscience. Communications Psychology, 3, 62. doi: 10.1038/s44271-025-00236-3
Abu Halia, T., Kunst, K., Khanh, T. Q., & Wallis, T. S. A. (2025, February). Recent consumer OLED monitors can be suitable for vision science. Journal of Vision, 25(2), 11. doi: 10.1167/jov.25.2.11
2024
Abu Halia, T., Kunst, K., Khanh, T. Q., & Wallis, T. S. A. (2024, October 22). Recent consumer OLED monitors can be suitable for vision science. arXiv. doi: 10.48550/arXiv.2410.17019
Reining, L. C., & Wallis, T. S. A. (2024, September 27). A psychophysical evaluation of techniques for Mooney image generation. PeerJ, 12, e18059. doi: 10.7717/peerj.18059
Wallis, T. S. A., & Martin, J. M. (2024, August 16). No evidence that late-sighted individuals rely more on color for object recognition: Reply to Vogelsang et al. PsyArXiv. doi: 10.31234/osf.io/sv4pw
Harrison, W. J., Stead, I., Wallis, T. S. A., Bex, P. J., & Mattingley, J. B. (2024). A computational account of transsaccadic attentional allocation based on visual gain fields. PNAS, 121(27). doi: 10.1073/pnas.2316608121
2022
Kümmerer, M., Bethge, M., & Wallis, T. S. A. (2022). DeepGaze III: Modelling Free-Viewing Human Scanpaths with Deep Learning. Journal of Vision, 22(7).
Pedziwiatr, M. A., Kümmerer, M., Wallis, T. S. A., Bethge, M., & Teufel, C. (2022). Semantic object-scene inconsistencies affect eye movements, but not in the way predicted by contextualized meaning maps. Journal of Vision, 22(2), 9. doi: 10.1167/jov.22.2.9
Rideaux, R., West, R. K., Wallis, T. S. A., Bex, P. J., Mattingley, J. B., & Harrison, W. J. (2022). Spatial structure, phase, and the contrast of natural images. Journal of Vision, 22(1), 4.
2021
Zimmermann, R. S., Borowski, J., Geirhos, R., Bethge, M., Wallis, T. S. A., & Brendel, W. (2021). How Well do Feature Visualizations Support Causal Understanding of CNN Activations? Neural Information Processing Systems (NeurIPS). arXiv: 2106.12447
Funke, C. M., Borowski, J., Stosio, K., Brendel, W., Wallis, T. S. A., & Bethge, M. (2021). Five points to check when comparing visual perception in humans and machines. Journal of Vision, 21(3), 16. doi: 10.1167/jov.21.3.16
Lukashova-Sanz, O., Wahl, S., Wallis, T. S. A., & Rifai, K. (2021). The Impact of Shape-Based Cue Discriminability on Attentional Performance. Vision, 5(2), 18. doi: 10.3390/vision5020018
Borowski, J., Zimmermann, R. S., Schepers, J., Geirhos, R., Wallis, T. S. A., Bethge, M., & Brendel, W. (2021). Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualizations. International Conference on Learning Representations (ICLR). arXiv: 2010.12606
Pedziwiatr, M. A., Kümmerer, M., Wallis, T. S. A., Bethge, M., & Teufel, C. (2021). Meaning maps and saliency models based on deep convolutional neural networks are insensitive to image meaning when predicting human fixations. Cognition, 206, 104465. doi: 10.1016/j.cognition.2020.104465
 
