2024
Wallis, T. S. A., & Reining, L. C. (2024, September 27). A psychophysical evaluation of techniques for Mooney image generation. PeerJ, 12, e18059. doi: 10.7717/peerj.18059
Martin, J. M., & Wallis, T. S. A. (2024, August 16). No evidence that late-sighted individuals rely more on color for object recognition: Reply to Vogelsang et al. PsyArXiv. doi: 10.31234/osf.io/sv4pw
Harrison, W. J., Stead, I., Wallis, T. S. A., Bex, P. J., & Mattingley, J. B. (2024). A computational account of transsaccadic attentional allocation based on visual gain fields. PNAS, 121(27). doi: 10.1073/pnas.2316608121
2022
Kümmerer, M., Bethge, M., & Wallis, T. S. A. (2022). DeepGaze III: Modelling Free-Viewing Human Scanpaths with Deep Learning. Journal of Vision, 22(7).
Pedziwiatr, M. A., Kümmerer, M., Wallis, T. S. A., Bethge, M., & Teufel, C. (2022). Semantic object-scene inconsistencies affect eye movements, but not in the way predicted by contextualized meaning maps. Journal of Vision, 22(2), 9. doi: 10.1167/jov.22.2.9
Rideaux, R., West, R. K., Wallis, T. S. A., Bex, P. J., Mattingley, J. B., & Harrison, W. J. (2022). Spatial structure, phase, and the contrast of natural images. Journal of Vision, 22(1), 4.
2021
Zimmermann, R. S., Borowski, J., Geirhos, R., Bethge, M., Wallis, T. S. A., & Brendel, W. (2021). How Well do Feature Visualizations Support Causal Understanding of CNN Activations? Neural Information Processing Systems (NeurIPS). arXiv: 2106.12447
Funke, C. M., Borowski, J., Stosio, K., Brendel, W., Wallis, T. S. A., & Bethge, M. (2021). Five points to check when comparing visual perception in humans and machines. Journal of Vision, 21(3), 16. doi: 10.1167/jov.21.3.16
Lukashova-Sanz, O., Wahl, S., Wallis, T. S. A., & Rifai, K. (2021). The Impact of Shape-Based Cue Discriminability on Attentional Performance. Vision, 5(2), 18. doi: 10.3390/vision5020018
Borowski, J., Zimmermann, R. S., Schepers, J., Geirhos, R., Wallis, T. S. A., Bethge, M., & Brendel, W. (2021). Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualizations. International Conference on Learning Representations (ICLR). arXiv: 2010.12606
Pedziwiatr, M. A., Kümmerer, M., Wallis, T. S. A., Bethge, M., & Teufel, C. (2021). Meaning maps and saliency models based on deep convolutional neural networks are insensitive to image meaning when predicting human fixations. Cognition, 206, 104465. doi: 10.1016/j.cognition.2020.104465