Comparing Object Recognition in Humans and Deep Convolutional Neural Networks - An Eye Tracking Study

Leonard Elia Van Dyck*, Roland Kwitt, Sebastian Jochen Denzler, Walter Roland Gruber

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Deep convolutional neural networks (DCNNs) and the ventral visual pathway share vast architectural and functional similarities in visual challenges such as object recognition. Recent insights have demonstrated that both hierarchical cascades can be compared in terms of both behavior and underlying activation. However, these approaches ignore key differences in the spatial priorities of information processing. In this proof-of-concept study, we compare human observers (N = 45) and three feedforward DCNNs through eye tracking and saliency maps. The results reveal fundamentally different resolutions in the two visualization methods that need to be considered for an insightful comparison. Moreover, we provide evidence that a DCNN with biologically plausible receptive field sizes, called vNet, shows higher agreement with human viewing behavior than a standard ResNet architecture. We find that image-specific factors such as category, animacy, arousal, and valence are directly linked to the agreement of spatial object recognition priorities in humans and DCNNs, whereas other measures such as difficulty and general image properties are not. With this approach, we aim to open up new perspectives at the intersection of biological and computer vision research.
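
As a rough illustration of the kind of comparison the abstract describes, the sketch below turns hypothetical eye-tracking fixation coordinates into a smoothed fixation density map and scores its agreement with a model saliency map. The chosen metrics (Pearson correlation and histogram intersection) and all parameters are illustrative assumptions, not the procedure reported in the paper.

# Minimal sketch (not the authors' code): agreement between a human fixation
# map and a DCNN saliency map. Image size, smoothing sigma, and the metrics
# used here are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter


def fixation_density_map(fixations, shape, sigma=25):
    """Turn (x, y) fixation coordinates into a smoothed density map."""
    density = np.zeros(shape, dtype=float)
    for x, y in fixations:
        density[int(y), int(x)] += 1.0
    density = gaussian_filter(density, sigma=sigma)
    return density / density.sum()


def normalize(sal_map):
    """Rescale a saliency map so it sums to 1 (treat it as a distribution)."""
    sal_map = sal_map - sal_map.min()
    return sal_map / sal_map.sum()


def pearson_cc(a, b):
    """Correlation coefficient (CC), a common saliency-agreement metric."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]


def similarity(a, b):
    """Histogram intersection (SIM) between two normalized maps."""
    return np.minimum(a, b).sum()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h, w = 480, 640
    # Hypothetical human fixations and a hypothetical DCNN saliency map.
    fixations = rng.uniform([0, 0], [w - 1, h - 1], size=(30, 2))
    human_map = fixation_density_map(fixations, (h, w))
    dcnn_map = normalize(gaussian_filter(rng.random((h, w)), sigma=40))
    print(f"CC  = {pearson_cc(human_map, dcnn_map):.3f}")
    print(f"SIM = {similarity(human_map, dcnn_map):.3f}")

In practice, the model saliency map would come from an attribution method applied to the DCNN rather than from random noise; the comparison step itself stays the same.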
Original language: English
Article number: 750639
Number of pages: 15
Journal: Frontiers in Neuroscience
Volume: 15
DOIs
Publication status: Published - 6 Oct 2021

Bibliographical note

Publisher Copyright:
© 2021 van Dyck, Kwitt, Denzler and Gruber.

Keywords

  • seeing
  • vision
  • object recognition
  • brain
  • deep neural network
  • eye tracking
  • saliency map

Fields of Science and Technology Classification 2012

  • 501 Psychology
