Developing gaze estimation models that generalize well to
unseen domains and in-the-wild conditions remains an open challenge.
This is largely due to the difficulty of acquiring
ground truth data that cover the distribution of faces, head poses, and
environments that exist in the real world. Most recent methods attempt
to close the gap between specific source and target domains using domain
adaptation.
In this work, we propose to train general gaze estimation
models which can be directly employed in novel environments without
adaptation. To do so, we leverage the observation that head, body, and
hand pose estimation benefit from being reformulated as dense 3D coordinate
prediction, and similarly express gaze estimation as regression of dense
3D eye meshes. To close the gap between image domains, we create a
large-scale dataset of diverse faces with gaze pseudo-annotations, which
we derive from the 3D geometry of the face, and design a multi-view
supervision framework to balance the effect of these pseudo-annotations during
training. We test our method on the task of gaze generalization, where we
demonstrate improvements of up to 23% over the state of the art when no ground
truth data are available, and up to 10% when they are.
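
As an illustration of the dense eye-mesh formulation mentioned above, the following sketch recovers a 3D gaze direction from a predicted eye mesh as the unit vector from a fitted eyeball center to the centroid of the iris ring. The vertex index sets, mesh resolution, and the sphere-fitting step are assumptions made for illustration, not the exact procedure of this work.

```python
# Illustrative sketch: gaze direction from a dense 3D eye mesh.
# Assumes per-vertex 3D coordinates in the camera frame and hypothetical
# index sets marking the eyeball surface and the iris boundary.
import numpy as np

EYEBALL_IDX = np.arange(0, 400)   # hypothetical: vertices on the eyeball sphere
IRIS_IDX = np.arange(400, 440)    # hypothetical: vertices on the iris ring


def fit_sphere_center(points: np.ndarray) -> np.ndarray:
    """Least-squares sphere fit; returns the center of the best-fit sphere."""
    # |x - c|^2 = r^2  =>  2 x.c + (r^2 - |c|^2) = |x|^2, linear in (c, d).
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]


def gaze_from_eye_mesh(vertices: np.ndarray) -> np.ndarray:
    """Gaze as the unit vector from the eyeball center to the pupil center."""
    center = fit_sphere_center(vertices[EYEBALL_IDX])  # eyeball rotation center
    pupil = vertices[IRIS_IDX].mean(axis=0)            # centroid of the iris ring
    g = pupil - center
    return g / np.linalg.norm(g)


# Example with a random mesh standing in for a network prediction:
verts = np.random.randn(440, 3)
print(gaze_from_eye_mesh(verts))  # unit 3D gaze vector
```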
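The multi-view supervision idea can likewise be sketched as a consistency term: assuming calibrated camera rotations between views of the same subject, per-view gaze predictions are rotated into a shared frame and penalized for disagreement. The loss form below is an assumption for illustration, not the exact framework of this work.

```python
# Illustrative sketch: multi-view gaze consistency loss.
# R_view_to_ref is assumed known from camera calibration.
import torch


def multiview_consistency_loss(gaze_per_view: torch.Tensor,
                               R_view_to_ref: torch.Tensor) -> torch.Tensor:
    """
    gaze_per_view: (V, 3) unit gaze vectors predicted in each view's camera frame.
    R_view_to_ref: (V, 3, 3) rotations mapping each camera frame to a shared frame.
    Returns the mean angular deviation of the rotated gazes from their consensus.
    """
    # Bring every prediction into the shared reference frame.
    g_ref = torch.einsum('vij,vj->vi', R_view_to_ref, gaze_per_view)
    g_ref = torch.nn.functional.normalize(g_ref, dim=-1)
    # Consensus direction across views.
    mean_dir = torch.nn.functional.normalize(g_ref.mean(dim=0), dim=-1)
    # Angular error of each view against the consensus (clamped for acos stability).
    cos = (g_ref * mean_dir).sum(dim=-1).clamp(-1.0 + 1e-7, 1.0 - 1e-7)
    return torch.acos(cos).mean()
```

In a training loop, such a term could be weighted against a pseudo-annotation fitting loss, e.g. `loss = loss_pseudo + lam * multiview_consistency_loss(g, R)`, so that the hypothetical weight `lam` controls how strongly the pseudo-annotations influence the model.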