3DGazeNet: Generalizing Gaze Estimation with Weak-Supervision from Synthetic Views

1Imperial College London, 2Huawei Noah's Ark Lab UK, 3Insightface

European Conference on Computer Vision (ECCV), 2024


[Teaser figure]

3DGazeNet reliably predicts 3D eye meshes and gaze direction from images in unconstrained environments.

Abstract

Developing gaze estimation models that generalize well to unseen domains and in-the-wild conditions remains a challenge with no known best solution. This is mostly due to the difficulty of acquiring ground-truth data that cover the distribution of faces, head poses, and environments found in the real world. Most recent methods attempt to close the gap between specific source and target domains using domain adaptation. In this work, we propose to train general gaze estimation models that can be directly employed in novel environments without adaptation. To do so, we leverage the observation that head, body, and hand pose estimation benefit from being recast as dense 3D coordinate prediction, and similarly express gaze estimation as regression of dense 3D eye meshes. To close the gap between image domains, we create a large-scale dataset of diverse faces with gaze pseudo-annotations, which we extract based on the 3D geometry of the face, and design a multi-view supervision framework to balance their effect during training. We evaluate our method on the task of gaze generalization, demonstrating improvements of up to 23% over the state of the art when no ground-truth data are available, and up to 10% when they are.
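To make the multi-view supervision concrete, below is a minimal PyTorch sketch of a consistency term between two synthetic views of the same subject. The function name, tensor shapes, and L1 penalty are illustrative assumptions rather than the paper's exact formulation; the relative rotation between views is known by construction, since the views are rendered synthetically.

import torch

def multiview_consistency_loss(eyes_a, eyes_b, R_ab):
    """Hypothetical sketch: penalize disagreement between 3D eye meshes
    predicted from two synthetic views of the same subject.

    eyes_a, eyes_b: (B, N, 3) predicted 3D eye vertices per view.
    R_ab:           (B, 3, 3) rotation mapping view a into view b,
                    known exactly because the views are synthesized.
    """
    # Rotate view-a predictions into the frame of view b, then compare.
    eyes_a_in_b = torch.einsum('bij,bnj->bni', R_ab, eyes_a)
    return (eyes_a_in_b - eyes_b).abs().mean()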

Method


[Method overview figure]

Overview of 3DGazeNet. (a) During training, we employ either single images with ground-truth annotations or pairs of synthetic views of the same subject with pseudo-annotations; different sets of losses are applied depending on the input source. (b) Detailed illustration of the multi-view consistency loss. (c) The base network (3DEyeNet) consists of a ResNet backbone and two fully connected layers producing the 3D eye mesh and gaze vector outputs.
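For concreteness, the following minimal PyTorch sketch mirrors the base network described in (c): a ResNet backbone followed by two fully connected heads, one regressing the 3D eye mesh and one the gaze vector. The backbone depth, vertex count, and unit-normalization of the gaze output are assumptions for illustration, not the paper's exact configuration.

import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class EyeNet3D(nn.Module):
    # Sketch of 3DEyeNet: ResNet features feed two fully connected heads.
    # num_vertices is a placeholder; the actual eye template size may differ.
    def __init__(self, num_vertices=481):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()   # expose the 512-d pooled feature
        self.backbone = backbone
        self.mesh_head = nn.Linear(512, num_vertices * 3)  # 3D eye mesh
        self.gaze_head = nn.Linear(512, 3)                 # 3D gaze vector

    def forward(self, img):                                # img: (B, 3, H, W)
        feat = self.backbone(img)                          # (B, 512)
        mesh = self.mesh_head(feat).view(feat.size(0), -1, 3)
        gaze = F.normalize(self.gaze_head(feat), dim=-1)   # unit gaze vector
        return mesh, gaze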

3D Eye Mesh Annotations

[3D eye mesh annotation figure]

(a) The employed rigid 3D eyeball mesh template. (b) Ground-truth data generation, applied to gaze estimation datasets with available gaze labels. (c) Pseudo-ground-truth data generation, applied to arbitrary face images without gaze labels.
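As a rough illustration of step (c), the snippet below derives a gaze pseudo-label from a fitted rigid eyeball template: the gaze direction is taken as the unit ray from the eyeball center through the iris center. The helper is hypothetical and omits the preceding steps of the pipeline (3D face fitting and eyeball template placement).

import numpy as np

def gaze_from_eyeball(eyeball_center, iris_center):
    # Hypothetical helper: gaze pseudo-label as the unit vector from the
    # eyeball center to the iris center, both (3,) points in one 3D frame.
    g = np.asarray(iris_center) - np.asarray(eyeball_center)
    return g / np.linalg.norm(g)

# Example: eyeball at the origin, iris slightly up and toward the camera.
print(gaze_from_eyeball([0.0, 0.0, 0.0], [0.0, 0.1, -1.0]))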

Results


Results of 3DGazeNet trained on MPIIFaceGaze only (blue vectors) versus trained on MPIIFaceGaze combined with ITWG under multi-view supervision (red vectors), applied to the G360 test set (yellow vectors). Especially for side and profile views, the effect of the extended variability of ITWG is significant.

[Results figure]

Results from applying our model to difficult cases, including faces in profile pose, faces with glasses, and faces with occlusions or low resolution. 3DGazeNet successfully handles these scenarios and produces reliable gaze predictions.

[Results figure]

BibTeX

@inproceedings{ververas20243dgazenet,
    author    = {Ververas, Evangelos and Gkagkos, Polydefkis and Deng, Jiankang and Christos Doukas, Michail and Guo, Jia and Zafeiriou, Stefanos},
    title     = {3DGazeNet: Generalizing Gaze Estimation with Weak-Supervision from Synthetic Views},
    booktitle = {European Conference on Computer Vision (ECCV)},
    year      = {2024},
}