Self-Supervised Learning by Cross-Modal Audio-Video Clustering

Cross-Modal Deep Clustering (XDC) is a novel self-supervised method that leverages unsupervised clustering in one modality (e.g. audio) as a supervisory signal for the other modality (e.g. video). Left: an overview of the XDC model. Right: visualization of XDC clusters on Kinetics videos. We visualize the closest videos to the cluster centroid of the top-2 audio clusters (top right) and the top-2 video clusters (bottom right) in terms of purity with respect to the original Kinetics labels.

Abstract

The visual and audio modalities are highly correlated, yet they contain different information. Their strong correlation makes it possible to predict the semantics of one from the other with good accuracy. Their intrinsic differences make cross-modal prediction a potentially more rewarding pretext task for self-supervised learning of video and audio representations compared to within-modality learning. Based on this intuition, we propose Cross-Modal Deep Clustering (XDC), a novel self-supervised method that leverages unsupervised clustering in one modality (e.g. audio) as a supervisory signal for the other modality (e.g. video). This cross-modal supervision helps XDC utilize the semantic correlation and the differences between the two modalities. Our experiments show that XDC significantly outperforms single-modality clustering and other multi-modal variants. XDC achieves state-of-the-art accuracy among self-supervised methods on several video and audio benchmarks, including HMDB51, UCF101, ESC50, and DCASE. Most importantly, the video model pretrained with XDC significantly outperforms the same model pretrained with full supervision on both ImageNet and Kinetics in action recognition on HMDB51 and UCF101. To the best of our knowledge, XDC is the first method to demonstrate that self-supervision outperforms large-scale full supervision in representation learning for action recognition.
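The core idea of the abstract, using cluster assignments from one modality as pseudo-labels for the other, can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the random feature matrices stand in for encoder outputs, the plain k-means below replaces the paper's deep-clustering setup, and the function names (`kmeans`, `xdc_pseudo_labels`) are hypothetical.

```python
import numpy as np

def kmeans(feats, k, iters=20, seed=0):
    """Plain k-means; returns one cluster assignment (pseudo-label) per sample."""
    rng = np.random.default_rng(seed)
    centroids = feats[rng.choice(len(feats), size=k, replace=False)].copy()
    for _ in range(iters):
        # distance of every sample to every centroid -> nearest-centroid labels
        dists = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for c in range(k):
            members = feats[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return labels

def xdc_pseudo_labels(audio_feats, video_feats, k):
    """Cross-modal supervision: each modality is trained on the
    cluster assignments of the OTHER modality."""
    video_targets = kmeans(audio_feats, k)  # audio clusters supervise the video encoder
    audio_targets = kmeans(video_feats, k)  # video clusters supervise the audio encoder
    return video_targets, audio_targets

# Toy example: random features standing in for audio/video encoder outputs.
audio = np.random.default_rng(1).normal(size=(256, 32))
video = np.random.default_rng(2).normal(size=(256, 32))
video_targets, audio_targets = xdc_pseudo_labels(audio, video, k=8)
```

In the full method, these pseudo-labels would serve as classification targets for training each encoder, with clustering and training alternating as the representations improve.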

Publication
On arXiv

BibTex

@misc{alwassel2019selfsupervised,
    title={Self-Supervised Learning by Cross-Modal Audio-Video Clustering},
    author={Humam Alwassel and Dhruv Mahajan and Lorenzo Torresani and Bernard Ghanem and Du Tran},
    year={2019},
    eprint={1911.12667},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Visualizing XDC Clusters

We visualize the audio and visual clusters of the XDC model pretrained on Kinetics with self-supervision. For each cluster, we show the 10 nearest clips to the cluster center. The lists of clusters below are ordered by clustering purity w.r.t. Kinetics labels.
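The purity used to order the clusters can be computed per cluster as the fraction of members sharing the most common ground-truth label. A short sketch (the function name is ours, not from the paper):

```python
import numpy as np

def cluster_purity(assignments, labels, cluster_id):
    """Fraction of a cluster's members that share its most common true label."""
    members = labels[assignments == cluster_id]
    return np.bincount(members).max() / len(members)

# Toy example: cluster 0 holds labels {2, 2, 3}, cluster 1 holds {1, 1}.
assignments = np.array([0, 0, 0, 1, 1])
labels = np.array([2, 2, 3, 1, 1])
p0 = cluster_purity(assignments, labels, 0)  # 2 of 3 members share label 2
p1 = cluster_purity(assignments, labels, 1)  # all members share label 1
```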

Audio Clusters [listen]
Visual Clusters [look]