Concept Whitening for interpretable image recognition
Zhi Chen, Yijie Bei, Cynthia Rudin (Nature Machine Intelligence, Vol. 2, Dec 2020, pp. 772-782)
Deep neural networks achieve state-of-the-art performance in image recognition. But what does a neural network encode in its latent space?
Ideally, we want the latent space to be disentangled, meaning that different parts of the latent space represent different concepts that are understandable to humans.
With the latent-space axes aligned with predefined concepts, we can see how an image travels through the layers of the neural network.
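To make this concrete, below is a minimal PyTorch sketch of the idea: a module that whitens a layer's activations (decorrelated, unit variance) and then rotates them so that designated axes can be aligned with predefined concepts, letting us read off per-image concept scores at that layer. This is an illustration only, not the paper's released implementation: the names `ConceptWhiteningSketch` and `concept_scores` are hypothetical, the rotation is left as the identity rather than optimized on the Stiefel manifold as in the paper, and the module operates on flattened feature vectors rather than convolutional feature maps.

```python
# Minimal sketch of the concept-whitening idea (illustrative only).
# Assumes inputs of shape (batch, num_features) with batch size > 1.
import torch
import torch.nn as nn


class ConceptWhiteningSketch(nn.Module):
    """Whiten activations, then rotate them so that the first k latent axes
    can be aligned with k predefined concepts."""

    def __init__(self, num_features: int, num_concepts: int, eps: float = 1e-5):
        super().__init__()
        self.num_concepts = num_concepts
        self.eps = eps
        # Orthogonal rotation; in the paper it is optimized so that axis j
        # responds most strongly to examples of concept j. Here it is fixed
        # to the identity purely for illustration.
        self.register_buffer("rotation", torch.eye(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        xc = x - x.mean(dim=0, keepdim=True)       # centre the batch
        cov = xc.T @ xc / x.shape[0]                # feature covariance
        # ZCA whitening matrix Sigma^{-1/2} via eigendecomposition.
        eigvals, eigvecs = torch.linalg.eigh(
            cov + self.eps * torch.eye(cov.shape[0]))
        whiten = eigvecs @ torch.diag(eigvals.clamp_min(self.eps).rsqrt()) @ eigvecs.T
        z = xc @ whiten                             # decorrelated, unit-variance axes
        return z @ self.rotation                    # align chosen axes with concepts

    def concept_scores(self, x: torch.Tensor) -> torch.Tensor:
        # Activation of each image along the first `num_concepts` axes; reading
        # these out at successive layers is how one watches an image "travel"
        # through the network in concept terms.
        return self.forward(x)[:, : self.num_concepts]


if __name__ == "__main__":
    cw = ConceptWhiteningSketch(num_features=64, num_concepts=5)
    features = torch.randn(32, 64)                  # stand-in for a layer's activations
    print(cw.concept_scores(features).shape)        # torch.Size([32, 5])
```

In the paper, the concept-whitening module replaces a batch-normalization layer inside a standard CNN backbone, and the rotation is learned from auxiliary datasets of concept examples so that each chosen axis activates most strongly on its own concept.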