For the Distributional Principal Autoencoder (DPA), we prove an exact identity linking the data score to the geometry of the encoder's level sets, and show that any latent coordinates beyond the intrinsic dimension of the data manifold become completely uninformative. Together, these results show that the DPA learns nonlinear manifolds shaped locally by the data density, with a clear, testable criterion for dimensionality (conditional independence), furthering the PCA analogy.
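As a loose linear analogue of the dimensionality criterion (a toy sketch only; the DPA result concerns nonlinear encoders and conditional independence, not variances of linear components), one can observe that for data lying exactly in a low-dimensional subspace, PCA coordinates beyond the intrinsic dimension carry no information at all:

```python
import numpy as np

# Toy linear illustration: data in a 2-D subspace of R^5.
# PCA coordinates beyond the intrinsic dimension have zero variance,
# i.e. they are entirely uninformative about the data.
rng = np.random.default_rng(0)
basis = rng.standard_normal((5, 2))      # spans a 2-D subspace of R^5
coeffs = rng.standard_normal((1000, 2))  # intrinsic coordinates
X = coeffs @ basis.T                     # 1000 samples in R^5, rank 2
Xc = X - X.mean(axis=0)                  # center before PCA
singular_values = np.linalg.svd(Xc, compute_uv=False)
print(np.round(singular_values, 6))      # only the first 2 are nonzero
```

The DPA analysis generalizes this picture to curved manifolds, where "uninformative" is formalized as conditional independence of the extra latent coordinates from the data given the first ones.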