For the Distributional Principal Autoencoder (DPA), we prove a closed-form identity linking the data score to the geometry of the encoder's level sets, and we show that any latent coordinates beyond the dimension of the data manifold become completely uninformative. This extends the PCA analogy of the original DPA work: DPA learns nonlinear manifolds shaped locally by the data density, with a clear, testable dimensionality criterion, namely conditional independence.