Something like the overall error of the full linear-subspace image model, together with how much of that error each component accounts for, might make sense too.
In a full PCA/SVD analysis, say of data with N observations and D dimensions, one might expect the component magnitudes to fall off logarithmically after the first k << N components that explain most of the variance. The noise, and anything that can't be represented as a linear combination of components, gets split across many small, random-looking components. It's then pretty easy to see how much each PC is worth and how good the rank-k approximation is.
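As a minimal sketch of what that full-spectrum view looks like (on synthetic data, not cryo-EM images; all names here are illustrative), the singular values directly give both the per-PC variance fractions and the rank-k approximation error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: N observations, D dimensions, with k strong planted
# components buried in isotropic noise.
N, D, k = 500, 200, 5
signal = rng.normal(size=(N, k)) @ (10 * rng.normal(size=(k, D)))
X = signal + rng.normal(size=(N, D))
X -= X.mean(axis=0)  # center before PCA

# Singular values are the component magnitudes; squared and normalized,
# they give the fraction of total variance each PC explains.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Quality of the rank-k approximation: the residual Frobenius error is
# exactly the tail of the singular-value spectrum.
Xk = (U[:, :k] * s[:k]) @ Vt[:k]
resid = np.linalg.norm(X - Xk) / np.linalg.norm(X)
print(f"top-{k} variance fraction: {explained[:k].sum():.3f}")
print(f"relative rank-{k} error:   {resid:.3f}")
```

With the planted signal far above the noise floor, the first k variance fractions dominate and everything past them forms the flat tail of small, random-looking components described above.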
For 3DVA, we can't really compute hundreds of PCs to see the full curve, but we can at least be confident that the structural changes within a component are most parsimoniously described together. I would really like to be able to say, for example, whether a PC I connect to some biological process is genuinely more significant (justifying ignoring most of the other PCs), or whether there are very many dynamical modes of similar magnitude, from which we are plucking the ones that happen to have interesting spatial correlations across the structure.
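One generic way to get at that "genuinely significant vs. one of many similar modes" question, at least in the plain-PCA setting, is a permutation test: shuffle each dimension independently to destroy cross-dimension correlations while keeping the per-dimension marginals, and see which observed singular values rise above the shuffled null. This is a standard trick for ordinary PCA, sketched here on synthetic data; it is not something 3DVA itself does, and whether it can be adapted to its setting is an open question.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data with two planted modes plus isotropic noise.
N, D = 300, 100
X = (rng.normal(size=(N, 2)) @ rng.normal(size=(2, D))) * 3
X += rng.normal(size=(N, D))
X -= X.mean(axis=0)
s_obs = np.linalg.svd(X, compute_uv=False)

# Null spectra: permute each column independently, breaking any
# correlation structure across dimensions.
n_perm = 20
s_null = np.empty((n_perm, min(N, D)))
for i in range(n_perm):
    Xp = np.column_stack([rng.permutation(col) for col in X.T])
    Xp -= Xp.mean(axis=0)
    s_null[i] = np.linalg.svd(Xp, compute_uv=False)

# Call a PC "significant" if its singular value exceeds the null's
# 95th percentile at the same rank.
thresh = np.percentile(s_null, 95, axis=0)
n_sig = int(np.sum(s_obs > thresh))
print(f"components above the permutation null: {n_sig}")
```

Here the two planted modes clear the null threshold by a wide margin, while the bulk of the spectrum sits at or near it, which is exactly the distinction between a few dominant modes and many modes of similar magnitude.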