@apunjani What controls the 3D variability memory footprint? Is it just the particle size and number of eigenvectors, leading to a similar calculation as for 3D classification?

Thanks.

@DanielAsarnow yes it’s something like

O( N^3 (5*K + K^2) )

N - box size

K - num eigenvectors

So there’s a small quadratic dependence on the number of eigenvectors, but it doesn’t become significant until K ≈ 7 or so
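To make the scaling concrete, here is a rough sketch. Only the O( N^3 (5*K + K^2) ) scaling comes from the formula above; the function name, the 4-bytes-per-voxel (float32) figure, and the lack of any extra constant factor are my own assumptions, so treat the absolute numbers as illustrative only:

```python
# Hypothetical memory-scaling sketch for 3D variability analysis.
# Only the O(N^3 * (5K + K^2)) scaling follows the thread; the
# bytes-per-voxel value and unit constant are assumptions.

def variability_memory_gb(box_size, num_eigenvectors, bytes_per_voxel=4):
    """Estimate footprint (GB) as N^3 * (5K + K^2) voxels of float32."""
    n, k = box_size, num_eigenvectors
    voxels = n ** 3 * (5 * k + k ** 2)
    return voxels * bytes_per_voxel / 1e9

# At N = 256, going from K = 3 to K = 6 grows the (5K + K^2) factor
# from 24 to 66, i.e. 2.75x, already more than the 2x a purely linear
# dependence would give.
print(variability_memory_gb(256, 3))  # ~1.6 GB under these assumptions
print(variability_memory_gb(256, 6))  # ~4.4 GB under these assumptions
```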

Thanks. I’m trying to balance particle box size against K. I seem to get fairly different results as K varies, especially in terms of orthogonality (as I perceive it), though this may be more related to the smoothing factor. I assume the “extra” variance is supposed to be discarded rather than deposited into the last eigenvector or split among them.

On the other hand, in the general case (e.g. SVD of data generated as a linear combination of some non-linear processes), the orthogonal basis vectors are themselves linear combinations of the true basis, so some other information might be needed to recover the true basis. This approach is well developed in time-resolved optical spectroscopies.
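A toy numpy illustration of that last point (this is not cryoSPARC code; the "true" basis vectors and mixing coefficients are made up): when data are generated from two non-orthogonal sources, SVD returns orthogonal components that span the right subspace but mix both sources, so unmixing requires extra information.

```python
import numpy as np

# Two deliberately non-orthogonal "true" sources in a 3-D ambient space.
true_basis = np.array([[1.0, 0.0, 0.0],
                       [0.7, 0.7, 0.0]])

# Independent per-observation amplitudes for each source (fixed seed).
rng = np.random.default_rng(0)
coeffs = rng.normal(size=(500, 2))
data = coeffs @ true_basis  # each row is a linear combination of sources

# SVD yields an orthogonal basis for the data subspace.
_, _, vt = np.linalg.svd(data, full_matrices=False)

# Each of the top two singular vectors overlaps BOTH true sources:
# neither row of this matrix is a clean (x, 0) or (0, x) pattern.
overlap = np.abs(vt[:2] @ true_basis.T)
print(overlap)
```

The components are orthogonal and capture all the variance, yet recovering the individual non-orthogonal sources from them is an additional (underdetermined) step.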