Maximum box size?


I am currently trying to calculate a structure of a large complex with a box size of 512 pixels on a computer with a GTX 1080. It keeps hitting a "MemoryError" in pyfftw. I assume this is due to a limit in the graphics card memory. Is there a way to estimate how much memory is needed for a given box size?

I found in this forum the rule of 20*(boxsize)^3 bytes for version 0.3.9, but it apparently doesn't hold for the current version.
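For a rough sanity check, a back-of-the-envelope estimate can be sketched like this. Note the padding factor, output dtype, and number of temporary copies below are assumptions for illustration, not documented cryoSPARC internals:

```python
# Hedged estimate of host RAM needed for one in-memory 3-D FFT of a
# padded refinement volume. Assumptions (NOT from cryoSPARC docs):
# 2x Fourier padding, complex128 output (16 bytes/voxel), and a few
# temporary copies of the volume held simultaneously.

def fft_memory_gb(box_size, padding=2.0, bytes_per_voxel=16, n_copies=3):
    """Estimate memory (GiB) for a 3-D FFT of a padded cubic volume."""
    padded = int(box_size * padding)          # padded box edge in voxels
    return n_copies * padded**3 * bytes_per_voxel / 1024**3

print(fft_memory_gb(512))  # ~48 GiB under these assumptions
```

Under these (assumed) parameters a 512 box already lands in the tens of gigabytes, which would be consistent with a 32 GB machine failing.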

Thank you for your help

Error message, in case it helps:

Traceback (most recent call last):
File "cryosparc2_worker/cryosparc2_compute/", line 78, in
File "cryosparc2_worker/cryosparc2_compute/jobs/refine/", line 363, in
File "cryosparc2_compute/", line 853, in compute_all_fscs
radwns, fsc_loosemask = get_fsc(rMA, rMB, radwn_max, mask, mask)
File "cryosparc2_compute/", line 803, in get_fsc
fMB = fourier.fft(rMB*maskB)
File "cryosparc2_compute/", line 110, in fft
return fftcenter3(x, fft_threads)
File "cryosparc2_compute/", line 74, in fftcenter3
fv = fftmod.fftn(tmp, threads=th)
File "/opt/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pyfftw/interfaces/", line 183, in fftn
calling_func, normalise_idft=normalise_idft, ortho=ortho)
File "/opt/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pyfftw/interfaces/", line 125, in _Xfftn
FFTW_object = getattr(builders, calling_func)(*planner_args)
File "/opt/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pyfftw/builders/", line 364, in fftn
avoid_copy, inverse, real)
File "/opt/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/pyfftw/builders/", line 127, in _Xfftn
output_array = pyfftw.empty_aligned(output_shape, output_dtype)
File "pyfftw/utils.pxi", line 172, in pyfftw.pyfftw.empty_aligned
File "pyfftw/utils.pxi", line 201, in pyfftw.pyfftw.empty_aligned

These are CPU FFTs from pyfftw, not CUDA FFTs, so this is host RAM rather than GPU memory. How much memory does your workstation have?

What if you change the computational minibatch size from 2000 to a smaller value like 1000?

(I assume these are all single-class jobs; otherwise you would have to multiply the space based on the formula you quoted, 2.5 GB per reference in your case.)

Dear Daniel,

Thanks for your answer.

Yes, I tried a smaller minibatch size (down to 10), and I tried two computers, one with 32 GB of RAM and one with 64 GB; both failed. And indeed, it is a homogeneous refinement.
I think there might be a transient memory spike that messes things up. I will try to monitor it more closely.
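One minimal way to catch such a spike after the fact, sketched here with only the Python standard library (on Linux, `ru_maxrss` is reported in kilobytes; the conversion below assumes that):

```python
# Minimal sketch: report the peak resident set size of the current
# process using the standard-library resource module. On Linux,
# ru_maxrss is in kilobytes; on macOS it is in bytes, so adjust there.
import resource

def peak_rss_gb():
    """Return the peak resident memory of this process in GiB (Linux)."""
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return ru.ru_maxrss / 1024**2  # KB -> GiB

print("peak RSS: %.2f GiB" % peak_rss_gb())
```

Calling this (or logging it periodically from a wrapper script) around the refinement would show whether the allocation briefly exceeds physical RAM even though the steady-state usage looks fine.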