Large box size 1600 gives an error

Hi all,
When I use cryoSPARC to run a refinement with box size 1600, I get the error below. My GPU is an RTX 8000, and my machine has 256 GB of RAM as recommended on your website. Could you give me some advice on how to fix this?
Thanks very much.

[CPU: 102.46 GB] Traceback (most recent call last):
  File "cryosparc_worker/cryosparc_compute/run.py", line 84, in cryosparc_compute.run.main
  File "cryosparc_worker/cryosparc_compute/jobs/refine/newrun.py", line 327, in cryosparc_compute.jobs.refine.newrun.run_homo_refine
  File "/home/spider/software/cryospooarc/cryosparc_worker/cryosparc_compute/newfourier.py", line 416, in resample_resize_real
    return ZT( ifft( ZT(fft(x, stack=stack), N_resample, stack=stack), stack=stack), M, stack=stack), psize_final
  File "/home/spider/software/cryospooarc/cryosparc_worker/cryosparc_compute/newfourier.py", line 119, in ifft
    return ifftcenter3(X, fft_threads)
  File "/home/spider/software/cryospooarc/cryosparc_worker/cryosparc_compute/newfourier.py", line 92, in ifftcenter3
    v = fftmod.irfftn(tmp, threads=th)
  File "/home/spider/software/cryospooarc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pyfftw/interfaces/numpy_fft.py", line 295, in irfftn
    calling_func, **_norm_args(norm))
  File "/home/spider/software/cryospooarc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pyfftw/interfaces/_utils.py", line 128, in _Xfftn
    FFTW_object = getattr(builders, calling_func)(*planner_args)
  File "/home/spider/software/cryospooarc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pyfftw/builders/builders.py", line 545, in irfftn
    avoid_copy, inverse, real, **_norm_args(norm))
  File "/home/spider/software/cryospooarc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pyfftw/builders/_utils.py", line 261, in _Xfftn
    flags, threads, normalise_idft=normalise_idft, ortho=ortho)
  File "pyfftw/pyfftw.pyx", line 1223, in pyfftw.pyfftw.FFTW.__cinit__
ValueError: ('Strides of the output array must be less than ', '2147483647')

Hi @wonderful, unfortunately the error indicates that one of the libraries we use (pyFFTW) uses 32-bit numbers to store the shape of the 3D volume and can’t handle a box size of 1600. We will look into this but probably can’t fix this very soon!
Do you definitely need a box size of 1600? Do you expect to get close to the Nyquist resolution?

Hi,@apunjani

Thanks for your reply. I need to use this box size because my protein is very big; I actually need a box size of 2048 (bin 1). If this problem can be fixed, it will be a wonderful thing for me. I am using an RTX 8000 for the reconstruction, and I very much hope your team can fix this problem.

Hi @wonderful,

We will definitely be looking into this but it may not be something we can fix soon!
To help us out can you provide:

  • your OS version and output of uname -a
  • full output of nvidia-smi
  • full output of free -g

Thanks

Hi @apunjani,

Thanks very much for your reply. My OS version is CentOS 7, and the command output is below.

This may be another question: I set up swap to work around the RAM problem I hit when refining with box size 1024 (the job can complete with swap enabled). I am planning to use a PCIe SSD to host the swap. Is this a good way to solve the problem, or do I need to add more RAM? Looking forward to your reply. Thanks very much.
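For reference, the swap file I am planning would be created along these lines (the mount point and size below are placeholders, not my actual setup):

```shell
# Create and enable a 256 GB swap file on an NVMe SSD mounted at /mnt/nvme
sudo fallocate -l 256G /mnt/nvme/swapfile
sudo chmod 600 /mnt/nvme/swapfile   # swap files must not be world-readable
sudo mkswap /mnt/nvme/swapfile
sudo swapon /mnt/nvme/swapfile
free -g                             # verify the new swap is active
```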

Greetings,

I was wondering, has this problem been solved?
I am using a box size of 1400 for homogeneous refinement but ran into the same error.

Thanks and best,
Yanhe

Hi @yanhezhao,
Unfortunately we don’t yet have support for box sizes greater than ~1290 because of this issue with FFTW.
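For context, the `2147483647` in the traceback is 2³¹ − 1, so if we assume the constraint is that the volume's total element count (equivalently, the largest possible stride) must fit in a signed 32-bit integer, the ~1290 ceiling follows directly from N³ < 2³¹. A quick sketch of the arithmetic (the helper function is ours, not part of cryoSPARC or pyFFTW):

```python
INT32_MAX = 2**31 - 1  # 2147483647, the value quoted in the traceback

def max_cubic_box(limit=INT32_MAX):
    """Largest box size N such that N**3 still fits in the limit."""
    n = round(limit ** (1 / 3))      # start near the cube root
    while n ** 3 > limit:            # step down if we overshot
        n -= 1
    while (n + 1) ** 3 <= limit:     # step up if we undershot
        n += 1
    return n

print(max_cubic_box())  # 1290
print(1290 ** 3)        # 2146689000  -- fits in int32
print(1600 ** 3)        # 4096000000  -- exceeds 2147483647
```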


Hi @apunjani,

What is the box size limitation in NU refinement, and in local alignment with NU enabled?
I thought cross-validation is the most RAM-consuming step, and that it depends on box size and (I guess) particle number, right?

Thanks and have a great day,
Yanhe

Hi @yanhezhao,

It is true that in Non-Uniform refinement, the local cross-validation step is generally the most host-RAM-intensive step due to the large number of FFTs required. This depends only on the box size of the volume, so we recommend downsampling particles if you are running into memory errors. Please also refer to our reply on this post.
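As a rough illustration of why downsampling helps: the raw size of a single-precision cubic volume scales with the cube of the box size, so halving the box cuts per-volume memory by 8×. (This is only the size of one float32 volume, not cryoSPARC's actual allocation, which involves many such buffers and FFT workspaces.)

```python
def volume_gb(box, bytes_per_voxel=4):
    """Size in GiB of one float32 cubic volume of side `box`."""
    return box ** 3 * bytes_per_voxel / 1024 ** 3

print(f"{volume_gb(1024):.1f} GiB")  # 4.0 GiB
print(f"{volume_gb(512):.1f} GiB")   # 0.5 GiB -- 8x smaller
```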

Best,
Michael
