Homogeneous Refinement Fails After Reference Based Motion Correction

Hello,

I have been trying to use Reference Based Motion Correction; however, after running the job and using the resulting particles in a Homogeneous Refinement, I get the following error:

[CPU:  37.15 GB  Avail: 352.05 GB]
====== Initial Model ======

[CPU:  37.15 GB  Avail: 352.05 GB]
  Resampling initial model to specified volume representation size and pixel-size...

[CPU:  53.94 GB  Avail: 336.52 GB]
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 95, in cryosparc_master.cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/refine/newrun.py", line 356, in cryosparc_master.cryosparc_compute.jobs.refine.newrun.run_homo_refine
  File "/spshared/apps/cryosparc3/cryosparc_worker/cryosparc_compute/newfourier.py", line 417, in resample_resize_real
    return ZT( ifft( ZT(fft(x, stack=stack), N_resample, stack=stack), stack=stack), M, stack=stack), psize_final
  File "/spshared/apps/cryosparc3/cryosparc_worker/cryosparc_compute/newfourier.py", line 122, in ifft
    return ifftcenter3(X, fft_threads)
  File "/spshared/apps/cryosparc3/cryosparc_worker/cryosparc_compute/newfourier.py", line 95, in ifftcenter3
    v = fftmod.irfftn(tmp, threads=th)
  File "/spshared/apps/cryosparc3/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/pyfftw/interfaces/numpy_fft.py", line 293, in irfftn
    return _Xfftn(a, s, axes, overwrite_input, planner_effort,
  File "/spshared/apps/cryosparc3/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/pyfftw/interfaces/_utils.py", line 128, in _Xfftn
    FFTW_object = getattr(builders, calling_func)(*planner_args)
  File "/spshared/apps/cryosparc3/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/pyfftw/builders/builders.py", line 543, in irfftn
    return _Xfftn(a, s, axes, overwrite_input, planner_effort,
  File "/spshared/apps/cryosparc3/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.8/site-packages/pyfftw/builders/_utils.py", line 260, in _Xfftn
    FFTW_object = pyfftw.FFTW(input_array, output_array, axes, direction,
  File "pyfftw/pyfftw.pyx", line 1223, in pyfftw.pyfftw.FFTW.__cinit__
ValueError: ('Strides of the output array must be less than ', '2147483647')

This happens only with the particles from Reference Based Motion Correction; running the same refinement with the original particles does not produce the error.

Thanks for your help.

@ddox Please can you

  • post a screenshot of the expanded Inputs section under the Inputs and Parameters tab,
  • describe the job type of each job UID mentioned in the Inputs, and
  • list the values of any non-default parameters of the Homogeneous Refinement job.

Are you using EER data and upsampling by a factor of 2 to 8K (which is the default)?

If so, your box size will have doubled after RBMC if you had previously been scaling down to the equivalent of 4K.
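To make the arithmetic concrete, here is a minimal sketch (the pixel size and box size below are assumed illustration values, not taken from your dataset) of how an EER upsampling factor of 2 halves the pixel size and doubles the box needed to cover the same particle:

physical_pixel_A = 0.8   # assumed pixel size at 4K rendering (Angstrom per pixel)
box_at_4K = 512          # assumed particle box size when rendering at 4K

for upsampling in (1, 2):                  # 1 -> 4K, 2 -> 8K (the RBMC default)
    pixel_A = physical_pixel_A / upsampling
    box = box_at_4K * upsampling           # same Angstrom extent needs twice the pixels at 8K
    print(f"{4 * upsampling}K rendering: {pixel_A:.2f} A/px, box {box} px")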

J373 is the RBMC job and J372 is the previous refinement job.

The only non-default parameter is that the SSD cache was disabled.

Thanks!

I had been doing everything with default settings, and I am not very familiar with EER data. If this is the case, how would I go about downscaling to ~4K?

The default is upsampling to 8K. How far did you Fourier-crop the particles when they were extracted? Check the RBMC log for the output box size.
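If it helps, one quick way to check the box size of a particle stack on disk is to read the MRC header with the mrcfile package (a sketch; the path below is just a placeholder, not the actual RBMC output location):

import mrcfile

# Placeholder path: point this at one of the particle stacks written by the RBMC job (J373).
path = "J373/particles_000.mrcs"

with mrcfile.open(path, permissive=True, header_only=True) as m:
    nx, ny, nz = int(m.header.nx), int(m.header.ny), int(m.header.nz)
    print(f"box: {nx} x {ny} px, images in this stack: {nz}")
    print(f"pixel size in header: {m.voxel_size.x:.3f} A/px")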

Otherwise, you can try the Downsample Particles job and run the refinement with the downsampled particle stack.
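For reference, Fourier cropping is what such a downsample does conceptually; here is a minimal NumPy sketch of the idea on a single 2D particle image (an illustration only, not CryoSPARC's implementation, with assumed box sizes):

import numpy as np

def fourier_crop(img, new_box):
    # Downsample a square image by cropping its centered Fourier transform.
    old_box = img.shape[0]
    F = np.fft.fftshift(np.fft.fft2(img))
    lo = old_box // 2 - new_box // 2
    hi = lo + new_box
    F_cropped = F[lo:hi, lo:hi]
    out = np.fft.ifft2(np.fft.ifftshift(F_cropped)).real
    # NumPy's inverse FFT divides by the (smaller) new array size,
    # so rescale to keep the image mean unchanged.
    return out * (new_box / old_box) ** 2

# Example: crop an assumed 1024-px particle to 512 px (the pixel size doubles).
img = np.random.rand(1024, 1024).astype(np.float32)
small = fourier_crop(img, 512)
print(small.shape)  # (512, 512)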


I agree with @rbs_sci about the likely cause. Trying a downsample particles job is probably the easiest way to work around this.
–Harris