Dear Oli, dear Ali, dear Daniel,
I think I’ve been running into issues similar to those described above (I’m running cryoSPARC version 2.15.0) while trying to refine a structure with a box size of 800 px on a cluster equipped with NVIDIA GV100GL Tesla V100 GPUs with 16 GB of GPU RAM each. I was using the new homogeneous refinement with all parameters at their defaults, and got this error shortly after the start of iteration 0:
Setting the GPU batch size to 100 let the jobs run further, but they still failed in the end. Incidentally, non-uniform refinement jobs also fail with this box size.
My structure is very large, but I can downsample a bit (to a box size of 600 px) without reaching the Nyquist limit. With that, homogeneous refinement now runs all the way to the end, but NU refinement failed again, so I now have a whole collection of error messages.
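For what it’s worth, a back-of-envelope calculation of per-volume memory already suggests why the larger box is so much tighter on a 16 GB card. This is just a rough sketch I put together, not a description of cryoSPARC’s actual allocations (refinement holds several volumes plus FFT workspace at once, so real usage is a multiple of these numbers):

```python
def volume_gib(box_size, bytes_per_voxel=4):
    """Memory for one box_size^3 volume, in GiB (float32 by default)."""
    return box_size ** 3 * bytes_per_voxel / 2 ** 30

for box in (800, 600):
    real = volume_gib(box)      # real-space float32 volume
    cplx = volume_gib(box, 8)   # Fourier-space complex64 volume
    print(f"box {box}: real {real:.2f} GiB, complex {cplx:.2f} GiB")
```

A single 800 px complex volume is already ~3.8 GiB versus ~1.6 GiB at 600 px, a factor of (800/600)³ ≈ 2.4, which matches my experience that only the downsampled box fits.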
So my question is: should these jobs actually run, or am I doing something wrong? Have there been any developments since last December? Has the cryoSPARC team done any benchmarking of memory requirements for refinement jobs? Happy to provide further details if needed…