I have a large number of particles I am attempting to split in order to use the local SSD cache for processing. I’ve split my particles into stacks of 1M and would like to use the downsample particles job to generate new particle stacks with smaller total file sizes.
When I try running downsample particles, it consistently hangs indefinitely after processing the 40th stack, regardless of which particle subset it is given or the number of threads indicated in the settings. Has anyone encountered this problem?
I was facing the same issue today. My job repeatedly got stuck after a specific number of batches (in my case 64/88). But when I waited long enough (maybe another 30-60 min for some 180,000 particles in a 560 px box originally), it did finish, without writing any logs for the remaining batches. So you may just need to be patient.
If you run a clone of this job, it would be interesting to watch the job directory and see whether files are being added to it or to the downsample subdirectory during the 30-60 min period of apparent inactivity.
Another interesting test would be another clone of the job with the Crop to box size and Fourier crop to box size parameters left blank.
If you run these tests, please could you update us on your findings?