Not enough memory for benchmark

The benchmark that comes with CryoSPARC always fails at job 17 (homogeneous refinement) due to lack of memory. I run the job through a SLURM wrapper like so:

#SBATCH --mem={{ (ram_gb)|int }}GB

The predicted value of ram_gb (24 GB) is not enough.

I’m not sure how predicted memory is calculated, but that step needs a little goosing.
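
For context, here is a trimmed-down sketch of the submission template that line comes from. The scheduler options other than the memory request are placeholders for our site; the {{ ... }} variables are the ones CryoSPARC substitutes into cluster scripts:

#!/bin/bash
#SBATCH --job-name=cryosparc_{{ project_uid }}_{{ job_uid }}
#SBATCH --ntasks=1
#SBATCH --cpus-per-task={{ num_cpu }}
#SBATCH --gres=gpu:{{ num_gpu }}
#SBATCH --mem={{ (ram_gb)|int }}GB

{{ run_cmd }}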


We occasionally find jobs crashing on our SLURM cluster due to underestimated memory requirements. Until memory estimates are reworked (I heard this is work-in-progress?) or a bespoke, input-based estimation method is introduced, we’ve built an additional buffer into our submission script:

#SBATCH --mem={{ (ram_gb*2)|int }}G

That seems to do the job for us.
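
If the flat 2× factor over-requests for jobs whose estimates are already large, the same idea can be capped. Assuming the templates are rendered with standard Jinja2 (2.10 or later, which provides the min filter), something like the following should work, where 512 is just an arbitrary per-node ceiling for our hardware:

#SBATCH --mem={{ ([ram_gb*2, 512]|min)|int }}G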


Setting

#SBATCH --mem={{ (ram_gb*2)|int }}G

seems like overkill to me, and it could backfire: the doubled request is applied to every job in the lane, which wastes resources (i.e. memory) and can also leave your jobs waiting in queues much longer.

In my case I’ve changed the third parameter of the job.set_resources_needed() call in class builder(bc.builderbase) in the file cryosparc_master/cryosparc_compute/jobs/refine/newbuild.py, as shown below:

def recompute_resources(job):
    """ required. should have no side effects and should overwrite the resources based on params and inputs """
    params = bc.com.get_merged_params(job)
    # third argument is the memory request, bumped above the stock estimate
    job.set_resources_needed(4, 1, 36000, params.get('compute_use_ssd', True))

Nevertheless, it would be nice if there were an additional box for compute parameters in the “build new job” menu, alongside the other parameters such as particle files etc.
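
Another option that avoids both patching CryoSPARC internals and inflating every job in the default lane is to register a second cluster lane whose template simply requests more memory, and queue only the memory-hungry job types to it. Roughly (the directory and lane name are just placeholders):

# copy the existing cluster_info.json and cluster_script.sh into a new directory,
# change the lane name (the "name" field in cluster_info.json) to e.g. slurm-highmem,
# raise the --mem line in cluster_script.sh, then register it:
cd /path/to/slurm-highmem
cryosparcm cluster connect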
