3D Flex Data prep - RAM unavailable

Hi,

3D Flex was released exactly at the right time for us, as it is precisely what our project needs right now!
Unfortunately, like some others, I am having trouble with the RAM requirements. I tried running 3D Flex Data Prep, but it does not start (“RAM not available”). The workstation I am using has 24 CPU cores, 32 GB RAM, and 1 GPU, so it is not surprising that RAM is the limiting factor.

I therefore tried decreasing the number of particles from 460,000 first to 100,000 and then to 10,000. Neither job started. My original particle box size is 256 pixels; in the data prep jobs I specified cropping to 220 px and resizing to a training box size of 50 px. I have also tested re-extracting to a box size of 50 px with an Extract Particles job and using that as input for the data prep job. This also did not start, not even when I additionally limited the number of 50 px particles to 10,000. Is this because of the 32 GB of RAM? It seems quite extreme to me that not even the jobs with severely limited input data would start.
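For reference, here is a rough back-of-the-envelope estimate of the raw particle stack size (assuming float32 images of box × box pixels each; this ignores whatever working buffers or copies the data prep job actually allocates, so it is only a lower bound for illustration):

```python
# Rough lower-bound estimate of particle stack size in memory.
# Assumes one float32 (4-byte) value per pixel and box^2 pixels per particle;
# actual job memory use will be larger due to working copies and buffers.
def stack_size_gb(n_particles: int, box_px: int, bytes_per_pixel: int = 4) -> float:
    return n_particles * box_px * box_px * bytes_per_pixel / 1024**3

print(f"{stack_size_gb(10_000, 50):.2f} GB")   # ~0.09 GB for 10,000 particles at 50 px
print(f"{stack_size_gb(460_000, 256):.0f} GB")  # ~449 GB for the full 460,000-particle stack at 256 px
```

So the cropped, limited test sets themselves are tiny compared to 32 GB, which is why the failure to start surprises me.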

How can I tell the required memory for this job (and for the other 3D Flex jobs)? I am mainly asking to see whether it would be worth upgrading our RAM, and if so, by how much. Are there any workarounds to limit memory usage, e.g. by splitting the job somehow?

I would appreciate some feedback on this. Many thanks in advance.
Katharina

Welcome to the forum @katsie
Please see the response to a related question here: 3DFlex train job - RAM unavailable - #2 by apunjani.

I know this is super late, but for those who are newly encountering this problem: you can go into the cryosparc_master and cryosparc_worker folders and, in each, go into cryosparc_compute. Then find the folder for the specific job or job set (flex_refinement in this case) and edit the build.py file. Go through all the job_params(#,#,#,#) lines and edit the third value to reflect your RAM; the first slot is CPUs, the second slot is GPUs, and the third slot is RAM in MB. You will see that for this job there are 48000 and 64000 values in the third slot. For 32 GB systems change all of these to 24000; for 48 GB systems change them to 32000 (you will always have less RAM available than installed). You can do this for any job type. I used to run small pilot jobs with binned data on 2 GB GPUs.
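As a sketch of the kind of edit described above: the path, folder name, and job_params(...) signature below are taken from @schiracha's description rather than verified against the CryoSPARC source, so check them against your own installation before changing anything.

```python
# Hypothetical helper illustrating the edit described in the post above.
# Assumes build.py contains resource-reservation calls of the form
#   job_params(<cpus>, <gpus>, <ram_mb>, ...)
# The post says to apply the same change under both cryosparc_master and
# cryosparc_worker; the exact folder name for the 3D Flex jobs may differ.
import re
from pathlib import Path

BUILD_PY = Path("cryosparc_worker/cryosparc_compute/jobs/flex_refinement/build.py")  # adjust to your install
NEW_RAM_MB = 24000  # e.g. 24000 on a 32 GB machine, leaving headroom for the OS

def lower_ram_reservation(text: str, new_ram_mb: int) -> str:
    """Replace the third argument (RAM in MB) of every job_params(...) call."""
    pattern = re.compile(r"(job_params\(\s*\d+\s*,\s*\d+\s*,\s*)(\d+)")
    return pattern.sub(lambda m: f"{m.group(1)}{new_ram_mb}", text)

src = BUILD_PY.read_text()
BUILD_PY.write_text(lower_ram_reservation(src, NEW_RAM_MB))
```

Keep a backup of the original build.py, since (as noted below) such edits can be overwritten by updates and can let a job start with less memory than it actually needs.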

We would like to caution against changes to files inside the cryosparc_master/ and cryosparc_worker/ directories, with the exception of the ./config.sh files.
Changes proposed by @schiracha might:

  • not persist after software updates
  • increase the probability of a job failure because the job was started with an insufficient resource reservation