I did all the processing in CS, so I don’t understand why the uid is missing.

```
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 95, in cryosparc_master.cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/motioncorrection/run_reference_motion.py", line 322, in cryosparc_master.cryosparc_compute.jobs.motioncorrection.run_reference_motion.run_reference_motion_correction
  File "cryosparc_master/cryosparc_compute/jobs/motioncorrection/refmotion.py", line 764, in cryosparc_master.cryosparc_compute.jobs.motioncorrection.refmotion.mainfn_param_optimize
  File "cryosparc_master/cryosparc_compute/jobs/motioncorrection/refmotion.py", line 762, in cryosparc_master.cryosparc_compute.jobs.motioncorrection.refmotion.mainfn_param_optimize.estimate_memory
UnboundLocalError: local variable 'uid' referenced before assignment
```
Could you please email us the corresponding job report?
Sure. What is your email address?
Hi @CleoShen, what do you have the in-memory cache size parameter set to? This may be a bug.
I use the default, and we have 102 GB. Is that OK?
This is a bug, sorry about that. I’ll let you know when a fix is available.
By the way are the movies that you’re using especially huge (high pixel count / high number of frames)? If I’m reading this right, this bug would only happen if the memory requirement of a single movie exceeds the in-memory cache size. The default in-memory cache size is 80 GB, and I’d be sort of amazed if a single movie needed that much…
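For scale, a back-of-envelope estimate shows why a single movie hitting 80 GB would be surprising. The detector dimensions below (5760 × 4092, a common direct-detector format) and float32 frames are assumptions for illustration, not values from this thread:

```python
# Hypothetical back-of-envelope estimate of one movie's decompressed size in RAM.
# Detector dimensions are an assumed example (5760 x 4092 pixels), not from the thread.
width, height = 5760, 4092
frames = 50              # frame count reported later in this thread
bytes_per_pixel = 4      # float32 once decompressed

movie_bytes = width * height * frames * bytes_per_pixel
print(f"{movie_bytes / 2**30:.1f} GiB")  # ~4.4 GiB, far below an 80 GB cache
```

Even with generous assumptions, a 50-frame movie is in the single-digit-GiB range, an order of magnitude below the default cache size.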
Hi Harris, sorry for the late response. Thank you for your reply.
I used 1.1 Å/pix and 50 frames; I don’t think the movies are huge. Also, I’m running CS on a computer cluster, and the node I’m using is pretty powerful, so I don’t think my setup is what’s causing the error.
You’re right, those parameters all sound reasonable. Another question: how many particles do you have per micrograph? Are they very crowded? Could you describe the processing pipeline before the reference motion job?
There is a bug here, as I mentioned before, but it would only be encountered if the very first movie used in the hyperparameter search needs more RAM than the in-memory cache size parameter allows. Since that parameter is 80 GB by default, this is very unlikely, and it suggests that something about the data is unusually large. My reason for asking these follow-up questions is that I’m concerned that even if we resolve the bug, this data may still not process correctly due to extremely high memory demands, and I’m trying to figure out why that might be the case.
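The failure mode described here (a local variable only bound inside a loop body that never executes) is the classic way Python raises this error. A minimal sketch, with hypothetical names rather than CryoSPARC’s actual code:

```python
from dataclasses import dataclass

@dataclass
class Movie:
    uid: int
    mem_required: int  # bytes needed to hold this movie in RAM

def pick_first_cacheable(movies, cache_bytes):
    """Hypothetical sketch of the bug pattern: 'uid' is only bound
    inside the loop, so it stays unbound if no movie fits the cache."""
    for movie in movies:
        if movie.mem_required <= cache_bytes:
            uid = movie.uid
            break
    # If the very first movie already exceeds the cache size, the loop
    # never assigns 'uid', and this line raises UnboundLocalError.
    return uid

# A movie needing more RAM than the 80 GB cache reproduces the pattern:
try:
    pick_first_cacheable([Movie(uid=1, mem_required=100 * 2**30)], 80 * 2**30)
except UnboundLocalError as exc:
    print("bug reproduced:", exc)
```

This is only an illustration of the error class, not the real `estimate_memory` implementation; it matches the reported condition that the bug triggers only when a single movie exceeds the cache size.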
Around 25 particles per micrograph, and I used Topaz for particle picking, which lets me set the minimum distance between particles, so the picks have little overlap.
I imported the raw movies into CS, then ran Patch Motion Correction, Patch CTF, manual picking, and 2D classification; I selected 2D classes for Topaz Train, then ran Topaz Extract, several rounds of 2D, model-based heterogeneous refinement, several rounds of NU-refine, global CTF, local CTF, and finally Reference-Based Motion Correction.
Two things I need to point out: 1. my target is a large, 27 nm protein; 2. the map is only at 6.5 Å resolution before Reference-Based Motion Correction. I don’t know if these two points induce the error.
This bug has been fixed in CryoSPARC v4.4.1, released today.
After updating to 4.4.1 and re-running the job, the error is different now:

```
[CPU: 2.14 GB]  Working with 2779 movies containing 12505 particles
[CPU: 2.14 GB]  Computing intended data cache configuration
[CPU: 189.1 MB] ====== Job process terminated abnormally.
```
Do you know how to fix it?
Are there any additional details in the job log (click on the metadata tab, and then click “log”)?
`DIE: allocate: out of memory (reservation insufficient)`
The compute settings I used were 1 GPU, a 20 GB threshold, and an 80 GB cache size. The GPU node has 128 GB of memory.