I did all the processing in CS, so I don’t understand why the uid is missing.

```
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 95, in cryosparc_master.cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/motioncorrection/run_reference_motion.py", line 322, in cryosparc_master.cryosparc_compute.jobs.motioncorrection.run_reference_motion.run_reference_motion_correction
  File "cryosparc_master/cryosparc_compute/jobs/motioncorrection/refmotion.py", line 764, in cryosparc_master.cryosparc_compute.jobs.motioncorrection.refmotion.mainfn_param_optimize
  File "cryosparc_master/cryosparc_compute/jobs/motioncorrection/refmotion.py", line 762, in cryosparc_master.cryosparc_compute.jobs.motioncorrection.refmotion.mainfn_param_optimize.estimate_memory
UnboundLocalError: local variable 'uid' referenced before assignment
```
This is a bug, sorry about that. I’ll let you know when a fix is available.
By the way, are the movies that you’re using especially huge (high pixel count / high number of frames)? If I’m reading this right, this bug would only happen if the memory requirement of a single movie exceeds the in-memory cache size. The default in-memory cache size is 80 GB, and I’d be sort of amazed if a single movie needed that much…
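For anyone curious about this class of error: an `UnboundLocalError` like the one above usually means a variable is only assigned inside a branch or loop body that never ran. Below is a minimal, purely hypothetical sketch of that pattern (not the actual refmotion.py code), assuming the failing path looks roughly like "pick the first movie that fits in the cache":

```python
# Hypothetical sketch of the failure pattern behind this UnboundLocalError.
# This is NOT the actual CryoSPARC code, just the general shape of the bug:
# a name bound only inside a conditional that may never execute.

def first_cacheable_movie_uid(movies, cache_bytes=80 * 1024**3):
    for movie in movies:
        if movie.required_bytes <= cache_bytes:
            uid = movie.uid   # only assigned when a movie fits in the cache
            break
    # If no movie (e.g. the very first one tried) fits, 'uid' was never bound:
    # -> UnboundLocalError: local variable 'uid' referenced before assignment
    return uid
```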
Hi Harris, sorry for the late response. Thank you for your reply.
I used 1.1 Å/pix and 50 frames, so I don’t think the movies are huge. Also, I’m running CS on a computer cluster, and the node I’m using is pretty powerful, so I don’t think limited resources on my end are what triggered the error.
You’re right, those parameters all sound reasonable. Another question: how many particles do you have per micrograph? Are they very crowded? Could you describe the processing pipeline before the reference motion job?
There is a bug here, as I mentioned before, but it would only be encountered if the very first movie used in the hyperparameter search needs more RAM than the in-memory cache size parameter. Since that parameter is 80 GB by default, this is very unlikely, and it suggests that something about the data is unusually large. I’m asking these follow-up questions because I’m concerned that even if we resolve the bug, this data may still fail to process due to extremely high memory demands, and I’m trying to figure out why that might be the case.
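As a rough sanity check (my own back-of-envelope numbers, assuming a K3-sized detector of 5760 × 4092 pixels — your actual dimensions may differ), a single 50-frame movie held in memory as float32 comes out nowhere near 80 GB:

```python
# Rough estimate of the RAM needed to hold one decompressed movie in float32.
# Detector dimensions are an assumption (Gatan K3: 5760 x 4092); substitute yours.
width, height, frames = 5760, 4092, 50
bytes_per_pixel = 4                      # float32
movie_bytes = width * height * frames * bytes_per_pixel
print(f"~{movie_bytes / 1024**3:.1f} GiB per movie")   # ~4.4 GiB, far below 80 GB
```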
Around 25 particles/micrograph, and I used Topaz for particle picking, which lets me set a minimum distance between particles so the picks have little overlap.
I imported the raw movies into CS, then ran Patch Motion Correction, Patch CTF, manual picking, and 2D classification; selected 2D classes for Topaz Train; then did Topaz Extract, several more rounds of 2D classification, model-based heterogeneous refinement, several rounds of NU-refinement, global CTF refinement, local CTF refinement, and finally Reference-Based Motion Correction (RefBasMotCor).
Two things I should point out: 1. my target is a large protein, about 27 nm; 2. the map is only at 6.5 Å resolution before RefBasMotCor. I don’t know if these two points contribute to the error.
Yeah, I was concerned that this might happen, as I mentioned in my post last month. Unfortunately, I think your machine doesn’t quite have enough RAM to process this dataset. You’ve indicated that the protein is large… what box size are you working at?
The pixel size is 1.11 Å/pix and the box size is 288 pixels. We are running CS on a computer cluster; here is the information about the nodes. Do you still think the RAM is not big enough?
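For what it’s worth, here is the same kind of back-of-envelope estimate for the per-micrograph particle data at these settings (288 px box, 50 frames, ~25 particles per micrograph, float32); treat it as a rough sketch rather than a statement of how CryoSPARC actually budgets memory:

```python
# Rough per-micrograph estimate of particle frame data at the stated settings.
# This is only a ballpark; CryoSPARC's actual memory accounting may differ.
box, frames, particles = 288, 50, 25
bytes_per_pixel = 4                         # float32
per_micrograph = box * box * frames * particles * bytes_per_pixel
print(f"~{per_micrograph / 1024**2:.0f} MiB per micrograph")   # ~400 MiB
```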