Reference-Based Motion Correction: UID missing

I did all the processing in CS, so I don't understand why the UID is missing.

```
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/", line 95, in
  File "cryosparc_master/cryosparc_compute/jobs/motioncorrection/", line 322, in
  File "cryosparc_master/cryosparc_compute/jobs/motioncorrection/", line 764, in
  File "cryosparc_master/cryosparc_compute/jobs/motioncorrection/", line 762, in
UnboundLocalError: local variable 'uid' referenced before assignment
```
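For context, this class of Python error typically means a variable was only assigned inside a branch or loop body that never ran. A minimal sketch of the pattern (hypothetical names and logic, not CryoSPARC's actual code):

```python
def pick_cached_uid(cache_items, max_bytes):
    """Hypothetical sketch of the failure pattern -- not CryoSPARC's code."""
    for item in cache_items:
        # 'uid' is assigned only when some item fits under max_bytes...
        if item["size"] <= max_bytes:
            uid = item["uid"]
            break
    # ...so if nothing fit, this return raises UnboundLocalError.
    return uid

try:
    # Nothing fits under the limit -> the loop never assigns 'uid'.
    pick_cached_uid([{"uid": 1, "size": 100}], max_bytes=10)
except UnboundLocalError as e:
    print(type(e).__name__)  # UnboundLocalError
```

This would be consistent with the bug condition described later in the thread: the error surfaces only when no movie fits within the configured cache size.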


Hi @CleoShen.

Could you please email us the corresponding job report?

Sure. What is your email address?

Hi @CleoShen, what do you have the in-memory cache size parameter set to? This may be a bug.

I use the default, and we have 102 GB. Is that OK?

Hi @CleoShen

This is a bug, sorry about that. I’ll let you know when a fix is available.

By the way are the movies that you’re using especially huge (high pixel count / high number of frames)? If I’m reading this right, this bug would only happen if the memory requirement of a single movie exceeds the in-memory cache size. The default in-memory cache size is 80 GB, and I’d be sort of amazed if a single movie needed that much…

– Harris
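For a sense of scale, the raw footprint of a single decompressed movie can be sketched with simple arithmetic. The sensor dimensions below are an assumed K3-like example, not the poster's actual camera:

```python
def movie_ram_gb(width_px, height_px, n_frames, bytes_per_px=4):
    """Raw footprint of one movie decompressed to float32 (4 bytes/pixel)."""
    return width_px * height_px * n_frames * bytes_per_px / 1024**3

# Assumed K3-like sensor (5760 x 4092 px) and 50 frames -- illustrative only:
print(round(movie_ram_gb(5760, 4092, 50), 1))  # 4.4 (GB), far below 80 GB
```

By this estimate, a typical 50-frame movie sits in the single-digit-GB range, which is why a single movie exceeding an 80 GB cache would be surprising.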

Hi Harris, sorry for the late response. Thank you for your reply.
I used 1.1 Å/pix and 50 frames; I don't think they are huge. Also, I'm running CS on computer clusters, and the node I'm using is pretty powerful, so that's probably not what's causing the error.


You’re right, those parameters all sound reasonable. Another question: how many particles do you have per micrograph? Are they very crowded? Could you describe the processing pipeline before the reference motion job?

There is a bug here, as I mentioned before, but it would only be encountered if the very first movie used in the hyperparameter search needs more RAM than the in-memory cache size parameter allows. Since that parameter is 80 GB by default, this is very unlikely, and it suggests that something about the data is unusually large. I'm asking these follow-up questions because I'm concerned that even if we resolve the bug, this data may still not process correctly due to extremely high memory demands, and I'm trying to figure out why that might be the case.

Around 25 particles per micrograph, and I used Topaz for particle picking, which lets me set a minimum distance between particles so the picks have little overlap.

I imported the raw movies into CS, then ran Patch Motion Correction, Patch CTF Estimation, manual picking, and 2D classification; selected 2D classes for Topaz Train, then ran Topaz Extract, several rounds of 2D classification, model-based heterogeneous refinement, several rounds of NU-Refine, Global CTF Refinement, Local CTF Refinement, and finally Reference-Based Motion Correction.

Two things I should point out: 1. my target is a large, ~27 nm protein; 2. the map is only at 6.5 Å resolution before Reference-Based Motion Correction. I don't know if either of these points is causing the error.


This bug has been fixed in CryoSPARC v4.4.1, released today.


After updating to v4.4.1 and re-running the job, the error is different now. It becomes:

```
[CPU: 2.14 GB]  Working with 2779 movies containing 12505 particles

[CPU: 2.14 GB]  Computing intended data cache configuration

[CPU: 189.1 MB] ====== Job process terminated abnormally.
```
Do you know how to fix it?

Are there any additional details in the job log (click on the metadata tab, and then click “log”)?

`DIE: allocate: out of memory (reservation insufficient)`

The compute settings I used were 1 GPU, a 20 GB threshold, and an 80 GB cache size.
The GPU node has 128 GB of memory.

Yeah, I was concerned that this might happen, as I mentioned in my post last month. I think your machine doesn’t quite have enough RAM to process this dataset unfortunately. You’ve indicated that the protein is large… what box size are you working at?

The pixel size is 1.11 Å/pix, and the box size is 288 pixels. We are running the CS on computer clusters. Here is the information about the nodes. Do you still think the RAM is not big enough?
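For a rough sense of scale, here is a back-of-envelope estimate of the particle data alone, using the numbers from this thread (12505 particles, a 288 px box, and ~40 frames per movie). This is plain arithmetic under an assumed float32 representation, not CryoSPARC's actual memory model:

```python
def particle_stack_gb(n_particles, box_px, n_frames, bytes_per_px=4):
    # Crude lower bound: every particle's full per-frame crop held as
    # float32. NOT CryoSPARC's actual accounting -- just arithmetic.
    return n_particles * box_px**2 * n_frames * bytes_per_px / 1024**3

# 12505 particles, 288 px box, ~40 frames per movie:
print(round(particle_stack_gb(12505, 288, 40)))  # ~155 (GB), over the 80 GB cache
```

Even this crude lower bound lands well above the 80 GB default cache size, which would be consistent with the out-of-memory failure despite individually modest movies.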

Hi @CleoShen, How many frames in your movies?

40 frames. TIFF format.