Memory error while importing movies

Dear all,

I have a problem while importing movies. Every time I import movies together with the gain reference file, the job fails with a memory error. If I import movies without a reference file, it works smoothly. Also, importing a single movie plus the reference works; it only fails when more than one movie is imported with a reference. In that case the first movie succeeds and the error appears from the second one onward. But we need to import the gain reference file along with many movies to get a good reconstruction. Here is the error information:

[CPU: 6.34 GB]   Traceback (most recent call last):
  File "cryosparc2_master/cryosparc2_compute/run.py", line 82, in cryosparc2_compute.run.main
  File "cryosparc2_compute/jobs/imports/run.py", line 649, in run_import_movies_or_micrographs
    imgdata = mrc.read_mrc(abs_path)[1].sum(axis=0) * gainref
  File "/home/pangguot/cryosparc/cryosparc2_master/deps/anaconda/lib/python2.7/site-packages/numpy/core/_methods.py", line 32, in _sum
    return umr_sum(a, axis, dtype, out, keepdims)
MemoryError
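
(For context: that line loads the entire movie stack into memory and sums all frames before applying the gain reference, so each movie must fit in RAM at once. Below is a rough back-of-the-envelope sketch of the peak footprint; the frame dimensions and 4-byte pixel size are hypothetical, chosen to resemble a typical counting-mode movie.)

    # Back-of-the-envelope peak RAM for the failing line (hypothetical sizes).
    n_frames, ny, nx = 40, 4092, 5760      # frames per movie, frame height/width in pixels
    stack_bytes = n_frames * ny * nx * 4   # whole stack in memory, assuming 4 bytes per pixel
    sum_bytes = ny * nx * 8                # summed image; numpy promotes small integer dtypes when summing
    gain_bytes = ny * nx * 4               # gain reference array
    total = stack_bytes + sum_bytes + gain_bytes
    print("approx. peak RAM per movie: %.1f GB" % (total / 1024.0**3))
    # -> roughly 3.8 GB for these sizes, on top of whatever else the node is running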

I want to know if there is any way to adjust how much memory this import step uses, or where it runs?

Thanks

Hi @kortal,

Can you tell us how much CPU RAM is available on the machine this job is running on (most likely the master node)?

If you have a dedicated compute node with more memory, you can set the CRYOSPARC_DISABLE_IMPORT_ON_MASTER flag, which will let you queue the Import Movies job to that node instead of the master. More details on how to enable this flag are in this post:
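
(A minimal sketch of how such a flag is typically enabled, assuming the standard mechanism of exporting variables in the master's config.sh and restarting; check the linked post for the exact steps on your version.)

    # In cryosparc2_master/config.sh on the master node:
    export CRYOSPARC_DISABLE_IMPORT_ON_MASTER=true

    # Restart cryoSPARC so the setting takes effect:
    cryosparcm restart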

Please also see cryoSPARC’s hardware requirements here:

Hi @stephan,

Thank you so much! The problem is solved when I queue the job on an allocated compute node instead of the master. The master node probably doesn't have enough free memory, since it is actually also the login node of the cluster. The login node does have 125 GB of memory, but it is shared with many other users, so we can't use it to run large-memory jobs.
