AssertionError in patch motion correction using T20 proteasome tutorial v2.15.0

I installed version 2.15.0 on my computer and tried to run the program using the T20 proteasome tutorial data. The first step, “import movies”, went okay. However, in the next step, “patch motion correction”, the program stopped with an AssertionError. I tried using only 1 movie, but the program still gave me the same error.

Hardware specifications are as follows: 8 CPUs, 16 GB RAM, 2 GPUs (GTX 1050 3GB RAM).

Details of the screen output are shown below (with the “reduce GPU memory” and “further reduce GPU memory” options turned on):

[CPU: 90.1 MB] Project P2 Job J32 Started
[CPU: 90.1 MB] Master running v2.15.0+200710, worker running v2.15.0+200710
[CPU: 90.4 MB] Running on lane default
[CPU: 90.4 MB] Resources allocated:
[CPU: 90.4 MB] Worker: ..***
[CPU: 90.4 MB] CPU : [0, 1, 2, 3, 4, 5]
[CPU: 90.4 MB] GPU : [0]
[CPU: 90.4 MB] RAM : [0, 1]
[CPU: 90.4 MB] SSD : False
[CPU: 90.4 MB] --------------------------------------------------------------
[CPU: 90.4 MB] Importing job module for job type patch_motion_correction_multi…
[CPU: 166.1 MB] Job ready to run
[CPU: 166.1 MB] ***************************************************************
[CPU: 166.4 MB] Job will process this many movies: 20
[CPU: 166.4 MB] parent process is 20714
[CPU: 134.3 MB] Calling CUDA init from 20746
[CPU: 166.6 MB] Outputting partial results now…
[CPU: 166.7 MB] Traceback (most recent call last):
File "cryosparc2_worker/cryosparc2_compute/run.py", line 85, in cryosparc2_compute.run.main
File "cryosparc2_master/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 363, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi
AssertionError: Child process with PID 20746 has terminated unexpectedly!

Would this error be due to the hardware specification or some bug?

Hi @sangho,

It’s likely that something went wrong during the installation. Could you please execute the following command on the cryosparc computer? You’ll need to be logged in as the cryosparc user.

cryosparcm joblog Pxx Jyy

where xx and yy are replaced with the project number and job number respectively (probably P1 J2 or something like that).

You may need to use the full path to where cryosparcm is installed, depending on your PATH settings. If you get “command not found” or something similar, try

find ~ -name "cryosparcm"

which should tell you where cryosparcm is located. Again, you’ll need to be logged in as the cryosparc user for this to work.
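(For anyone else following along: once `find` reports the location, you can invoke the binary by its full path or prepend its directory to PATH for the session. A minimal sketch of that, with a graceful fallback when nothing is found; the `-maxdepth` limit is just to keep the search quick:)

```shell
# Locate the cryosparcm wrapper somewhere under the home directory
BIN_PATH=$(find "$HOME" -maxdepth 5 -name cryosparcm -type f 2>/dev/null | head -n 1)

# If found, add its directory to PATH for the current shell session
if [ -n "$BIN_PATH" ]; then
    export PATH="$(dirname "$BIN_PATH"):$PATH"
fi

# 'command -v' prints the resolved path, or the fallback message if still absent
command -v cryosparcm || echo "cryosparcm not on PATH"
```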

Once you get the job’s log, please paste it here - it should contain more detailed information as to the nature of the error.

Thanks,
Harris

Hi Harris,

Thanks for your quick response. Below is the output of the job’s log:

Traceback (most recent call last):
File "/home/sangho/cryosparc/cryosparc2_master/deps/anaconda/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/home/sangho/cryosparc/cryosparc2_master/deps/anaconda/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/sangho/cryosparc/cryosparc2_master/cryosparc2_compute/client.py", line 83, in <module>
print eval("cli."+command)
File "<string>", line 1, in <module>
File "/home/sangho/cryosparc/cryosparc2_master/cryosparc2_compute/client.py", line 57, in func
assert False, res['error']
AssertionError: {u'message': u"OtherError: argument of type 'NoneType' is not iterable", u'code': 500, u'data': None, u'name': u'OtherError'}

Hmm… that error message actually means that the job’s log cannot be found. Could you double check the command you entered? For example, if I wanted to view the output of Project 1, Job 12, the command would be:

cryosparcm joblog P1 J12

Your installation is a single-workstation installation, right? You don’t have a cluster of several computers?

Harris

Hi Harris,

Sorry for the confusion. Here is the correct output from the problematic job:

========= monitor process now starting main process
MAINPROCESS PID 20714
========= monitor process now waiting for main process
MAIN PID 20714
motioncorrection.run_patch cryosparc2_compute.jobs.jobregister


Running job on hostname %s sangho-sparc.skku.edu
Allocated Resources : {u'lane': u'default', u'target': {u'monitor_port': None, u'lane': u'default', u'name': u'sangho-sparc.skku.edu', u'title': u'Worker node sangho-sparc.skku.edu', u'resource_slots': {u'GPU': [0, 1], u'RAM': [0, 1], u'CPU': [0, 1, 2, 3, 4, 5, 6, 7]}, u'hostname': u'sangho-sparc.skku.edu', u'worker_bin_path': u'/home/sangho/cryosparc/cryosparc2_worker/bin/cryosparcw', u'cache_path': None, u'cache_quota_mb': None, u'resource_fixed': {u'SSD': False}, u'gpus': [{u'mem': 3163422720, u'id': 0, u'name': u'GeForce GTX 1050'}, {u'mem': 3166699520, u'id': 1, u'name': u'GeForce GTX 1050'}], u'cache_reserve_mb': 10000, u'type': u'node', u'ssh_str': u'sangho@sangho-sparc.skku.edu', u'desc': None}, u'license': True, u'hostname': u'sangho-sparc.skku.edu', u'slots': {u'GPU': [0], u'RAM': [0, 1], u'CPU': [0, 1, 2, 3, 4, 5]}, u'fixed': {u'SSD': False}, u'lane_type': u'default', u'licenses_acquired': 1}
/home/sangho/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cublas.py:284: UserWarning: creating CUBLAS context to get version number
warnings.warn('creating CUBLAS context to get version number')
Process Process-1:1:
Traceback (most recent call last):
File "/home/sangho/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/multiprocessing/process.py", line 267, in _bootstrap
self.run()
File "/home/sangho/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "cryosparc2_compute/jobs/pipeline.py", line 155, in process_work_simple
process_setup(proc_idx) # do any setup you want on a per-process basis
File "cryosparc2_master/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 80, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi.process_setup
File "cryosparc2_compute/engine/__init__.py", line 8, in <module>
from engine import *
File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 12, in init cryosparc2_compute.engine.engine
File "cryosparc2_worker/cryosparc2_compute/engine/gfourier.py", line 6, in init cryosparc2_compute.engine.gfourier
File "/home/sangho/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/fft.py", line 20, in <module>
from . import misc
File "/home/sangho/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/misc.py", line 25, in <module>
from . import cublas
File "/home/sangho/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cublas.py", line 292, in <module>
_cublas_version = int(_get_cublas_version())
File "/home/sangho/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cublas.py", line 285, in _get_cublas_version
h = cublasCreate()
File "/home/sangho/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cublas.py", line 203, in cublasCreate
cublasCheckStatus(status)
File "/home/sangho/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cublas.py", line 179, in cublasCheckStatus
raise e
cublasNotInitialized
Traceback (most recent call last):
File "/home/sangho/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/multiprocessing/queues.py", line 268, in _feed
send(obj)
IOError: [Errno 32] Broken pipe
Traceback (most recent call last):
File "/home/sangho/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/multiprocessing/queues.py", line 268, in _feed
send(obj)
IOError: [Errno 32] Broken pipe
Traceback (most recent call last):
File "/home/sangho/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/multiprocessing/queues.py", line 268, in _feed
send(obj)
IOError: [Errno 32] Broken pipe
Traceback (most recent call last):
File "/home/sangho/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/multiprocessing/queues.py", line 268, in _feed
send(obj)
IOError: [Errno 32] Broken pipe
Traceback (most recent call last):
File "/home/sangho/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/multiprocessing/queues.py", line 268, in _feed
send(obj)
IOError: [Errno 32] Broken pipe
Traceback (most recent call last):
File "/home/sangho/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/multiprocessing/queues.py", line 268, in _feed
send(obj)
IOError: [Errno 32] Broken pipe
========= sending heartbeat
========= sending heartbeat
========= sending heartbeat
**** handle exception rc
set status to failed
Traceback (most recent call last):
File "cryosparc2_worker/cryosparc2_compute/run.py", line 85, in cryosparc2_compute.run.main
File "cryosparc2_master/cryosparc2_compute/jobs/motioncorrection/run_patch.py", line 363, in cryosparc2_compute.jobs.motioncorrection.run_patch.run_patch_motion_correction_multi
AssertionError: Child process with PID 20746 has terminated unexpectedly!
========= main process now complete.
========= monitor process now complete.

Hi @sangho,

It looks like some dependency libraries aren’t working correctly. Usually this means that the cryoSPARC installation failed partway through. I’d recommend checking that you have a compatible CUDA version installed, and then re-installing cryoSPARC. The installation guide can be found here: https://cryosparc.com/docs/reference/install
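(A quick, hedged sketch of how to check which CUDA toolkit and driver are installed; the exact supported version range depends on your cryoSPARC release, so compare against the install guide:)

```shell
# Report the CUDA toolkit version, if nvcc is on PATH
if command -v nvcc >/dev/null; then
    nvcc --version | grep -i release
else
    echo "nvcc not found - CUDA toolkit may not be installed"
fi

# Report the driver version and the maximum CUDA version it supports
if command -v nvidia-smi >/dev/null; then
    nvidia-smi | head -n 3
else
    echo "nvidia-smi not found - NVIDIA driver may not be installed"
fi
```

If the toolkit needs to change, the worker can be repointed at a new CUDA installation (cryoSPARC v2 ships a `cryosparcw newcuda <path>` command for this; see the install guide for details).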

Hi @hsnyder

Thanks for your advice. It turned out that the CUDA version was not compatible, which I had not noticed until now. cryoSPARC now works.

The only remaining trouble is that my NVIDIA card has only 3 GB of memory, which apparently makes it impossible to proceed to the motion correction step.

Hi @sangho,

Yes, unfortunately 3 GB is not enough. We recommend cards with at least 11 GB to avoid being constrained by GPU memory.
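(For anyone checking their own hardware against this recommendation, per-card memory can be queried like so; a small sketch that falls back gracefully when no NVIDIA driver is present:)

```shell
# List each GPU's name and total memory in MiB, one card per line
if command -v nvidia-smi >/dev/null; then
    nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
    echo "nvidia-smi not available"
fi
```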

Harris