Dear cryoSPARC users,
I was trying to run a Patch CTF Estimation (Multi) job on a set of micrographs when I encountered the error below.
License is valid.
Launching job on lane default target Nannenga1.dhcp.asu.edu ...
Running job on master node hostname Nannenga1.dhcp.asu.edu
[CPU: 93.3 MB] Project P1 Job J3 Started
[CPU: 93.3 MB] Master running v2.15.0, worker running v2.15.0
[CPU: 93.3 MB] Running on lane default
[CPU: 93.3 MB] Resources allocated:
[CPU: 93.3 MB] Worker: Nannenga1.dhcp.asu.edu
[CPU: 93.3 MB] CPU : [0, 1, 2, 3, 4, 5]
[CPU: 93.3 MB] GPU : [0, 1, 2]
[CPU: 93.3 MB] RAM : [0]
[CPU: 93.3 MB] SSD : False
[CPU: 93.3 MB] --------------------------------------------------------------
[CPU: 93.3 MB] Importing job module for job type patch_ctf_estimation_multi...
[CPU: 163.0 MB] Job ready to run
[CPU: 163.0 MB] ***************************************************************
[CPU: 163.3 MB] Job will process this many micrographs: 1699
[CPU: 163.3 MB] parent process is 11784
[CPU: 136.6 MB] Calling CUDA init from 11816
[CPU: 136.6 MB] Calling CUDA init from 11818
[CPU: 136.6 MB] Calling CUDA init from 11817
[CPU: 272.9 MB] -- 1.0: processing 0 of 1699: J2/imported/12-3_00001_Jul16_17.07.15_sumavg_DW.mrc
loading /mnt/SSD/cryosparc/P1/J2/imported/12-3_00001_Jul16_17.07.15_sumavg_DW.mrc
Loading raw mic data from J2/imported/12-3_00001_Jul16_17.07.15_sumavg_DW.mrc ...
Done in 0.03s
Processing ...
[CPU: 272.9 MB] -- 2.0: processing 1 of 1699: J2/imported/12-3_00002_Jul16_17.07.38_sumavg_DW.mrc
loading /mnt/SSD/cryosparc/P1/J2/imported/12-3_00002_Jul16_17.07.38_sumavg_DW.mrc
Loading raw mic data from J2/imported/12-3_00002_Jul16_17.07.38_sumavg_DW.mrc ...
Done in 0.03s
Processing ...
[CPU: 272.9 MB] -- 0.0: processing 2 of 1699: J2/imported/12-3_00003_Jul16_17.08.01_sumavg_DW.mrc
loading /mnt/SSD/cryosparc/P1/J2/imported/12-3_00003_Jul16_17.08.01_sumavg_DW.mrc
Loading raw mic data from J2/imported/12-3_00003_Jul16_17.08.01_sumavg_DW.mrc ...
Done in 0.03s
Processing ...
[CPU: 304.3 MB] Traceback (most recent call last):
File "cryosparc2_compute/jobs/runcommon.py", line 1685, in run_with_except_hook
run_old(*args, **kw)
File "/home/cryosparc_user/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "cryosparc2_compute/jobs/pipeline.py", line 165, in thread_work
work = processor.process(item)
File "cryosparc2_worker/cryosparc2_compute/jobs/ctf_estimation/run.py", line 115, in cryosparc2_compute.jobs.ctf_estimation.run.run.ctfworker.process
File "cryosparc2_worker/cryosparc2_compute/jobs/ctf_estimation/run.py", line 118, in cryosparc2_compute.jobs.ctf_estimation.run.run.ctfworker.process
File "cryosparc2_worker/cryosparc2_compute/jobs/ctf_estimation/run.py", line 119, in cryosparc2_compute.jobs.ctf_estimation.run.run.ctfworker.process
File "cryosparc2_worker/cryosparc2_compute/jobs/ctf_estimation/patchctf.py", line 66, in cryosparc2_compute.jobs.ctf_estimation.patchctf.patchctf
File "cryosparc2_worker/cryosparc2_compute/jobs/ctf_estimation/patchctf.py", line 114, in cryosparc2_compute.jobs.ctf_estimation.patchctf.patchctf
File "cryosparc2_worker/cryosparc2_compute/engine/newgfourier.py", line 31, in cryosparc2_compute.engine.newgfourier.get_plan_R2C_2D_outofplace
File "/home/cryosparc_user/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/fft.py", line 102, in __init__
capability = misc.get_compute_capability(misc.get_current_device())
File "/home/cryosparc_user/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/misc.py", line 254, in get_current_device
return drv.Device(cuda.cudaGetDevice())
File "/home/cryosparc_user/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cudart.py", line 767, in cudaGetDevice
cudaCheckStatus(status)
File "/home/cryosparc_user/cryosparc_user/software/cryosparc/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cudart.py", line 565, in cudaCheckStatus
raise e
cudaErrorInsufficientDriver
[The identical traceback, ending in cudaErrorInsufficientDriver, is printed twice more, once for each of the other two CUDA worker threads; omitted here for brevity.]
[CPU: 168.5 MB] Outputting partial results now...
[CPU: 166.9 MB] Traceback (most recent call last):
File "cryosparc2_worker/cryosparc2_compute/run.py", line 85, in cryosparc2_compute.run.main
File "cryosparc2_worker/cryosparc2_compute/jobs/ctf_estimation/run.py", line 258, in cryosparc2_compute.jobs.ctf_estimation.run.run
AssertionError: Child process with PID 11816 has terminated unexpectedly!
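From the traceback, all three worker threads fail inside skcuda's cudaGetDevice call with cudaErrorInsufficientDriver, which as far as I understand means the installed NVIDIA kernel driver is older than what the CUDA runtime used by the worker requires. Here is a minimal check that I believe reproduces the failing driver query outside of a job (just a sketch, assuming cryosparcw call is available in this version and that pycuda is importable in the worker environment, as the traceback suggests):

# Query the driver through the worker's own Python/pycuda stack.
# get_driver_version() reports the highest CUDA version the driver supports, e.g. 9020 for CUDA 9.2.
/home/cryosparc_user/cryosparc_user/software/cryosparc/cryosparc2_worker/bin/cryosparcw call \
  python -c "import pycuda.driver as drv; drv.init(); print drv.get_driver_version()"

If this errors out, or prints a number below 9020, the driver is too old for the CUDA 9.2 toolkit the worker is configured with.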
My cryoSPARC installation details are as follows:
CryoSPARC System master node installed at
/home/cryosparc_user/cryosparc_user/software/cryosparc/cryosparc2_master
Current cryoSPARC version: v2.15.0
cryosparcm process status:
app STOPPED Not started
app_dev STOPPED Not started
command_core RUNNING pid 10853, uptime 0:34:33
command_proxy RUNNING pid 10896, uptime 0:34:30
command_rtp STOPPED Not started
command_vis RUNNING pid 10888, uptime 0:34:31
database RUNNING pid 10776, uptime 0:34:36
watchdog_dev STOPPED Not started
webapp RUNNING pid 10901, uptime 0:34:28
webapp_dev STOPPED Not started
I have set up cryoSPARC to use CUDA 9.2. Is this a CUDA-related error? Any input would be greatly appreciated.
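In case it is useful, these are the checks I plan to run next (the worker path is from my installation; /usr/local/cuda-9.2 is an assumption, substitute your actual CUDA location):

# 1. Check the kernel driver version; my understanding is CUDA 9.2 needs driver ~396.26 or newer on Linux.
nvidia-smi
# 2. Confirm which toolkit the worker is pointed at.
/usr/local/cuda-9.2/bin/nvcc --version
# 3. After updating the driver (or switching to a toolkit the driver supports), re-register the CUDA path:
/home/cryosparc_user/cryosparc_user/software/cryosparc/cryosparc2_worker/bin/cryosparcw newcuda /usr/local/cuda-9.2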