KeyError: <pycuda._driver.Context object at

Hi,

We just installed CryoSPARC 3.1 on a new system with RTX 3090s and CUDA 11.2, and Patch CTF fails with the attached error on test data (pre-aligned micrographs, collected on K3). I have already applied all the latest patches. Thoughts? Gctf runs fine, as does blob picker, so it is not a general pathology with the installation or system.

Cheers
Oli

[CPU: 192.0 MB]  Job will process this many micrographs:  100
[CPU: 192.1 MB]  parent process is 86913
[CPU: 141.2 MB]  Calling CUDA init from 86939
[CPU: 377.3 MB]  -- 0.0: processing 0 of 100: J2/imported/21mar17b_Fran3_00012Gr_00067Sq_v03_00005Hln_00025Enn.frames_patch_aligned_doseweighted.mrc
        loading /home/exx/processing/cryosparc_projects/kook/P1/J2/imported/21mar17b_Fran3_00012Gr_00067Sq_v03_00005Hln_00025Enn.frames_patch_aligned_doseweighted.mrc
        Loading raw mic data from J2/imported/21mar17b_Fran3_00012Gr_00067Sq_v03_00005Hln_00025Enn.frames_patch_aligned_doseweighted.mrc ...
        Done in 0.02s
        Processing ...
[CPU: 394.5 MB]  Traceback (most recent call last):
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/tools.py", line 429, in context_dependent_memoize
    return ctx_dict[cur_ctx][args]
KeyError: <pycuda._driver.Context object at 0x7fe140d3ce70>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 1726, in run_with_except_hook
    run_old(*args, **kw)
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/cryosparc_worker/cryosparc_compute/jobs/pipeline.py", line 186, in thread_work
    work = processor.exec(item)
  File "/usr/local/cryosparc_worker/cryosparc_compute/jobs/pipeline.py", line 43, in exec
    return self.process(item)
  File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py", line 112, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
  File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py", line 118, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
  File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py", line 119, in cryosparc_compute.jobs.ctf_estimation.run.run.ctfworker.process
  File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/patchctf.py", line 71, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217
  File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/patchctf.py", line 117, in cryosparc_compute.jobs.ctf_estimation.patchctf.patchctf_v217
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/gpuarray.py", line 549, in fill
    func = elementwise.get_fill_kernel(self.dtype)
  File "<decorator-gen-13>", line 2, in get_fill_kernel
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/tools.py", line 433, in context_dependent_memoize
    result = func(*args)
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/elementwise.py", line 498, in get_fill_kernel
    "fill")
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/elementwise.py", line 163, in get_elwise_kernel
    arguments, operation, name, keep, options, **kwargs)
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/elementwise.py", line 149, in get_elwise_kernel_and_types
    keep, options, **kwargs)
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/elementwise.py", line 76, in get_elwise_module
    options=options, keep=keep, no_extern_c=True)
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/compiler.py", line 291, in __init__
    arch, code, cache_dir, include_dirs)
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/compiler.py", line 254, in compile
    return compile_plain(source, options, keep, nvcc, cache_dir, target)
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/compiler.py", line 78, in compile_plain
    checksum.update(preprocess_source(source, options, nvcc).encode("utf-8"))
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/compiler.py", line 55, in preprocess_source
    cmdline, stderr=stderr)
pycuda.driver.CompileError: nvcc preprocessing of /tmp/tmphasb8207.cu failed
[command: nvcc --preprocess -arch sm_86 -I/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/cuda /tmp/tmphasb8207.cu --compiler-options -P]
[stderr:
b"nvcc fatal   : Could not open output file '/home/tmp//tmpxft_000153b2_0000000a'\n"]

[CPU: 192.8 MB]  Outputting partial results now...
[CPU: 192.8 MB]  Traceback (most recent call last):
  File "cryosparc_worker/cryosparc_compute/run.py", line 84, in cryosparc_compute.run.main
  File "cryosparc_worker/cryosparc_compute/jobs/ctf_estimation/run.py", line 256, in cryosparc_compute.jobs.ctf_estimation.run.run
AssertionError: Child process with PID 86939 has terminated unexpectedly!

Although blob picker now fails with a related error too… hmmm…

  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/tools.py", line 429, in context_dependent_memoize
    return ctx_dict[cur_ctx][args]
KeyError: <pycuda._driver.Context object at 0x7f41d1799d50>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "cryosparc_worker/cryosparc_compute/run.py", line 84, in cryosparc_compute.run.main
  File "cryosparc_worker/cryosparc_compute/jobs/template_picker_gpu/run.py", line 61, in cryosparc_compute.jobs.template_picker_gpu.run.run
  File "cryosparc_worker/cryosparc_compute/jobs/template_picker_gpu/run.py", line 246, in cryosparc_compute.jobs.template_picker_gpu.run.do_pick
  File "cryosparc_worker/cryosparc_compute/jobs/template_picker_gpu/run.py", line 307, in cryosparc_compute.jobs.template_picker_gpu.run.do_pick
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/gpuarray.py", line 549, in fill
    func = elementwise.get_fill_kernel(self.dtype)
  File "<decorator-gen-120>", line 2, in get_fill_kernel
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/tools.py", line 433, in context_dependent_memoize
    result = func(*args)
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/elementwise.py", line 498, in get_fill_kernel
    "fill")
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/elementwise.py", line 163, in get_elwise_kernel
    arguments, operation, name, keep, options, **kwargs)
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/elementwise.py", line 149, in get_elwise_kernel_and_types
    keep, options, **kwargs)
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/elementwise.py", line 76, in get_elwise_module
    options=options, keep=keep, no_extern_c=True)
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/compiler.py", line 291, in __init__
    arch, code, cache_dir, include_dirs)
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/compiler.py", line 254, in compile
    return compile_plain(source, options, keep, nvcc, cache_dir, target)
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/compiler.py", line 78, in compile_plain
    checksum.update(preprocess_source(source, options, nvcc).encode("utf-8"))
  File "/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/compiler.py", line 55, in preprocess_source
    cmdline, stderr=stderr)
pycuda.driver.CompileError: nvcc preprocessing of /tmp/tmpa0k572zn.cu failed
[command: nvcc --preprocess -arch sm_86 -I/usr/local/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.7/site-packages/pycuda/cuda /tmp/tmpa0k572zn.cu --compiler-options -P]
[stderr:
b"nvcc fatal   : Could not open output file '/home/tmp//tmpxft_000155e2_0000000a'\n"]

seems like some kind of permissions error but beyond that I’m a bit lost…

Is it a GPU architecture issue? According to this post, it seems like it is.


Thanks @vamsee - why do you think it is an architecture issue? I'm not sure I see the similarity between these two errors, other than that they both involve nvcc, but I am probably missing something… these are RTX 3090 cards, which I believe are supported by cryoSPARC with CUDA ≥11.1.


I looked through the errors: one exception was thrown initially, and while it was being handled a second one was raised. To my mind the permissions issue is the secondary error, while the original one is a pycuda error - though that may just be the standard way pycuda reports errors. I also looked around to see whether this error is common and found these links. They look like permission errors but apparently come down to path corruption.
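As an aside, the "During handling of the above exception, another exception occurred" banner is standard Python exception chaining, not anything pycuda-specific. A minimal sketch (the cache and names are hypothetical, mimicking pycuda's `context_dependent_memoize` pattern):

```python
cache = {}

def get_fill_kernel(key):
    try:
        return cache[key]  # cache miss raises KeyError, like ctx_dict[cur_ctx][args]
    except KeyError:
        # Anything raised while handling the KeyError is implicitly chained:
        # Python prints both tracebacks, separated by the
        # "During handling of the above exception..." banner.
        raise RuntimeError("compile failed for %r" % key)

try:
    get_fill_kernel("float32")
except RuntimeError as e:
    # The original KeyError is preserved on __context__
    assert isinstance(e.__context__, KeyError)
```

So in the logs above, the `KeyError` is just the expected kernel-cache miss; the real failure is the `CompileError` raised while recompiling.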

Maybe you can get more info from these.


Mystery solved - it was a path issue. We had $TMPDIR set to /home/tmp/, copied over from another installation (2080s, Ubuntu, CUDA 10.x), where it wasn’t a problem - on this system, the trailing / was causing problems for nvcc. All works now!
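For anyone hitting the same thing: the double slash in the nvcc stderr (`/home/tmp//tmpxft_...`) is the tell. A quick way to check and strip the trailing slash before launching the worker (shell sketch; the `/home/tmp/` value is just this thread's example):

```shell
# Show the current value - a trailing slash here reproduces the nvcc failure
echo "$TMPDIR"

# Strip a single trailing slash, if present, and re-export
export TMPDIR="${TMPDIR%/}"
echo "$TMPDIR"
```

Depending on where `$TMPDIR` is set (shell rc files, the cryosparc_worker config, or a cluster submission script), the fix belongs there rather than in an interactive shell.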


Hey, I've come across the same problem, but I couldn't find the $TMPDIR you mentioned. I'd appreciate your help.