Continuing the discussion from Adjust per-particle defocus of subparticles in volume alignment tools?:
> Yes, I know Python’s a sticking point for sure, and I totally understand, having been there myself! I think the idea with cs-tools is to let “power users” work with features the minute they think of them, but unfortunately the Venn diagram of advanced cryoEM practitioners and Python programmers is not a circle!
>
> On my list is a guide page along the lines of “Python for cryoEM”. If you have collected specific sticking points from trainees (i.e., beyond “the command line is frightening”, which I don’t in any way mean to downplay) I’d be very interested in hearing about them (although perhaps in their own forum topic)!
Cheers, @rposert!
Some manner of GUI integration would be ideal; I baulk at the idea of a browser interface for writing the code itself, which would be both a massive security risk and an epic disaster waiting to happen. But a UI where scripts could be run (with the usual drag-n-drop assignment for Project/Job to be worked on) would make access a lot easier.
Although cs-tools would probably have to be shipped directly with CryoSPARC to prevent version mismatches?
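For context, this is roughly the boilerplate a cryosparc-tools script needs today just to get hold of a project and job before it does anything useful (a minimal sketch: the credentials, host/port, project/job UIDs and the “particles” output name below are all placeholders, not details from this thread). It is exactly the part that a drag-and-drop Project/Job assignment in a UI could fill in automatically:

```python
# Minimal cryosparc-tools connection sketch; every value below is a placeholder.
from cryosparc.tools import CryoSPARC

cs = CryoSPARC(
    license="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",  # instance licence ID
    host="localhost",                                # CryoSPARC master hostname
    base_port=39000,                                 # master base port
    email="user@example.com",                        # CryoSPARC account credentials
    password="********",
)

project = cs.find_project("P1")    # placeholder project UID
job = project.find_job("J42")      # placeholder job UID

# Assumes the job exposes a "particles" output group (e.g. a refinement).
particles = job.load_output("particles")
print(particles["ctf/df1_A"][:5])  # per-particle defocus, as in the linked topic
```

Nothing exotic, but it is a fair amount of ceremony before a single line of actual analysis happens.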
…
Otherwise, on a more general note: feedback from trainees used to focus on how complicated the RELION UI was, but since we experimented with a dual-suite training course for academics (both RELION and CryoSPARC, on different datasets) I’ve heard several times that the CryoSPARC interface is overwhelming because of the sheer array of options.
Disabling “Advanced” mode hides most things, but as a result it seriously limits the available options. It might be nice to have a more granular choice (at a facility level) over what is considered “Advanced” and what “Basic”, as there are many options in Advanced mode which rarely, if ever, need adjusting (or jobs fail if you do adjust them*). Perhaps a management page where you can tick a box or radio button to set whether each parameter in each job counts as “Basic” or “Advanced”?
*A good example here is the “GPU/CPU” option in Local Filtering: it presents two options, but you are meant to leave it on GPU. Running on CPU would be nice when the job runs out of GPU memory (Local Filtering appears to have no “low-memory mode” like NU Refine/Local Refine), yet if you manually set CPU, the setting is ignored and the job runs on the GPU anyway and crashes:
```
Traceback (most recent call last):
  File "/home/cryosparcer/bin/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/cudadrv/driver.py", line 851, in _attempt_allocation
    return allocator()
  File "/home/cryosparcer/bin/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/cudadrv/driver.py", line 1054, in allocator
    return driver.cuMemAlloc(size)
  File "/home/cryosparcer/bin/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/cudadrv/driver.py", line 348, in safe_cuda_api_call
    return self._check_cuda_python_error(fname, libfn(*args))
  File "/home/cryosparcer/bin/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/cudadrv/driver.py", line 408, in _check_cuda_python_error
    raise CudaAPIError(retcode, msg)
numba.cuda.cudadrv.driver.CudaAPIError: [CUresult.CUDA_ERROR_OUT_OF_MEMORY] Call to cuMemAlloc results in CUDA_ERROR_OUT_OF_MEMORY

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 115, in cryosparc_master.cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/local_filter/run.py", line 243, in cryosparc_master.cryosparc_compute.jobs.local_filter.run.run_locfilter
  File "cryosparc_master/cryosparc_compute/jobs/local_filter/run.py", line 292, in cryosparc_master.cryosparc_compute.jobs.local_filter.run.standalone_locfilter
  File "cryosparc_master/cryosparc_compute/jobs/local_filter/run.py", line 333, in cryosparc_master.cryosparc_compute.jobs.local_filter.run.standalone_locfilter
  File "/home/cryosparcer/bin/cryosparc_worker/cryosparc_compute/gpu/gpuarray.py", line 276, in zeros
    arr = empty(shape, dtype, stream=stream)
  File "/home/cryosparcer/bin/cryosparc_worker/cryosparc_compute/gpu/gpuarray.py", line 270, in empty
    return device_array(shape, dtype, stream=stream)
  File "/home/cryosparcer/bin/cryosparc_worker/cryosparc_compute/gpu/gpuarray.py", line 226, in device_array
    arr = GPUArray(shape=shape, strides=strides, dtype=dtype, stream=stream)
  File "/home/cryosparcer/bin/cryosparc_worker/cryosparc_compute/gpu/gpuarray.py", line 21, in __init__
    super().__init__(shape, strides, dtype, stream, gpu_data)
  File "/home/cryosparcer/bin/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/cudadrv/devicearray.py", line 103, in __init__
    gpu_data = devices.get_context().memalloc(self.alloc_size)
  File "/home/cryosparcer/bin/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/cudadrv/driver.py", line 1372, in memalloc
    return self.memory_manager.memalloc(bytesize)
  File "/home/cryosparcer/bin/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/cudadrv/driver.py", line 1056, in memalloc
    ptr = self._attempt_allocation(allocator)
  File "/home/cryosparcer/bin/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/cudadrv/driver.py", line 863, in _attempt_allocation
    return allocator()
  File "/home/cryosparcer/bin/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/cudadrv/driver.py", line 1054, in allocator
    return driver.cuMemAlloc(size)
  File "/home/cryosparcer/bin/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/cudadrv/driver.py", line 348, in safe_cuda_api_call
    return self._check_cuda_python_error(fname, libfn(*args))
  File "/home/cryosparcer/bin/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numba/cuda/cudadrv/driver.py", line 408, in _check_cuda_python_error
    raise CudaAPIError(retcode, msg)
numba.cuda.cudadrv.driver.CudaAPIError: [CUresult.CUDA_ERROR_OUT_OF_MEMORY] Call to cuMemAlloc results in CUDA_ERROR_OUT_OF_MEMORY
```
I’ll post the last bit as a separate thread as well; here I just use it as an example.
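As an aside, since the failure above is a numba allocation error, a quick sanity check is to ask the same numba stack how much VRAM is actually free (a minimal sketch: the GPU index is a placeholder, and this only reports free/total memory rather than predicting whether a given box size will fit):

```python
# Query free/total VRAM via numba, i.e. the same CUDA stack the worker uses.
# Run inside the cryosparc_worker conda environment; the GPU index is a placeholder.
from numba import cuda

cuda.select_device(0)
free, total = cuda.current_context().get_memory_info()
print(f"GPU 0: {free / 1024**3:.1f} GiB free of {total / 1024**3:.1f} GiB")
```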
edit: typo.