[feature request] Can cryoSPARC leave some GPU memory free?

While running ab initio reconstruction with cryoSPARC, I sometimes find that a job is killed,

either with a memory-related error message:

Traceback (most recent call last):
  File "/home/doonam/bin/cryosparc/cryosparc-compute/sparcjob.py", line 323, in
    cuda_devs = current_experiment_params['cuda_devs'])
  File "/home/doonam/bin/cryosparc/cryosparc-compute/abinit.py", line 355, in abinit
    class_anneal_beta = class_anneal_beta)
  File "/home/doonam/bin/cryosparc/cryosparc-compute/reconstruct.py", line 172, in reconstruction_kernel
    compute_gradient = compute_gradient)
  File "/home/doonam/bin/cryosparc/cryosparc-compute/objectives/cudaworker.py", line 1001, in set_density
    self.ghvi = gpurealloc_zeros(self.ghvi,self.gfMi.shape,n.float32,stream=stream)
  File "/home/doonam/bin/cryosparc/cryosparc-compute/objectives/cudaworker.py", line 32, in gpurealloc_zeros
    ary = gpuarray.empty(shape,dtype)
  File "/home/doonam/bin/cryosparc/anaconda2/lib/python2.7/site-packages/pycuda/gpuarray.py", line 209, in __init__
    self.gpudata = self.allocator(self.size * self.dtype.itemsize)
MemoryError: cuMemAlloc failed: out of memory
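For what it's worth, the same PyCUDA driver API that fails above can also report how much memory is actually free on each GPU. Here is a minimal diagnostic sketch of my own (not part of cryoSPARC); when VMD or RELION is running, the "free" number drops well below the total, which matches the cuMemAlloc failure:

import pycuda.driver as cuda

# Print the current free/total device memory for every GPU in the machine.
cuda.init()
for i in range(cuda.Device.count()):
    ctx = cuda.Device(i).make_context()
    try:
        free_bytes, total_bytes = cuda.mem_get_info()
        print("GPU %d: %d MiB free of %d MiB total"
              % (i, free_bytes // 2**20, total_bytes // 2**20))
    finally:
        ctx.pop()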

or without any error message, in the middle of a run:


----------- Iteration 109 (epoch 0.154). radwn 50.57 resolution 35.00A minisize 90 beta 0.10
-- Class 0 -- lr: 0.20 eps: 50.81 step ratio : 0.0250 ESS R: 6459.640 S: 2.800 Class Size: 100.0% (Average: 100.0%)

My cryoSPARC jobs were often killed when I launched GPU-enabled VMD or GPU-enabled RELION.
I assume that GPU memory being used by a non-cryoSPARC application is what killed the cryoSPARC jobs.

Would you consider adding an option that limits GPU memory usage by cryoSPARC?

For example, something like RELION's "Leaving some GPU memory free" option:

https://www2.mrc-lmb.cam.ac.uk/relion/index.php/Benchmarks_%26_computer_hardware#Leaving_some_GPU_memory_free
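To illustrate the kind of behaviour I have in mind, here is a rough sketch (purely hypothetical, not cryoSPARC or RELION code; the CRYOSPARC_RESERVE_MB variable is just an invented example): the job could compute a per-GPU allocation budget as free memory minus a user-configurable reserve, and refuse to grow past that budget.

import os
import pycuda.driver as cuda

# Hypothetical setting: MiB to leave untouched on every GPU for other
# applications (e.g. VMD, RELION). Not a real cryoSPARC option.
RESERVE_MB = int(os.environ.get("CRYOSPARC_RESERVE_MB", "2048"))

def gpu_allocation_budget(device_index):
    """Bytes a job could still allocate on this GPU while keeping
    RESERVE_MB MiB free for other GPU applications."""
    ctx = cuda.Device(device_index).make_context()
    try:
        free_bytes, _total_bytes = cuda.mem_get_info()
    finally:
        ctx.pop()
    return max(0, free_bytes - RESERVE_MB * 2**20)

cuda.init()
for i in range(cuda.Device.count()):
    print("GPU %d: budget %d MiB" % (i, gpu_allocation_budget(i) // 2**20))

Even just refusing to start (or pausing) when the budget is too small would be better than having the job die in the middle of a run.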

Thank you