Why do I have data in the 'instance' directory under cryoSPARC when it is also in the job directory?

We have 2.5 TB of particles extracted in the ‘extract’ directory of J106, but we also have the same data in the
/mnt/ssd/instance_xingcryoem2.oncology.wisc.edu:39001/projects/P5/J106/extract

directory. Why the duplication? And can I just delete the extract folder under J106 in the ‘instance’ directory?
thanks

Hello @satyshur,

This is where cryoSPARC stores its cache files. I’m assuming this folder is on your SSD; if the “SSD caching” parameter was turned on, cryoSPARC automatically copies extracted particles to this location to speed up processing, since SSDs handle random reads/writes much better than HDDs. Feel free to delete anything in this folder that you’re not actively using.
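If you want to see how much space the cache is taking before you clear it, something like the sketch below works. It is only an illustration in Python, using the cache path from your post (adjust it to your own instance); a standard tool like du would tell you the same thing. Only remove cache files while no jobs are actively reading them; cryoSPARC simply re-copies particles from the project directory the next time a job needs them.

    # Minimal sketch, assuming the cache path quoted in the post above.
    # Reports how much space the per-project particle cache is using; the
    # commented-out line at the end shows how to reclaim that space.
    import os

    cache_dir = ("/mnt/ssd/instance_xingcryoem2.oncology.wisc.edu:39001/"
                 "projects/P5/J106/extract")

    total_bytes = 0
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            total_bytes += os.path.getsize(os.path.join(root, name))

    print("Cached particles under J106: %.1f GiB" % (total_bytes / 1024.0 ** 3))
    # To actually free the space:
    # import shutil; shutil.rmtree(cache_dir)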

Thanks much. Where is the ‘turn off caching’ button?
kas

Hi @satyshur,

When you build a job and are modifying its parameters, the option “Cache particles on SSD” appears at the bottom of the job builder (right-hand panel). You can turn this option off there.

(screenshot: the “Cache particles on SSD” toggle in the job builder)
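If you prefer to script this rather than click through the web interface, the same setting can typically be changed through cryoSPARC’s command-line interface as well. Treat the sketch below as an assumption to verify against the guide for your cryoSPARC version: the cli function name set_job_param_value, the parameter name compute_use_ssd, and the "cryosparcm cli" invocation are not taken from this thread, and the job ID is a placeholder.

    # Hedged sketch: turn off "Cache particles on SSD" for a job being built,
    # using cryoSPARC's command-line interface. Function name, parameter name,
    # and invocation are assumptions; check them for your version first.
    import subprocess

    project_uid = "P5"   # project from the post above
    job_uid = "JXX"      # hypothetical placeholder: the job you are building
    subprocess.check_call([
        "cryosparcm", "cli",
        'set_job_param_value("%s", "%s", "compute_use_ssd", False)'
        % (project_uid, job_uid),
    ])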

OK, but when the student did this she says she got the error below, which went away when she turned caching back on.

[CPU: 4.77 GB] Traceback (most recent call last):
  File "cryosparc2_compute/jobs/runcommon.py", line 1685, in run_with_except_hook
    run_old(*args, **kw)
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 110, in cryosparc2_compute.engine.cuda_core.GPUThread.run
  File "cryosparc2_worker/cryosparc2_compute/engine/cuda_core.py", line 111, in cryosparc2_compute.engine.cuda_core.GPUThread.run
  File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 991, in cryosparc2_compute.engine.engine.process.work
  File "cryosparc2_worker/cryosparc2_compute/engine/engine.py", line 109, in cryosparc2_compute.engine.engine.EngineThread.load_image_data_gpu
  File "cryosparc2_worker/cryosparc2_compute/engine/gfourier.py", line 33, in cryosparc2_compute.engine.gfourier.fft2_on_gpu_inplace
  File "/mnt/ssd/cryosparc_user/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/fft.py", line 127, in __init__
    onembed, ostride, odist, self.fft_type, self.batch)
  File "/mnt/ssd/cryosparc_user/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cufft.py", line 742, in cufftMakePlanMany
    cufftCheckStatus(status)
  File "/mnt/ssd/cryosparc_user/cryosparc2_worker/deps/anaconda/lib/python2.7/site-packages/skcuda/cufft.py", line 117, in cufftCheckStatus
    raise e
cufftAllocFailed

Hi @satyshur,

This is a very interesting error: turning off caching shouldn't cause a GPU memory allocation error!
Could you provide a bit more information? Specifically: the job type, cryoSPARC version, whether the user set any special job parameters, the OS, the GPU models, and the type of data (particle box size, number of particles).
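In the meantime, one thing worth checking: cufftAllocFailed means cuFFT could not allocate GPU memory for the FFT plan, so it would help to know how much memory is actually free on the card while the job runs. Watching nvidia-smi during the job gives that information; the sketch below is an alternative in Python, assuming pycuda is available (the skcuda dependency shown in the traceback above relies on it), and is only an illustration rather than anything cryoSPARC itself runs.

    # Minimal sketch, assuming pycuda is importable in the worker environment:
    # print free vs. total memory on each GPU, i.e. the resource the cuFFT
    # plan failed to allocate in the traceback above.
    import pycuda.driver as cuda

    cuda.init()
    for i in range(cuda.Device.count()):
        dev = cuda.Device(i)
        ctx = dev.make_context()
        free, total = cuda.mem_get_info()
        print("GPU %d (%s): %.1f GiB free of %.1f GiB"
              % (i, dev.name(), free / 1024.0 ** 3, total / 1024.0 ** 3))
        ctx.pop()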