SSD caching issue

Hi there,
I ran into an issue when running an Ab-initio job.

The job logs “SSD cache : requested files are locked for past Xs” and never proceeds further, even when there are no other jobs accessing the SSD cache.

I tried Option 1 and Option 2 from https://guide.cryosparc.com/setup-configuration-and-management/troubleshooting#option-3-fully-reset-ssd-cache-system but neither of them worked.

I don’t know how to do step 2 (“Navigate to the SSD cache directory containing CryoSPARC’s cache files”) in Option 3.

Greatly appreciated if you can help.

Hi qchen,

You can find out where your particles are cached by looking at the log of any job that uses SSD caching.

  1. Open the job
  2. In the Event Log tab, click the button “Show from Top”
  3. Find out on which machine the job was launched.

If you are running CryoSPARC locally, this means your local machine.
Alternatively, if you are running on a cluster, find the cluster job status line:

-------- Cluster job status at 2024-04-22 18:07:36.593011 (0 retries)

Below that you will find the processing node, e.g. for SLURM:

JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
52675 yourpartition jobname cryospar R 0:04 1 <NODENAME>

Your data are cached on the machine named <NODENAME>.
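If you prefer the command line over scrolling the event log, the node name is the last column of the squeue status line. A minimal sketch (the status line and node name `node17` here are made up for illustration):

```shell
# Extract the node name from a saved squeue status line, i.e. the
# NODELIST(REASON) column, which is the last field of the line.
line='52675 yourpartition jobname cryospar R 0:04 1 node17'
node=$(echo "$line" | awk '{print $NF}')
echo "$node"
```

On a live cluster you could get the same line directly with `squeue -j <jobid>`, provided the job is still running.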

  4. Then navigate to a line that says something like:

SSD cache ACTIVE at /somefolder/instance_masternode.yourdomain.com:39000 (300 GB reserve) (1 TB quota)

The data are cached in the folder named “/somefolder/instance_masternode.yourdomain.com”.

Then you can:

ssh yourusername@NODENAME
cd /somefolder/instance_masternode.yourdomain.com

provided your permissions allow that, of course.
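Once on the node, a quick way to confirm you are looking at the right folder is to check its size and its most recently touched entries. A minimal sketch — the path below is the illustrative one from the log line above; substitute the one from your own “SSD cache ACTIVE at …” line:

```shell
# Illustrative cache path; replace with the one from your own job log.
CACHE_DIR="/somefolder/instance_masternode.yourdomain.com"

if [ -d "$CACHE_DIR" ]; then
  du -sh "$CACHE_DIR"         # total space used by cached files
  ls -lt "$CACHE_DIR" | head  # most recently modified cache entries
else
  echo "cache directory not found: $CACHE_DIR"
fi
```

If the folder is where you expect it, you can then follow the rest of Option 3 in the guide to reset the cache.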

Hi Andrea,

Thanks for your instructions. I will try them.

@qchen Are you already using the new caching system, which was optional in v4.4 and became the default in v4.5? We updated the relevant guide section to indicate which troubleshooting steps specifically apply to the new and old caching systems, respectively.
