Purge cache on worker node

Hi,

Is there a way of purging the cache on a given worker? We have a couple of mixed-use nodes, where it’s been more convenient not to set a cache quota, but to manage the scratch space organically. However, our users do not have access to the user account that the cryoSPARC cache belongs to.

A quick search brought up this post. Are there any developments in the works along these lines?

Cheers,
Yang

Hi Yang,

  1. Do you mean that users and/or admins monitor cache device capacity regularly and (are expected to) remove files as needed?
  2. Do you use the CRYOSPARC_SSD_CACHE_LIFETIME_DAYS variable inside cryosparc_master/config.sh?
  3. Do you already use the CRYOSPARC_IMPROVED_SSD_CACHE variable inside cryosparc_worker/config.sh? (Both variables are sketched after this list.)
  4. Are CryoSPARC jobs submitted to an external workload manager like SLURM?
  5. How often would you want to delete cached data? After each job?
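
For reference, both variables are set as exports in the respective config.sh files. A minimal sketch, with illustrative values rather than recommendations:

```
# cryosparc_master/config.sh
# Cached files unused for this many days become eligible for removal
# (an illustrative value; checked when caching runs, not continuously).
export CRYOSPARC_SSD_CACHE_LIFETIME_DAYS=30

# cryosparc_worker/config.sh
# Opt in to the improved SSD cache system.
export CRYOSPARC_IMPROVED_SSD_CACHE=true
```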

Hi @wtempel,

Thanks for following up.

At the moment, we have users creating and managing their own user folders on the scratch volume. When the disk fills up, we at least know who to speak to about clearing data. Not the best system, but most of our non-cryoSPARC activities involve some post hoc cache removal anyway, so it hasn’t been a major problem.

That’s a ‘no’ to the three points above. The default values have been working well for our cryoSPARC-exclusive workers, so tweaking them hasn’t been necessary.

Also, my understanding of the CRYOSPARC_SSD_CACHE_LIFETIME_DAYS variable is that it’s only checked at the point of job initiation: until a subsequent job runs on the node, historical cache continues to occupy the volume. This has been the problem on our mixed-use nodes, which are used quite infrequently.

Ideally, we would like a tool, e.g. in the Instance tab, that would allow purging of the instance folders in ssd_path (perhaps with a pre-check for jobs currently running on the worker).
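
In the meantime, we may script a stopgap ourselves, something along these lines (a rough sketch; the cache path and the process-name check are assumptions about our setup, not cryoSPARC functionality):

```
#!/bin/sh
# Stopgap purge of cryoSPARC cache on an idle mixed-use worker.
# SSD_PATH mirrors the worker's ssd_path setting (hypothetical location here).
SSD_PATH=/scratch/cryosparc_cache

# Pre-check: bail out if a cryoSPARC worker job appears to be running
# (assumes job processes on our nodes match this pattern).
if pgrep -f cryosparc_worker >/dev/null; then
    echo "cryoSPARC job running; not purging cache" >&2
    exit 0
fi

# Remove the per-instance cache folders under ssd_path.
for d in "$SSD_PATH"/instance_*; do
    [ -d "$d" ] && rm -rf -- "$d"
done
```

Run from cron, that would at least bound how long stale cache lingers between jobs, but a built-in tool with proper awareness of running jobs would obviously be safer.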

Cheers,
Yang