SSD-required jobs get queued

Hi,

All jobs that require a cache path on an SSD (e.g. class_2D) were queued, and the Resource Manager reported: "Waiting because resources are not available (SSD)". When I turned off the "Cache particles on SSD" option, the job started normally. But I have actually configured the cache path on a local SSD (see the attached configuration of the compute node).

Is there a solution to fix this?

thanks,
Zhihai

Hi @zhihai,

Can you double-check that the folder you specified on the SSD (/tmp) is readable and writable by cryosparc_user?

Yes, I checked that and it looks OK.

I am seeing the same issue: 2D classification seems to default to "cache particles on SSD" ON even though I specified --nossd during install. Jobs queue, and I need to consult command_core.log to see why.

Hi, I was wondering if anyone had a solution to this as I’m having the same problem.

When I run:
cryosparcm cli "get_config_var(name='targets', colname='sched_config')"

I see

u'cache_path': u'/scratch/cryosparc_cache', u'cache_quota_mb': 500000, u'resource_fixed': {u'SSD': False}, u'cache_reserve_mb': 10000,

I'm guessing it's something to do with {u'SSD': False}
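That guess is right: the queried target entry is a Python dictionary, so the flag is easy to inspect programmatically. A minimal sketch using the values from the fragment above (the `ssd_available` helper is my own, not part of CryoSPARC, and the dict mirrors the queried output rather than an authoritative schema):

```python
# Sketch: inspect a scheduler target dict as returned by
# get_config_var(name='targets', colname='sched_config').
# Field names and values are copied from the output above.
target = {
    'cache_path': '/scratch/cryosparc_cache',
    'cache_quota_mb': 500000,
    'resource_fixed': {'SSD': False},
    'cache_reserve_mb': 10000,
}

def ssd_available(target):
    """Return True if the target advertises an SSD to the scheduler."""
    return target.get('resource_fixed', {}).get('SSD', False)

print(ssd_available(target))  # False -> SSD-requiring jobs sit in the queue
```

With `'SSD': False`, the scheduler believes the node has no SSD, so any job with SSD caching enabled waits indefinitely.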

Compute config is below:
Hostname
localhost

Name
localhost

Cache path
/scratch/cryosparc_cache

Cache quota (MB)
500000

Description
None

SSH String
mqbpkml3@localhost

Cache Reserve (MB)
10000

Thanks
Mike

Hi @bmcgough,

All jobs that use caching have SSD caching enabled by default, regardless of the install-time configuration.

Hi @mplockhartuom,

Can you try running the following commands and then check whether the 'SSD' param changes to True?

cd <install_path>/cryosparc2_worker
bin/cryosparcw connect --worker <worker_hostname> --master <master_hostname> --port <port_num> --ssdpath <ssd_path>
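For example, with hypothetical values filled in (the install path, hostnames, port, and SSD path below are placeholders, not values from this thread; substitute your own):

```
cd /opt/cryosparc/cryosparc2_worker   # example install path
bin/cryosparcw connect --worker worker1.example.org \
                       --master master1.example.org \
                       --port 39000 \
                       --ssdpath /scratch/cryosparc_cache
```

After reconnecting, you can re-run the cryosparcm cli query from earlier in the thread to confirm that resource_fixed now shows {u'SSD': True} for that target.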

Hi,

That seemed to work a treat, thank you. I'd tried that previously using

--update

but didn't get the same result. Is this something I should refrain from putting into my configs when making GPU changes?

Thanks
Mike