I need some help with configuring `--ssdquota`.

```
[cryosparcuser@ip-10-0-0-242 ~]$ cryosparcw connect --worker g4dn-12x-dy-g4dn12xlarge-1 --master ip-10-0-0-242 --port 39000 --ssdpath /scratch --ssdquota 80000 --update --sshstr cryosparcuser@g4dn-12x-dy-g4dn12xlarge-1
 ---------------------------------------------------------------
 CRYOSPARC CONNECT --------------------------------------------
 ---------------------------------------------------------------
 Attempting to register worker g4dn-12x-dy-g4dn12xlarge-1 to command ip-10-0-0-242:39002
 Connecting as unix user cryosparcuser
 Will register using ssh string: cryosparcuser@g4dn-12x-dy-g4dn12xlarge-1
 If this is incorrect, you should re-run this command with the flag --sshstr
 ---------------------------------------------------------------
 Connected to master.
 ---------------------------------------------------------------
 Current connected workers:
 ---------------------------------------------------------------
Traceback (most recent call last):
  File "bin/connect.py", line 133, in <module>
    assert len(target) > 0, "Worker %s has not been registered so cannot be updated." % worker_hostname
AssertionError: Worker g4dn-12x-dy-g4dn12xlarge-1 has not been registered so cannot be updated.
```

My question is: if the nodes come up dynamically, as they do for example on AWS, how can we configure the ssdquota for such a node?

I referred to the link below, but it did not help much.

Welcome to the forum @Praveen.
The AssertionError is triggered because the --update option of cryosparcw connect is intended for use when the respective --worker host has already been registered in the cryoSPARC database.
cryosparcw connect should be run on the worker you wish to connect, and may not be appropriate for your particular cloud implementation plan.
Special considerations apply in case of AWS deployments. Please see the AWS Deployment section of the guide for details.

I got this error

```
[CPU: 248.7 MB] Traceback (most recent call last):
  File "cryosparc_worker/cryosparc_compute/run.py", line 85, in cryosparc_compute.run.main
  File "cryosparc_worker/cryosparc_compute/jobs/class2D/run.py", line 56, in cryosparc_compute.jobs.class2D.run.run_class_2D
  File "/apps/software/cryosparc-v3.3.1_220315/cryosparc_worker/cryosparc_compute/particles.py", line 88, in read_blobs
    u_blob_paths = cache.download_and_return_cache_paths(u_rel_paths)
  File "/apps/software/cryosparc-v3.3.1_220315/cryosparc_worker/cryosparc_compute/jobs/cache.py", line 143, in download_and_return_cache_paths
    free_mb = free_mb - cache_reserve_mb # subtract reserve from free so we never fill up all the way
TypeError: unsupported operand type(s) for -: 'float' and 'str'
```

@Praveen

One of the ssd-related variables seems to be interpreted as "not-a-number". You may want to inspect or post your cluster_info.json or worker configuration command, as applicable to the type of your cryoSPARC instance.
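As a side note, this kind of TypeError is easy to reproduce: if a numeric setting is double-quoted in the JSON, it parses as a string, and subtracting a string from a float fails exactly as in the traceback above. A minimal sketch with hypothetical values:

```python
import json

# Double-quoted values parse as strings ...
bad = json.loads('{"cache_reserve_mb": "10000", "cache_quota_mb": "80000"}')
# ... unquoted values parse as numbers.
good = json.loads('{"cache_reserve_mb": 10000, "cache_quota_mb": 80000}')

free_mb = 500000.0  # hypothetical free space on the cache SSD

try:
    free_mb - bad["cache_reserve_mb"]  # float minus str
except TypeError as err:
    print(err)  # unsupported operand type(s) for -: 'float' and 'str'

print(free_mb - good["cache_reserve_mb"])  # 490000.0
```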

@wtempel Thanks. I got it now.

I had used double quotes, which was wrong:

"cache_reserve_mb" : "10000",
"cache_quota_mb" : "80000"

The correct syntax is:

```
"cache_reserve_mb" : 10000,
"cache_quota_mb" : 80000
```
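For anyone hitting the same issue, a quick sanity check (a sketch, assuming the settings live in a JSON file such as cluster_info.json) is to verify the values parse as numbers rather than strings before reconnecting the worker:

```python
import json

# Hypothetical sanity check: make sure the cache settings in the worker
# configuration (e.g. cluster_info.json) are numbers, not strings.
cfg = json.loads('{"cache_reserve_mb": 10000, "cache_quota_mb": 80000}')

for key in ("cache_reserve_mb", "cache_quota_mb"):
    value = cfg[key]
    # bool is a subclass of int, so exclude it explicitly
    assert isinstance(value, (int, float)) and not isinstance(value, bool), (
        f"{key} must be a number, got {type(value).__name__}: {value!r}"
    )
print("cache settings look numeric")
```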