Hi,
All of my jobs fail with the error below whenever SSD caching is enabled.
I have tried removing the workers via the icli interface and adding them back (roughly as in the sketch below), but they all stop with the same error. The SSD path and permissions look correct.
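For reference, this is approximately what I ran inside `cryosparcm icli` to drop the worker before re-connecting it from the worker side; I'm writing the calls from memory, so the exact function names may be slightly off:

# Inside `cryosparcm icli`, a `cli` client object is already available.
# List the current scheduler targets to confirm the worker hostname and SSD path.
for target in cli.get_scheduler_targets():
    print(target.get('hostname'), target.get('cache_path'))

# Remove the worker node; afterwards I re-ran `cryosparcw connect` on the worker
# to add it back with the same SSD cache path.
cli.remove_scheduler_target_node('shark.qb3.berkeley.edu')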
[CPU: 90.1 MB] Project P36 Job J67 Started
[CPU: 90.1 MB] Master running v3.1.0, worker running v3.1.0
[CPU: 90.3 MB] Running on lane shark
[CPU: 90.3 MB] Resources allocated:
[CPU: 90.3 MB] Worker: shark.qb3.berkeley.edu
[CPU: 90.3 MB] CPU : [0, 1]
[CPU: 90.4 MB] GPU : [0]
[CPU: 90.4 MB] RAM : [0, 1, 2]
[CPU: 90.4 MB] SSD : True
[CPU: 90.4 MB] --------------------------------------------------------------
[CPU: 90.4 MB] Importing job module for job type class_2D...
[CPU: 405.8 MB] Job ready to run
[CPU: 405.8 MB] ***************************************************************
[CPU: 406.1 MB] Using random seed of 1623841468
[CPU: 406.1 MB] Loading a ParticleStack with 11 items...
[CPU: 406.1 MB] SSD cache : cache successfuly synced in_use
[CPU: 406.1 MB] Traceback (most recent call last):
File "cryosparc2_worker/cryosparc2_compute/run.py", line 85, in cryosparc2_compute.run.main
File "cryosparc2_worker/cryosparc2_compute/jobs/class2D/run.py", line 64, in cryosparc2_compute.jobs.class2D.run.run_class_2D
File "cryosparc2_compute/particles.py", line 61, in read_blobs
u_blob_paths = cache.download_and_return_cache_paths(u_rel_paths)
File "cryosparc2_compute/jobs/cache.py", line 114, in download_and_return_cache_paths
used_mb = sync_hits(worker_hostname, ssd_cache_path, instance_id)
File "cryosparc2_compute/jobs/cache.py", line 191, in sync_hits
rc.cli.cache_sync_hits(worker_hostname, keys, sizes_mb)
File "cryosparc2_compute/client.py", line 57, in func
assert False, res['error']
AssertionError: {u'message': u"OtherError: argument should be a bytes-like object or ASCII string, not 'list'", u'code': 500, u'data': None, u'name': u'OtherError'}
Permissions on the SSD cache directory:
drwxrwxr-x 3 cryosparc nogales-current 4096 Mar 5 18:00 cryosparc-scratch
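The TypeError itself looks like what Python 3 raises when a list is handed to something that expects a single string or bytes value, for example base64 decoding. A minimal sketch of my own (not CryoSPARC code, and the cache keys are made up) that reproduces the same message:

import base64

# Hypothetical cache keys, just for illustration.
keys = ["J67/particles.mrc", "J67/passthrough_particles.cs"]

# Passing the whole list where a single string/bytes value is expected raises:
# TypeError: argument should be a bytes-like object or ASCII string, not 'list'
try:
    base64.b64decode(keys)
except TypeError as err:
    print(err)

This is only to illustrate the kind of message; I don't know where inside cache_sync_hits the list actually ends up. Any pointers on what to check next would be appreciated.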