Hi!
Very strange situation here! I am using v4.2.1 with the latest patch (230427).
I have a large particle set, 1.5 TB in size, and my cache drive is a 1 TB NVMe, so for the time being I have been working without caching the particle set to the SSD.
I have now split the initial particle set into 3 classes with an ab-initio reconstruction job and wanted to run individual refinements on each of them. I assumed the sub-particle sets would fit on the SSD, so I turned on SSD caching for those jobs. The jobs fail right after calculating the particle set size, and, strangest of all, every job reports a (sub-)particle set size very similar to the size of the original particle set. How is this possible? (The same happens if I try to run 2D classification on those particle stacks.)
2D classification job with a particle set of 577868 particles:
Using random seed of 1709122616
Loading a ParticleStack with 577868 items…
SSD cache : cache successfully synced in_use
SSD cache : cache successfully synced, found 0.00MB of files on SSD.
SSD cache : cache successfully requested to check 16291 files.
SSD cache : cache requires 1474728.09MB more on the SSD for files to be downloaded.
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 96, in cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/class2D/run.py", line 63, in cryosparc_compute.jobs.class2D.run.run_class_2D
  File "/home/angr5008/Software/cryosparc/cryosparc_worker/cryosparc_compute/particles.py", line 114, in read_blobs
    u_blob_paths = cache.download_and_return_cache_paths(u_rel_paths)
  File "/home/angr5008/Software/cryosparc/cryosparc_worker/cryosparc_compute/jobs/cache.py", line 115, in download_and_return_cache_paths
    delete_cache_files(instance_id, worker_hostname, ssd_cache_path, cache_reserve_mb, cache_quota_mb, used_mb, need_mb)
  File "/home/angr5008/Software/cryosparc/cryosparc_worker/cryosparc_compute/jobs/cache.py", line 317, in delete_cache_files
    assert need_mb <= total_mb, (
AssertionError: SSD cache needs 1474728MB but drive can only be filled up to 927797MB; please disable SSD cache for this job.
2D classification job with a subset of the initial particle set (200003 particles):
Using random seed of 248115048
Loading a ParticleStack with 200003 items…
SSD cache : cache successfully synced in_use
SSD cache : cache successfully synced, found 0.00MB of files on SSD.
SSD cache : cache successfully requested to check 16224 files.
SSD cache : cache requires 1473122.26MB more on the SSD for files to be downloaded.
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 96, in cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/class2D/run.py", line 63, in cryosparc_compute.jobs.class2D.run.run_class_2D
  File "/home/angr5008/Software/cryosparc/cryosparc_worker/cryosparc_compute/particles.py", line 114, in read_blobs
    u_blob_paths = cache.download_and_return_cache_paths(u_rel_paths)
  File "/home/angr5008/Software/cryosparc/cryosparc_worker/cryosparc_compute/jobs/cache.py", line 115, in download_and_return_cache_paths
    delete_cache_files(instance_id, worker_hostname, ssd_cache_path, cache_reserve_mb, cache_quota_mb, used_mb, need_mb)
  File "/home/angr5008/Software/cryosparc/cryosparc_worker/cryosparc_compute/jobs/cache.py", line 317, in delete_cache_files
    assert need_mb <= total_mb, (
AssertionError: SSD cache needs 1473122MB but drive can only be filled up to 927797MB; please disable SSD cache for this job.
Both jobs request virtually the same amount of cache, even though the second job has less than half the particles of the first. Notably, the number of files the cache checks is also almost identical (16291 vs. 16224).
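Looking at the traceback, read_blobs collects the unique relative paths of the source files (u_rel_paths) before the cache sizes the transfer, and the log counts "files", not particles. My guess is that the required cache space is the total size of the unique stack files referenced, so a subset whose particles are scattered across nearly all of the original stacks still needs nearly the full cache. A minimal sketch of that hypothesis (the blob/path field name, file sizes, and helper below are my assumptions for illustration, not CryoSPARC's actual code):

```python
import random

def required_cache_mb(particles, file_size_bytes):
    """Estimate the SSD cache a particle subset needs, assuming the
    cache copies whole source stack files: one particle from a stack
    pulls the entire file onto the SSD, so the requirement is the sum
    of the sizes of the unique files referenced, not per particle."""
    u_rel_paths = {p["blob/path"] for p in particles}  # unique source stacks
    return sum(file_size_bytes[path] for path in u_rel_paths) / 1e6

# Toy numbers mirroring the logs: 577868 particles over 16291 stacks.
n_files, n_particles = 16291, 577868
sizes = {f"stack_{i}.mrc": 90_000_000 for i in range(n_files)}
full = [{"blob/path": f"stack_{i % n_files}.mrc"} for i in range(n_particles)]

random.seed(0)
subset = random.sample(full, 200003)  # roughly a third of the particles

print(required_cache_mb(full, sizes))    # ~1.47e6 MB (~1.5 TB)
print(required_cache_mb(subset, sizes))  # almost the same
```

If that is right, 200003 particles drawn from across 16291 stacks leave essentially no stack untouched, which would explain the 16224 vs. 16291 file counts. But I would appreciate confirmation that this is really what is going on.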
I would urgently appreciate any help or feedback on this matter.
Thank you,
André