Cache is locked by job that is not using SSD caching

Hi all,

I am seeing some funky behavior regarding the cache lock in 3.3.2. I have a project where I ran some jobs and the relevant particles are still sitting in the cache. I then started a job WITHOUT SSD caching, and it started and ran normally (slowly). In parallel, I started a clone of that job WITH SSD caching enabled. That job then sat waiting for the cache to be unlocked because it needed to update some files. To me it looks like the first job, which was not using the SSD cache, still held the cache lock against the other job. My feeling is that this is unintended behavior (at least it does not seem very logical to me).
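
To make my suspicion concrete, here is a minimal sketch of what I think is happening versus what I would expect. This is purely my assumption about the locking logic, not cryoSPARC's actual code; `run_job`, `run_job_expected`, and the lock itself are made up for illustration:

```python
# Sketch (my assumption, not cryoSPARC's implementation): the per-project
# cache lock seems to be taken by every job, regardless of its SSD setting.
import threading

cache_lock = threading.Lock()

def run_job(job_name: str, use_ssd_cache: bool) -> None:
    # Suspected current behavior: the lock is held even when
    # use_ssd_cache is False, so a running non-SSD job blocks a
    # later SSD job that needs to update cached files.
    with cache_lock:
        print(f"{job_name}: processing particles (ssd={use_ssd_cache})")

def run_job_expected(job_name: str, use_ssd_cache: bool,
                     needs_cache_update: bool) -> None:
    # What I would expect instead: only a job that actually has to
    # touch the SSD cache takes the lock, and only while updating it.
    if use_ssd_cache and needs_cache_update:
        with cache_lock:
            print(f"{job_name}: updating SSD cache")
    print(f"{job_name}: processing particles (ssd={use_ssd_cache})")
```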

Edit: When starting the jobs in reverse order (SSD first, then non-SSD), they do not interfere (as one would expect). Also, when the SSD job does not need to update the particle files, it starts fine even after the non-SSD job.

Best and thanks,
Chris

Are these particles also relevant to the jobs that you described further down in your post? Do they make up part of the later jobs’ inputs?

Indeed, “the other jobs” use exactly the same particles as the SSD/non-SSD jobs in my post (all from a single import). I am still puzzled as to why cryoSPARC wanted to update the files in the cache (it claimed a file size change, but the files on storage were unchanged); however, this should not have been a problem anyway, because the SSD cache was not used by the other job (running on the same particles, but non-SSD).
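
For what it's worth, my mental model of the staleness check is something like the sketch below. Again, this is an assumption about how the size comparison might work, not cryoSPARC's actual implementation; the function name and parameters are hypothetical:

```python
# Assumed staleness test: refresh the cached copy when it is missing
# or its size differs from the file on project storage. If something
# like this reports a mismatch, the cache update (and thus the lock)
# would be triggered even though the source files never changed.
import os

def cache_needs_update(source_path: str, cached_path: str) -> bool:
    if not os.path.exists(cached_path):
        return True
    return os.path.getsize(source_path) != os.path.getsize(cached_path)
```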