Detected file change due to change in file size

Dear cryoSPARC team,

After upgrading to v3.0.0 I am experiencing a problem with caching particles from SSD. Namely, after a particular job finishes (e.g. Class2D, hetero-refine, ab-initio), running subsequent jobs with the same particles gives a warning that “change in file size has been detected” (see below) and all the particles are deleted then copied to SSD again. I can confirm that all mrcs files are on SSD after the job finishes and are not corrupted (e.g. I can open them with relion_display). I always use the “Cache particle images on SSD” option.

[CPU: 81.7 MB] --------------------------------------------------------------

[CPU: 81.7 MB] Importing job module for job type homo_refine_new…

[CPU: 535.2 MB] Job ready to run

[CPU: 535.2 MB] ***************************************************************

[CPU: 607.5 MB] Using random seed of 393599102

[CPU: 607.7 MB] Loading a ParticleStack with 86563 items…

[CPU: 608.5 MB] SSD cache : cache successfuly synced in_use

[CPU: 611.7 MB] SSD cache : cache successfuly synced, found 323233.69MB of files on SSD.

[CPU: 612.7 MB] SSD cache : cache successfuly requested to check 10538 files.

[CPU: 619.2 MB] Detected file change due to change in file size.

[CPU: 616.6 MB] SSD cache : cache requires 275328.09MB more on the SSD for files to be downloaded.

[CPU: 616.6 MB] SSD cache : cache does not have enough space for download

[CPU: 616.6 MB] SSD cache : but there are files that can be deleted, deleting…

[CPU: 616.6 MB] SSD cache : cache does not have enough space for download

[CPU: 616.6 MB] SSD cache : but there are no files that can be deleted.

[CPU: 616.6 MB] SSD cache : This could be because other jobs are running and using files, or because a different program has used up space on the SSD.
[CPU: 616.6 MB] SSD cache : Waiting 30 seconds for space to become available…

Note that this message repeats forever and the job never starts. I have to kill the job and clone it, after which it runs, but all the particles have to be copied again. I did not experience this problem in previous versions.

Any suggestions on how to overcome this issue would be much appreciated.

My best,
Andrija


Hi @asente,

Usually this only happens in situations where the job that originally created the particles (e.g., Extract From Micrographs) re-extracted the particles. They’d have the same filename, but the size of the files could potentially be different if the parameters in the job were changed. Is it possible that something upstream modified the particle files?
Can you check the file size of a particle file in the project directory, compare it to the file size of the same file in the cache directory, and see whether they are actually the same size?
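For anyone who wants to script that comparison, here is a minimal Python sketch. The paths are placeholders you would point at your own project and cache directories; this is not part of cryoSPARC itself, just a quick check:

```python
import os

def same_size(project_path: str, cache_path: str) -> bool:
    """Compare the on-disk size of the original particle file
    with the size of its copy in the SSD cache directory."""
    proj_bytes = os.path.getsize(project_path)
    cache_bytes = os.path.getsize(cache_path)
    print(f"project: {proj_bytes} bytes, cache: {cache_bytes} bytes")
    return proj_bytes == cache_bytes

# Example (placeholder paths):
# same_size("/projects/P1/J5/extract/stack_001.mrcs",
#           "/scratch/cryosparc_cache/.../stack_001.mrcs")
```

If the two sizes differ, that is exactly what triggers the "Detected file change due to change in file size" message.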

Hi @stephan,

Many thanks for your quick response. The particles were extracted in RELION 3.1 and the particles.star file was then imported into cryoSPARC for classification and refinement. I compared the original file size with the file size on SSD, and indeed you are right: the numbers of particles in the mrcs files differ between the original files and the ones on SSD. Why would this be the case? Is there a way to keep reusing the same particle stacks in different jobs without modifying them? This was certainly possible in v2.15.0.
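In case it helps anyone reproduce this check: the first three 32-bit little-endian integers of the 1024-byte MRC header are nx, ny, nz, and for a .mrcs particle stack nz is the number of images. A minimal sketch (reading the header directly rather than using relion_display):

```python
import struct

def mrc_num_particles(path: str) -> int:
    """Read nx, ny, nz from the MRC header; for a .mrcs stack,
    nz is the number of images (particles) in the file."""
    with open(path, "rb") as f:
        nx, ny, nz = struct.unpack("<3i", f.read(12))
    return nz

# Comparing the original stack against its SSD-cached copy
# (placeholder paths) shows whether the particle counts match:
# mrc_num_particles("/projects/.../stack_001.mrcs")
# mrc_num_particles("/scratch/cryosparc_cache/.../stack_001.mrcs")
```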

Many thanks for your help.

Best wishes,
Andrija

Hi @asente,

Is it possible that these particle files were modified without your knowledge by another user or process?
Also, have you noticed this behaviour in any other project or job that uses a different set of particles in cryoSPARC?

Hi @stephan,

Thank you for your answer. I did not modify the files with another process and other users don’t have permissions to change my files. I observe this behaviour on a new set of particles using a different job (this time going from Class2D -> Ab-initio).

Best wishes,
Andrija

I’m having a very similar issue. Since updating to v3.0.1 I have been having issues when trying to run certain jobs within projects created before the upgrade. Specifically, the jobs use a particle stack imported from Warp. Running a heterogeneous refinement job works fine, but when selecting particles from a particular class for further classification etc., I see the “detected file change due to change in file size” message. The particles (or a copy of them?) seem to be re-copied onto the SSD; the job will then actually begin but very soon fails with a “no such file or directory found” error on the SSD, seemingly when looking for a particular particle file designated by the input star file. If there was a solution to this it would be very much appreciated!

Hi @asente,

Were your particles initially imported in cryoSPARC v2? Or were they imported in cryoSPARC v3?

Hi @swhardwick,

Could you send the full event log leading up to the point where you see this error message? Could you also send the .cs file of the particle stack, as well as a directory listing of the folder on the SSD that contains all the particle files?

Hi @stephan,

Thank you for your reply. Particles were imported in v3 into a workspace from v3, although the project was created in v2.15.0.

Thanks,
Andrija

Dear cryoSPARC team,

‘Detected file change due to change in file size.’

I am having the exact same issue. I imported particles picked by Warp into cryoSPARC v3.0.1.
I would highly appreciate it if you could help me to solve this issue.

cheers,

Hi @hrishikesh,

Thank you for reporting. We have identified that this is a bug, and we’ll be soon releasing a fix for this in the next version of cryoSPARC (v3.1).


@hrishikesh We have released a v3.1.0 update to cryoSPARC in which this issue should be resolved. Please let us know if this issue persists in v3.1.0.


Thank you very much! @spunjani @stephan

Updating to v3.1.0 has solved the issue, now it is working fine.

Cheers,


Dear @spunjani and @stephan,

Many thanks for your helpful responses and the bug fix. v3.1.0 is working great.

With best wishes,
Andrija
