Topaz Train FileNotFoundError v5.0.1

Hello!

I am writing about a problem that has been appearing consistently during Topaz Train jobs (both the standard and the SBGrid variant), causing them to fail.

The error is this:

[2026-03-31 11:26:52.47] [CPU: 244.3 MB]

FileNotFoundError: [Errno 2] No such file or directory:
'/J23/topaz_particles_processed_train.txt'

[2026-03-31 11:26:52.59] [CPU: 244.3 MB]

Training file appeared, beginning monitoring…

[2026-03-31 11:26:58.14] [CPU: 246.3 MB]

Traceback (most recent call last):
File "cli/run.py", line 105, in cli.run.run_job
File "cli/run.py", line 210, in cli.run.run_job_function
File "…/cryosparc/cs_cpusrv129_v4.7.0/cryosparc_worker/compute/jobs/topaz/run_topaz.py", line
627, in run_topaz_wrapper_train
utils.run_process(rc, train_command, topaz_version=topaz_version_for_validation)
File "…/cryosparc/cs_cpusrv129_v4.7.0/cryosparc_worker/compute/jobs/topaz/topaz_utils.py",
line 143, in run_process
assert process.returncode == 0, f"Subprocess exited with status {process.returncode} ({str_command})"
^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Subprocess exited with status 1 (/programs/x86_64-linux/system/sbgrid_bin/topaz train --train-images
/…/J23/image_list_train.txt --train-targets
/…/J23/topaz_particles_processed_tra…)

As far as I understand, topaz train needs four input files: image_list_train.txt, image_list_test.txt, topaz_particles_processed_train.txt and topaz_particles_processed_test.txt. The first two come from the train/test splitting step and are written to the /preprocessed subdirectory of the job directory (here J23/); this is confirmed by the "# writing: …" line earlier in the event log. The other two files, however, show no sign of being written beforehand.

I also see that topaz_particles_processed.txt (which I assume is the predecessor of the train/test files) is written during the Topaz convert step of preprocessing and lives directly in the job directory. Finally, a search for topaz_particles_processed_train.txt and topaz_particles_processed_test.txt in the job directory after a job has failed returns nothing. So these two files are never written anywhere, and the job fails as soon as it tries to open topaz_particles_processed_train.txt.

Has anyone experienced similar problems? This looks to me like a bug in the run_topaz.py code, which I have tried to work around with some wrapper scripts, but none were successful.

Some important info: our cryoSPARC version is v5.0.1 and we are using the SBGrid configuration for Topaz.

I’m open to all kinds of suggestions!

Thanks,

Kristina