Hello,
I am currently trying to use the Apply Trajectories job and am running into an error that is difficult to troubleshoot. I have been able to successfully run a similar job on a different dataset in a different project, but it is currently failing on my dataset of interest.
The error:
Error occurred while processing S28/import_movies/FoilHole_10962737_Data_10933834_34_20250710_224856_EER.eer
Traceback (most recent call last):
  File "/software/repo/cryosparc-worker/default/cryosparc_worker/compute/pipeline.py", line 69, in exec
    return self.process(item)
           ^^^^^^^^^^^^^^^^^^
  File "compute/jobs/motion_correction/run_applytraj.py", line 164, in compute.jobs.motion_correction.run_applytraj.run_apply_traj_multi.motionworker.process
  File "compute/jobs/motion_correction/run_applytraj.py", line 250, in compute.jobs.motion_correction.run_applytraj.run_apply_traj_multi.motionworker.process
AssertionError
Marking as incomplete and continuing...
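In case it helps narrow things down, here is a quick check I can run on the failing movie (just a minimal diagnostic sketch on my end, assuming the tifffile Python package is available; it is not part of the CryoSPARC pipeline) to confirm the EER file opens and reports the expected number of frames:

```python
# Sanity check on the failing EER movie (my own diagnostic sketch, not CryoSPARC code).
# EER files are TIFF containers, so counting the TIFF pages gives the frame count,
# which can be compared against the 1337 frames reported for movies that decode fine.
# Assumes the tifffile package is installed; adjust the path to the real project location.
import tifffile

path = "S28/import_movies/FoilHole_10962737_Data_10933834_34_20250710_224856_EER.eer"

with tifffile.TiffFile(path) as tif:
    n_frames = len(tif.pages)  # each EER frame is stored as one TIFF IFD
    print(f"{path}: {n_frames} frames")
```

If the frame count on the failing file differs from the 1337 frames seen in the log for other movies, that would suggest a truncated or inconsistent movie rather than a job configuration problem.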
Here is the top of the job log:
================= CRYOSPARC =================
Project P31 Job J1726
Master cryosparc.herzik.ucsd.edu Port 61000
# MAIN PROCESS PID 1743410
2026-03-31 10:53:56,531 core monitor INFO | MONITOR PROCESS PID 1743418
2026-03-31 10:53:56,532 core monitor INFO | ========= monitor process now waiting for main process
2026-03-31 10:53:56,532 core heartbeat INFO | ========= Updating heartbeat
================= CRYOSPARC =================
Project P31 Job J1726
Master cryosparc.herzik.ucsd.edu Port 61000
MAIN PROCESS PID 1743410
========= updating job startup information at 2026-03-31 10:53:57.550093
========= now starting main process at 2026-03-31 10:53:58.283999
2026-03-31 10:53:58,810 core run INFO | Running job J1726 of type local_applytraj
2026-03-31 10:53:58,810 core run INFO | Running job on hostname default
2026-03-31 10:53:58,821 core run INFO | Allocated Resources: lane='default' lane_type='cluster' hostname='default' target=SchedulerTarget(cache_path='/scratch', cache_reserve_mb=10000, cache_quota_mb=None, lane='default', name='default', title='default', desc=None, hostname='default', worker_bin_path='/software/repo/cryosparc-worker/default/cryosparc_worker/bin/cryosparcw', config=Cluster(send_cmd_tpl='ssh cryosparc_user@hooke-new.ucsd.edu {{ command }}', qsub_cmd_tpl='sbatch {{ script_path_abs }}', qstat_cmd_tpl='squeue -j {{ cluster_job_id }}', qdel_cmd_tpl='scancel {{ cluster_job_id }}', qinfo_cmd_tpl='sinfo', type='cluster', script_tpl='#!/usr/bin/env bash\n#### cryoSPARC cluster submission script template for SLURM\n## Available variables:\n## {{ run_cmd }} - the complete command string to run the job\n## {{ num_cpu }} - the number of CPUs needed\n## {{ num_gpu }} - the number of GPUs needed.\n## Note: the code will use this many GPUs starting from dev id 0\n## the cluster scheduler or this script have the responsibility\n## of setting CUDA_VISIBLE_DEVICES so that the job code ends up\n## using the correct cluster-allocated GPUs.\n## {{ ram_gb }} - the amount of RAM needed in GB\n## {{ job_dir_abs }} - absolute path to the job directory\n## {{ project_dir_abs }} - absolute path to the project dir\n## {{ job_log_path_abs }} - absolute path to the log file for the job\n## {{ worker_bin_path }} - absolute path to the cryosparc worker command\n## {{ run_args }} - arguments to be passed to cryosparcw run\n## {{ project_uid }} - uid of the project\n## {{ job_uid }} - uid of the job\n## {{ job_creator }} - name of the user that created the job (may contain spaces)\n## {{ cryosparc_username }} - cryosparc username of the user that created the job (usually an email)\n##\n## What follows is a simple SLURM script:\n\n#SBATCH --job-name cryosparc_{{ project_uid }}_{{ job_uid }}\n#SBATCH --partition=cryosparc\n#SBATCH --nodes=1\n#SBATCH --ntasks=1\n#SBATCH --cpus-per-task={{ num_cpu }}\n#SBATCH --gres=gpu:rtx2080ti:{{ num_gpu }}\n#SBATCH --mem={{ ram_gb|int }}G\n#SBATCH --comment="User: {{ cryosparc_username }}"\n#SBATCH --output={{ job_dir_abs }}/{{ project_uid }}_{{ job_uid }}_slurm.out\n#SBATCH --error={{ job_dir_abs }}/{{ project_uid }}_{{ job_uid }}_slurm.err\n\n{{ run_cmd }}\n', custom_vars={}, tpl_vars=['job_dir_abs', 'num_gpu', 'job_log_path_abs', 'worker_bin_path', 'job_creator', 'run_cmd', 'job_uid', 'command', 'project_dir_abs', 'ram_gb', 'run_args', 'project_uid', 'cluster_job_id', 'num_cpu', 'cryosparc_username'], custom_var_names=[])) slots=ResourceSlots(CPU=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], GPU=[0, 1], RAM=[0, 1, 2, 3]) fixed=FixedResourceSlots(SSD=False) licenses_acquired=2
2026-03-31 10:54:07,572 core heartbeat INFO | ========= Updating heartbeat
Transparent hugepages setting: always madvise [never]
Running job on hostname %s default
Allocated Resources : lane='default' lane_type='cluster' hostname='default' target=SchedulerTarget(cache_path='/scratch', cache_reserve_mb=10000, cache_quota_mb=None, lane='default', name='default', title='default', desc=None, hostname='default', worker_bin_path='/software/repo/cryosparc-worker/default/cryosparc_worker/bin/cryosparcw', config=Cluster(send_cmd_tpl='ssh cryosparc_user@hooke-new.ucsd.edu {{ command }}', qsub_cmd_tpl='sbatch {{ script_path_abs }}', qstat_cmd_tpl='squeue -j {{ cluster_job_id }}', qdel_cmd_tpl='scancel {{ cluster_job_id }}', qinfo_cmd_tpl='sinfo', type='cluster', script_tpl='#!/usr/bin/env bash\n#### cryoSPARC cluster submission script template for SLURM\n## Available variables:\n## {{ run_cmd }} - the complete command string to run the job\n## {{ num_cpu }} - the number of CPUs needed\n## {{ num_gpu }} - the number of GPUs needed.\n## Note: the code will use this many GPUs starting from dev id 0\n## the cluster scheduler or this script have the responsibility\n## of setting CUDA_VISIBLE_DEVICES so that the job code ends up\n## using the correct cluster-allocated GPUs.\n## {{ ram_gb }} - the amount of RAM needed in GB\n## {{ job_dir_abs }} - absolute path to the job directory\n## {{ project_dir_abs }} - absolute path to the project dir\n## {{ job_log_path_abs }} - absolute path to the log file for the job\n## {{ worker_bin_path }} - absolute path to the cryosparc worker command\n## {{ run_args }} - arguments to be passed to cryosparcw run\n## {{ project_uid }} - uid of the project\n## {{ job_uid }} - uid of the job\n## {{ job_creator }} - name of the user that created the job (may contain spaces)\n## {{ cryosparc_username }} - cryosparc username of the user that created the job (usually an email)\n##\n## What follows is a simple SLURM script:\n\n#SBATCH --job-name cryosparc_{{ project_uid }}_{{ job_uid }}\n#SBATCH --partition=cryosparc\n#SBATCH --nodes=1\n#SBATCH --ntasks=1\n#SBATCH --cpus-per-task={{ num_cpu }}\n#SBATCH --gres=gpu:rtx2080ti:{{ num_gpu }}\n#SBATCH --mem={{ ram_gb|int }}G\n#SBATCH --comment="User: {{ cryosparc_username }}"\n#SBATCH --output={{ job_dir_abs }}/{{ project_uid }}_{{ job_uid }}_slurm.out\n#SBATCH --error={{ job_dir_abs }}/{{ project_uid }}_{{ job_uid }}_slurm.err\n\n{{ run_cmd }}\n', custom_vars={}, tpl_vars=['job_dir_abs', 'num_gpu', 'job_log_path_abs', 'worker_bin_path', 'job_creator', 'run_cmd', 'job_uid', 'command', 'project_dir_abs', 'ram_gb', 'run_args', 'project_uid', 'cluster_job_id', 'num_cpu', 'cryosparc_username'], custom_var_names=[])) slots=ResourceSlots(CPU=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], GPU=[0, 1], RAM=[0, 1, 2, 3]) fixed=FixedResourceSlots(SSD=False) licenses_acquired=2
/software/repo/cryosparc-worker/default/cryosparc_worker/.pixi/envs/worker/lib/python3.12/site-packages/pymongo/_csot.py:119: UserWarning: MongoClient opened before fork. May not be entirely fork-safe, proceed with caution. See PyMongo's documentation for details: https://www.mongodb.com/docs/languages/python/pymongo-driver/current/faq/#is-pymongo-fork-safe-
return func(self, *args, **kwargs)
/software/repo/cryosparc-worker/default/cryosparc_worker/.pixi/envs/worker/lib/python3.12/site-packages/pymongo/_csot.py:119: UserWarning: MongoClient opened before fork. May not be entirely fork-safe, proceed with caution. See PyMongo's documentation for details: https://www.mongodb.com/docs/languages/python/pymongo-driver/current/faq/#is-pymongo-fork-safe-
return func(self, *args, **kwargs)
WARNING: io_uring support disabled (not supported by kernel), I/O performance may degrade
WARNING: io_uring support disabled (not supported by kernel), I/O performance may degrade
TIFFReadDirectory: Warning, Unknown field with tag 65001 (0xfde9) encountered.
TIFFReadDirectory: Warning, Unknown field with tag 65002 (0xfdea) encountered.
ElectronCountedFramesDecompressor: reading using TIFF-EER mode.
TIFFReadDirectory: Warning, Unknown field with tag 65001 (0xfde9) encountered.
TIFFReadDirectory: Warning, Unknown field with tag 65002 (0xfdea) encountered.
ElectronCountedFramesDecompressor: reading using TIFF-EER mode.
ElectronCountedFramesDecompressor::prepareRead: found 1337 frames in EER-TIFF file.
ElectronCountedFramesDecompressor::prepareRead: found 1337 frames in EER-TIFF file.
TIFFReadDirectory: Warning, Unknown field with tag 65100 (0xfe4c) encountered.
TIFFReadDirectory: Warning, Unknown field with tag 65100 (0xfe4c) encountered.
TIFFReadDirectory: Warning, Unknown field with tag 65100 (0xfe4c) encountered.
TIFFReadDirectory: Warning, Unknown field with tag 65100 (0xfe4c) encountered.
2026-03-31 10:54:17,593 core heartbeat INFO | ========= Updating heartbeat