Firefox in limbo appears to have wiped *.cs outputs

Hello,

Last night, during particle extraction, Firefox crashed and disappeared from the desktop (the running job also died with "no heartbeat in 180s"), similar to what has been described here:

https://www.reddit.com/r/Ubuntu/comments/18yvbse/firefox_sometimes_disappears_and_goes_into_limbo/

Restarting the GPU server gave a black screen (no GUI or BIOS screen). Unplugging the unit and power cycling brought back the GUI, BIOS, and Firefox.

Before extraction, I imported the dataset, ran motion correction and patch CTF estimation, curated the exposures, picked with the blob picker, and inspected the picks. When I restarted CryoSPARC after recovering the GUI, I was getting a "DatasetLoadError: Could not load dataset from file" for every job from CTF estimation through inspect picks. I navigated to the corresponding CryoSPARC job directories and found that all *.cs outputs of jobs that had completed normally after motion correction and before the Firefox crash were 0 bytes. Somehow, it seems that Firefox going into limbo wiped those outputs.
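In case it helps with diagnosis, the affected outputs can be enumerated directly with cryosparc-tools; a minimal sketch, assuming cryosparc-tools is installed (the project path is a placeholder):

# Sketch: report every *.cs output under a project directory that no longer loads.
# Assumes cryosparc-tools (pip install cryosparc-tools); the path is a placeholder.
from pathlib import Path
from cryosparc.dataset import Dataset

project_dir = Path("/abs/path/to/project_dir")  # placeholder
for cs_file in sorted(project_dir.rglob("*.cs")):
    try:
        Dataset.load(cs_file)
    except Exception as err:
        # 0-byte outputs show up here with a DatasetLoadError
        print(f"{cs_file} ({cs_file.stat().st_size} bytes): {err}")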

I’ve been using CryoSPARC and Firefox on this computer for the last couple of years and I’ve never had this issue before. Do you know why it might have happened?

Is there a way I can recover those *.cs outputs? For now I’m re-running everything downstream of motion correction, since that job’s *.cs outputs are intact.

Please find more information below:

System:
Ubuntu 22.04.4 LTS
Supermicro SYS-740GP
Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz × 112
256 GiB Memory
NVIDIA RTX A4000 × 4
82TB HDD

Cryosparc:
cryosparcm test install
Running installation tests…
✓ Running as cryoSPARC owner
✓ Running on master node
✓ CryoSPARC is running
✓ Connected to command_core at ********
✓ CRYOSPARC_LICENSE_ID environment variable is set
✓ License has correct format
✓ Insecure mode is disabled
✓ License server set to "https://get.cryosparc.com"
✓ Connection to license server succeeded
✓ License server returned success status code 200
✓ License server returned valid JSON response
✓ License exists and is valid
✓ CryoSPARC is running v4.5.3
✓ Running the latest version of CryoSPARC
Could not get latest patch (status code 404)
✓ Patch update not required
✓ Admin user has been created
✓ GPU worker connected.

DatasetLoadError (shown for J8; the same error occurs for J6–J9):
[CPU: 256.1 MB Avail: 251.70 GB]

Traceback (most recent call last):
File "/home/cryosparcuser/Desktop/Cryosparc/cryosparc_master/cryosparc_tools/cryosparc/dataset.py", line 572, in load
raise TypeError(f"Could not determine dataset format (prefix is {prefix})")
TypeError: Could not determine dataset format (prefix is b'')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "cryosparc_master/cryosparc_compute/run.py", line 115, in cryosparc_master.cryosparc_compute.run.main
File "/home/cryosparcuser/Desktop/Cryosparc/cryosparc_master/cryosparc_compute/jobs/interactive/run_inspect_picks_v2.py", line 68, in run
micrographs_dset = rc.load_input_group('micrographs')
File "/home/cryosparcuser/Desktop/Cryosparc/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 700, in load_input_group
dsets = [load_input_connection_slots(input_group_name, keep_slot_names[idx], idx, allow_passthrough=allow_passthrough, memoize=memoize, force_innerjoin=force_innerjoin) for idx in range(num_connections)]
File "/home/cryosparcuser/Desktop/Cryosparc/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 700, in <listcomp>
dsets = [load_input_connection_slots(input_group_name, keep_slot_names[idx], idx, allow_passthrough=allow_passthrough, memoize=memoize, force_innerjoin=force_innerjoin) for idx in range(num_connections)]
File "/home/cryosparcuser/Desktop/Cryosparc/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 648, in load_input_connection_slots
dset = load_input_connection_single_slot(input_group_name, slot_name, connection_idx, allow_passthrough=allow_passthrough, memoize=memoize)
File "/home/cryosparcuser/Desktop/Cryosparc/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 638, in load_input_connection_single_slot
d = load_output_result_dset(_project_uid, output_result, slotconnection['version'], slot_name, memoize=memoize)
File "/home/cryosparcuser/Desktop/Cryosparc/cryosparc_master/cryosparc_compute/jobs/runcommon.py", line 588, in load_output_result_dset
d = dataset.Dataset.load(abspath)
File "/home/cryosparcuser/Desktop/Cryosparc/cryosparc_master/cryosparc_tools/cryosparc/dataset.py", line 605, in load
raise DatasetLoadError(f"Could not load dataset from file {file}") from err
cryosparc_tools.cryosparc.errors.DatasetLoadError: Could not load dataset from file /***/J8/picked_micrographs.cs
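For context, the loader determines the on-disk format from the first few bytes of the file, which is why a 0-byte file surfaces as the empty prefix b'' above. A rough reconstruction of that check (b"\x93NUMPY" is the standard .npy magic; the b"\x94CSDAT" value is my assumption from reading cryosparc-tools — see cryosparc/dataset.py for the real check):

# Rough reconstruction of the format sniffing behind "prefix is b''".
def sniff_cs_format(path):
    with open(path, "rb") as f:
        prefix = f.read(6)
    if prefix == b"\x93NUMPY":   # standard numpy .npy magic
        return "numpy"
    if prefix == b"\x94CSDAT":   # assumed cryosparc-tools CSDAT magic
        return "csdat"
    # A 0-byte file reaches here with prefix == b''
    raise TypeError(f"Could not determine dataset format (prefix is {prefix})")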

Navigating to J8 directory:

total 1866248
drwxrwxr-x 3 cryosparcuser cryosparcuser 4096 Jul 10 18:07 ./
drwxrwxr-x 18 cryosparcuser cryosparcuser 4096 Jul 11 09:53 ../
-rw-rw-r-- 1 cryosparcuser cryosparcuser 2069333 Jul 10 18:09 events.bson
drwxrwxr-x 2 cryosparcuser cryosparcuser 4096 Jul 10 18:07 gridfs_data/
-rw-rw-r-- 1 cryosparcuser cryosparcuser 1667 Jul 10 18:09 J8_micrographs.csg
-rw-rw-r-- 1 cryosparcuser cryosparcuser 438 Jul 10 18:09 J8_particles.csg
-rw-rw-r-- 1 cryosparcuser cryosparcuser 0 Jul 10 18:06 J8_passthrough_micrographs.cs
-rw-rw-r-- 1 cryosparcuser cryosparcuser 248 Jul 10 18:09 J8_templates.csg
-rw-rw-r-- 1 cryosparcuser cryosparcuser 52167 Jul 10 18:09 job.json
-rw-rw-r-- 1 cryosparcuser cryosparcuser 0 Jul 10 18:07 job.log
-rw-rw-r-- 1 cryosparcuser cryosparcuser 0 Jul 10 18:06 picked_micrographs.cs
-rw-rw-r-- 1 cryosparcuser cryosparcuser 0 Jul 10 18:06 picked_particles.cs
-rw-rw-r-- 1 cryosparcuser cryosparcuser 0 Jul 10 18:06 templates.cs
-rw-rw-r-- 1 cryosparcuser cryosparcuser 0 Jul 10 18:06 templates.mrc

Microscope parameters (~6k exposures):

EM: Krios
Voltage (kV): 300
Spot size: 5
Beam size (micron): 1.81
Objective aperture (100, 70, NA): NA
Energy filter (eV): 15
Phase plate (Y/N): N
Magnification: 81kx
Pixel size (Angstrom): 1.07
K3 mode (linear, counting, superresolution): counting super-resolution (pixel size in raw data is 0.535)
Cs: 2.7 mm

Please let me know your thoughts.

-Gregor

I am not (yet) convinced that Firefox wiped out *.cs outputs.

Please can you post the outputs of these commands:

df -h /abs/path/to/J8 # substitute actual path to job dir J8/
cryosparcm status | grep -e HOSTNAME -e PORT
cryosparcm cli "get_scheduler_targets()"
cat /sys/kernel/mm/transparent_hugepage/enabled
cryosparcm eventlog P99 J8 | tail -n 40
cryosparcm cli "get_job('P99', 'J8', 'version', 'job_type', 'params_spec', 'status', 'instance_information', 'input_slot_groups')"
cryosparcm eventlog P99 J199 | tail -n 40
cryosparcm cli "get_job('P99', 'J199', 'version', 'job_type', 'params_spec', 'status', 'instance_information', 'input_slot_groups')"

where you replace P99 in the final four commands with job J8’s project ID and J199 in the final two commands with the ID of the job where the

cryosparc_tools.cryosparc.errors.DatasetLoadError: Could not load dataset from file /***/J8/picked_micrographs.cs

error occurred.

What was the job ID of the job where you observed the "no heartbeat in 180s" error?

Hi Wolfram,

I’ll give those commands a try and post the outputs in the thread.

The job that reported no heartbeat was J11 (Extract Micrographs G.).

Also, the “DatasetLoadError” occurs for all jobs that are children of J6 to J9.

Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         64T  8.2T   53T  14% /mnt/md0
export CRYOSPARC_MASTER_HOSTNAME="*****"
export CRYOSPARC_BASE_PORT=39000
[{'cache_path': '/home/cryosparcuser/Desktop/Cryosparc/ssd_cache', 'cache_quota_mb': None, 'cache_reserve_mb': 10000, 'desc': None, 'gpus': [{'id': 0, 'mem': 16891248640, 'name': 'NVIDIA RTX A4000'}, {'id': 1, 'mem': 16891248640, 'name': 'NVIDIA RTX A4000'}, {'id': 2, 'mem': 16891248640, 'name': 'NVIDIA RTX A4000'}, {'id': 3, 'mem': 16888365056, 'name': 'NVIDIA RTX A4000'}], 'hostname': '*****', 'lane': 'default', 'monitor_port': None, 'name': '*****', 'resource_fixed': {'SSD': True}, 'resource_slots': {'CPU': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111], 'GPU': [0, 1, 2, 3], 'RAM': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]}, 'ssh_str': 'cryosparcuser@*****', 'title': 'Worker node', 'type': 'node', 'worker_bin_path': '/home/cryosparcuser/Desktop/Cryosparc/cryosparc_worker/bin/cryosparcw'}]
always [madvise] never
{'_id': '668ec95fdd639807084e****', 'input_slot_groups': [{'connections': [{'group_name': 'exposures_accepted', 'job_uid': 'J7', 'slots': [{'group_name': 'exposures_accepted', 'job_uid': 'J7', 'result_name': 'micrograph_blob', 'result_type': 'exposure.micrograph_blob', 'slot_name': 'micrograph_blob', 'version': 'F'}, {'group_name': 'exposures_accepted', 'job_uid': 'J7', 'result_name': 'background_blob', 'result_type': 'exposure.stat_blob', 'slot_name': 'background_blob', 'version': 'F'}, {'group_name': 'exposures_accepted', 'job_uid': 'J7', 'result_name': 'ctf', 'result_type': 'exposure.ctf', 'slot_name': 'ctf', 'version': 'F'}, {'group_name': 'exposures_accepted', 'job_uid': 'J7', 'result_name': 'mscope_params', 'result_type': 'exposure.mscope_params', 'slot_name': 'mscope_params', 'version': 'F'}, {'group_name': 'exposures_accepted', 'job_uid': 'J7', 'result_name': 'movie_blob', 'result_type': 'exposure.movie_blob', 'slot_name': None, 'version': 'F'}, {'group_name': 'exposures_accepted', 'job_uid': 'J7', 'result_name': 'micrograph_thumbnail_blob_1x', 'result_type': 'exposure.thumbnail_blob', 'slot_name': None, 'version': 'F'}, {'group_name': 'exposures_accepted', 'job_uid': 'J7', 'result_name': 'micrograph_thumbnail_blob_2x', 'result_type': 'exposure.thumbnail_blob', 'slot_name': None, 'version': 'F'}, {'group_name': 'exposures_accepted', 'job_uid': 'J7', 'result_name': 'ctf_stats', 'result_type': 'exposure.ctf_stats', 'slot_name': None, 'version': 'F'}, {'group_name': 'exposures_accepted', 'job_uid': 'J7', 'result_name': 'rigid_motion', 'result_type': 'exposure.motion', 'slot_name': None, 'version': 'F'}, {'group_name': 'exposures_accepted', 'job_uid': 'J7', 'result_name': 'spline_motion', 'result_type': 'exposure.motion', 'slot_name': None, 'version': 'F'}, {'group_name': 'exposures_accepted', 'job_uid': 'J7', 'result_name': 'micrograph_blob_non_dw', 'result_type': 'exposure.micrograph_blob', 'slot_name': None, 'version': 'F'}, {'group_name': 'exposures_accepted', 'job_uid': 'J7', 'result_name': 'micrograph_blob_non_dw_AB', 'result_type': 'exposure.micrograph_blob', 'slot_name': None, 'version': 'F'}, {'group_name': 'exposures_accepted', 'job_uid': 'J7', 'result_name': 'gain_ref_blob', 'result_type': 'exposure.gain_ref_blob', 'slot_name': None, 'version': 'F'}]}], 'count_max': inf, 'count_min': 1, 'description': '', 'name': 'micrographs', 'repeat_allowed': False, 'slots': [{'description': '', 'name': 'micrograph_blob', 'optional': False, 'title': '', 'type': 'exposure.micrograph_blob'}, {'description': '', 'name': 'micrograph_blob_denoised', 'optional': True, 'title': '', 'type': 'exposure.micrograph_blob'}, {'description': '', 'name': 'background_blob', 'optional': True, 'title': '', 'type': 'exposure.stat_blob'}, {'description': '', 'name': 'ctf', 'optional': False, 'title': '', 'type': 'exposure.ctf'}, {'description': '', 'name': 'mscope_params', 'optional': False, 'title': 'Exposure parameters', 'type': 'exposure.mscope_params'}], 'title': 'Micrographs', 'type': 'exposure'}], 'instance_information': {'CUDA_version': '11.8', 'available_memory': '243.08GB', 'cpu_model': 'Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz', 'driver_version': '12.2', 'gpu_info': [{'id': 0, 'mem': 16891248640, 'name': 'NVIDIA RTX A4000', 'pcie': '0000:17:00'}, {'id': 1, 'mem': 16891248640, 'name': 'NVIDIA RTX A4000', 'pcie': '0000:31:00'}, {'id': 2, 'mem': 16891248640, 'name': 'NVIDIA RTX A4000', 'pcie': '0000:4b:00'}, {'id': 3, 'mem': 16888365056, 'name': 'NVIDIA RTX A4000', 'pcie': '0000:ca:00'}], 
'ofd_hard_limit': 1048576, 'ofd_soft_limit': 1024, 'physical_cores': 56, 'platform_architecture': 'x86_64', 'platform_node': '*****', 'platform_release': '6.5.0-41-generic', 'platform_version': '#41~22.04.2-Ubuntu SMP PREEMPT_DYNAMIC Mon Jun  3 11:32:55 UTC 2', 'total_memory': '251.47GB', 'used_memory': '6.60GB'}, 'job_type': 'blob_picker_gpu', 'params_spec': {'diameter': {'value': 100}, 'diameter_max': {'value': 200}}, 'project_uid': 'P10', 'status': 'completed', 'uid': 'J8', 'version': 'v4.5.1'}
[Wed, 10 Jul 2024 22:06:19 GMT] [CPU RAM used: 2511 MB] Completed 5595 of 5601 : J5/motioncorrected/001621013543220733622_FoilHole_5857787_Data_4834613_4834615_20240707_041553_fractions_patch_aligned_doseweighted.mrc
  Picked 1073 particles in 1.59s (15399.25s total)
[Wed, 10 Jul 2024 22:06:20 GMT] [CPU RAM used: 2511 MB] Completed 5596 of 5601 : J5/motioncorrected/012166659683071538709_FoilHole_5857787_Data_4835755_4835757_20240707_041542_fractions_patch_aligned_doseweighted.mrc
  Picked 1052 particles in 1.65s (15400.90s total)
[Wed, 10 Jul 2024 22:06:25 GMT] [CPU RAM used: 2511 MB] Completed 5597 of 5601 : J5/motioncorrected/014026365127928154694_FoilHole_5857788_Data_4834613_4834615_20240707_041632_fractions_patch_aligned_doseweighted.mrc
  Picked 1061 particles in 4.51s (15405.41s total)
[Wed, 10 Jul 2024 22:06:26 GMT] [CPU RAM used: 2511 MB] Completed 5598 of 5601 : J5/motioncorrected/013037419011218817931_FoilHole_5857788_Data_4835755_4835757_20240707_041622_fractions_patch_aligned_doseweighted.mrc
  Picked 989 particles in 1.54s (15406.96s total)
[Wed, 10 Jul 2024 22:06:28 GMT] [CPU RAM used: 2512 MB] Completed 5599 of 5601 : J5/motioncorrected/009470299084080011854_FoilHole_5857789_Data_4834613_4834615_20240707_041359_fractions_patch_aligned_doseweighted.mrc
  Picked 1076 particles in 1.57s (15408.54s total)
[Wed, 10 Jul 2024 22:06:30 GMT] [CPU RAM used: 2512 MB] Completed 5600 of 5601 : J5/motioncorrected/006088541024837971552_FoilHole_5857789_Data_4835755_4835757_20240707_041350_fractions_patch_aligned_doseweighted.mrc
  Picked 1099 particles in 1.70s (15410.24s total)
[Wed, 10 Jul 2024 22:06:55 GMT] [CPU RAM used: 2997 MB] Output templates to J8/templates.cs; templates mrc to J8/templates.mrc
[Wed, 10 Jul 2024 22:06:55 GMT] [CPU RAM used: 2997 MB] Done in 15436.11s
[Wed, 10 Jul 2024 22:06:55 GMT] [CPU RAM used: 1867 MB] --------------------------------------------------------------
[Wed, 10 Jul 2024 22:06:55 GMT] [CPU RAM used: 1867 MB] Compiling job outputs...
[Wed, 10 Jul 2024 22:06:55 GMT] [CPU RAM used: 1867 MB] Passing through outputs for output group micrographs from input group micrographs
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1867 MB] This job outputted results ['micrograph_blob']
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1867 MB]   Loaded output dset with 5601 items
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1867 MB] Passthrough results ['background_blob', 'ctf', 'mscope_params', 'movie_blob', 'micrograph_thumbnail_blob_1x', 'micrograph_thumbnail_blob_2x', 'ctf_stats', 'rigid_motion', 'spline_motion', 'micrograph_blob_non_dw', 'micrograph_blob_non_dw_AB', 'gain_ref_blob']
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1876 MB]   Loaded passthrough dset with 5601 items
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1876 MB]   Intersection of output and passthrough has 5601 items
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1876 MB]   Output dataset contains:  ['ctf_stats', 'micrograph_thumbnail_blob_1x', 'ctf', 'micrograph_thumbnail_blob_2x', 'background_blob', 'micrograph_blob_non_dw_AB', 'gain_ref_blob', 'micrograph_blob_non_dw', 'rigid_motion', 'movie_blob', 'mscope_params', 'spline_motion']
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1876 MB]   Outputting passthrough result background_blob
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1876 MB]   Outputting passthrough result ctf
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1876 MB]   Outputting passthrough result mscope_params
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1876 MB]   Outputting passthrough result movie_blob
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1876 MB]   Outputting passthrough result micrograph_thumbnail_blob_1x
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1876 MB]   Outputting passthrough result micrograph_thumbnail_blob_2x
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1876 MB]   Outputting passthrough result ctf_stats
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1876 MB]   Outputting passthrough result rigid_motion
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1876 MB]   Outputting passthrough result spline_motion
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1876 MB]   Outputting passthrough result micrograph_blob_non_dw
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1876 MB]   Outputting passthrough result micrograph_blob_non_dw_AB
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1876 MB]   Outputting passthrough result gain_ref_blob
[Wed, 10 Jul 2024 22:06:56 GMT] [CPU RAM used: 1876 MB] Checking outputs for output group micrographs
[Wed, 10 Jul 2024 22:07:08 GMT] [CPU RAM used: 2948 MB] Updating job size...
[Wed, 10 Jul 2024 22:07:12 GMT] [CPU RAM used: 2948 MB] Exporting job and creating csg files...
[Wed, 10 Jul 2024 22:07:23 GMT] [CPU RAM used: 2948 MB] ***************************************************************
[Wed, 10 Jul 2024 22:07:23 GMT] [CPU RAM used: 2948 MB] Job complete. Total time 15464.12s

Hi Wolfram,

If I re-run patch CTF estimation, then I get the following errors on ~40 micrographs:

raise ValueError(f'Could not read mrc data from {fname}') from e
ValueError: Could not read mrc data

If I blob pick from the accepted micrographs, the job terminates with error:

ValueError: Could not read mrc data from /cryosparc_projects_data/J5/motioncorrected/003877869488527317468_FoilHole_4900068_Data_4834613_4834615_20240706_224420_fractions_patch_aligned_doseweighted.mrc

I’m starting to think that the whole workspace is corrupt.
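To gauge the extent of the damage before re-running everything, all motion-corrected micrographs can be scanned in one pass; a minimal sketch, assuming the mrcfile package is installed (the directory is taken from the error above):

# Scan motion-corrected micrographs and report files that fail to parse.
# Assumes the mrcfile package (pip install mrcfile).
from pathlib import Path
import mrcfile

mic_dir = Path("/cryosparc_projects_data/J5/motioncorrected")
bad = []
for mrc in sorted(mic_dir.glob("*patch_aligned_doseweighted.mrc")):
    try:
        with mrcfile.open(str(mrc), permissive=False) as f:
            _ = f.data  # force a read of the data block
    except Exception as err:
        bad.append(mrc)
        print(f"{mrc.name}: {err}")
print(f"{len(bad)} unreadable micrograph(s)")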

What are the outputs of these commands:

cd /abs/path/to/project_dir/
stat -f .
ls -l J5/
ls -l J5/motioncorrected/003877869488527317468_FoilHole_4900068_Data_4834613_4834615_20240706_224420_fractions_patch_aligned_doseweighted.mrc
ls -l J5/motioncorrected/*patch_aligned_doseweighted.mrc | wc -l

Hi Wolfram,

Please see below:

File: "."
ID: 7caf5a64a557**** Namelen: 255 Type: ext2/ext3
Block size: 4096 Fundamental block size: 4096
Blocks: Total: 480302136 Free: 388164186 Available: 363747726
Inodes: Total: 122068992 Free: 118939431
total 30236
-rw-rw-r-- 1 cryosparcuser cryosparcuser 12427618 Jul 10 09:04 events.bson
drwxrwxr-x 2 cryosparcuser cryosparcuser 4096 Jul 10 09:04 gridfs_data
-rw-rw-r-- 1 cryosparcuser cryosparcuser 1460 Jul 10 09:04 J5_micrographs.csg
-rw-rw-r-- 1 cryosparcuser cryosparcuser 632960 Jul 10 09:04 J5_passthrough_micrographs.cs
-rw-rw-r-- 1 cryosparcuser cryosparcuser 896 Jul 10 09:04 J5_passthrough_micrographs_incomplete.cs
-rw-rw-r-- 1 cryosparcuser cryosparcuser 52963 Jul 10 09:04 job.json
-rw-rw-r-- 1 cryosparcuser cryosparcuser 910292 Jul 10 09:04 job.log
-rw-rw-r-- 1 cryosparcuser cryosparcuser 9919424 Jul 10 09:04 micrographs_rigid_aligned.cs
drwxrwxr-x 2 cryosparcuser cryosparcuser 5038080 Jul 10 09:04 motioncorrected
drwxrwxr-x 2 cryosparcuser cryosparcuser 1937408 Jul 10 09:04 thumbnails
-rw-rw-r-- 1 cryosparcuser cryosparcuser 364904448 Jul 10 05:10 J5/motioncorrected/003877869488527317468_FoilHole_4900068_Data_4834613_4834615_20240706_224420_fractions_patch_aligned_doseweighted.mrc
6080

Hi, I re-ran patch motion correction through 2D classification in a separate workspace, using Chrome and taking frequent cryosparcm backups. All jobs have run smoothly so far.

You may want to keep a close eye on the available storage capacity for your CryoSPARC database. Do not allow the relevant filesystem to run out of space. To find out where the CryoSPARC database is stored, you may run the command

grep CRYOSPARC_DB_PATH /home/cryosparcuser/Desktop/Cryosparc/cryosparc_master/config.sh
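If you would like to monitor that filesystem programmatically, the Python standard library can report its free space (a sketch; the path below stands in for your CRYOSPARC_DB_PATH value):

# Report free space on the filesystem that holds the CryoSPARC database.
# The path stands in for the value of CRYOSPARC_DB_PATH from config.sh.
import shutil

db_path = "/abs/path/to/cryosparc_database"  # placeholder
total, used, free = shutil.disk_usage(db_path)
print(f"free: {free / 1e9:.1f} GB of {total / 1e9:.1f} GB")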

Thanks Wolfram,

I have been backing up the CryoSPARC instance, via cryosparcm backup with the --dir flag, to a directory on the RAID array with over 50 TB of free space.

-Gregor