Hello world,
I updated our CryoSPARC instance to v4.5.3 with the 240807 patch applied. After that, CryoSPARC would not run any jobs (new, old, cloned, etc.). I tried several things, including fully stopping and then restarting CryoSPARC (stop and start issued as separate commands).
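For reference, the stop/start sequence was roughly the following (a sketch of the standard CLI invocations; I'm assuming the usual cryosparcm wrapper name here, ours may be aliased differently):

cryosparcm stop
cryosparcm start
cryosparcm status   # confirm every service reports as running afterwards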
Every job returns an error similar to the example below (this one is from an ab initio job). Any suggestions?
Traceback (most recent call last):
  File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/dataset.py", line 568, in load
    dset = cls(indata)
  File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/dataset.py", line 769, in __init__
    self.add_fields([entry[0] for entry in populate])
  File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/dataset.py", line 1044, in add_fields
    self._data.addcol_array(name, TYPE_TO_DSET_MAP[dt.base.type], dt.shape)
  File "cryosparc/core.pyx", line 112, in cryosparc.core.Data.addcol_array
TypeError: addcol_array() takes exactly 5 positional arguments (3 given)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 95, in cryosparc_master.cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/abinit/run.py", line 44, in cryosparc_master.cryosparc_compute.jobs.abinit.run.run_homo_abinit
  File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 678, in load_input_group
    dsets = [load_input_connection_slots(input_group_name, keep_slot_names, idx, allow_passthrough=allow_passthrough, memoize=memoize) for idx in range(num_connections)]
  File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 678, in <listcomp>
    dsets = [load_input_connection_slots(input_group_name, keep_slot_names, idx, allow_passthrough=allow_passthrough, memoize=memoize) for idx in range(num_connections)]
  File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 642, in load_input_connection_slots
    dsets = [load_input_connection_single_slot(input_group_name, slot_name, connection_idx, allow_passthrough=allow_passthrough, memoize=memoize) for slot_name in slot_names]
  File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 642, in <listcomp>
    dsets = [load_input_connection_single_slot(input_group_name, slot_name, connection_idx, allow_passthrough=allow_passthrough, memoize=memoize) for slot_name in slot_names]
  File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 634, in load_input_connection_single_slot
    d = load_output_result_dset(_project_uid, output_result, slotconnection['version'], slot_name, memoize=memoize)
  File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_compute/jobs/runcommon.py", line 589, in load_output_result_dset
    d = dataset.Dataset.load(abspath)
  File "/spshared/apps/cryosparc24/cryosparc_worker/cryosparc_tools/cryosparc/dataset.py", line 606, in load
    raise DatasetLoadError(f"Could not load dataset from file {file}") from err
cryosparc_tools.cryosparc.errors.DatasetLoadError: Could not load dataset from file /data/RR/RR-104-sdW/CS-rr/J34/J34_020_particles.cs
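In case it helps narrow things down, the failing call can be reproduced outside the job runner by loading the file directly with cryosparc-tools (a minimal sketch, assuming it is executed with the worker's own Python environment so the same cryosparc-tools build as in the traceback is used):

from cryosparc.dataset import Dataset

# The same file and the same Dataset.load() entry point that fail in the
# traceback above.
path = "/data/RR/RR-104-sdW/CS-rr/J34/J34_020_particles.cs"
try:
    dset = Dataset.load(path)
    print(f"loaded OK: {len(dset)} rows, fields: {dset.fields()}")
except Exception as e:
    print(f"load failed: {e!r}")

If this fails with the same addcol_array() TypeError, the file itself is probably fine and the worker's cryosparc-tools build would be the suspect.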
I also checked whether any orphaned instances might be causing the problem (output below). I can't seem to kill the processes that the supervisord grep turns up…
[spuser@spgpu run]$ ps -ax | grep "supervisord"
1775911 ? Ss 0:06 python /spshared/apps/cryosparc24/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/bin/supervisord -c /spshared/apps/cryosparc24/cryosparc_master/supervisord.conf
1818774 pts/1 S+ 0:00 grep --color=auto supervisord
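Since addcol_array() is apparently being called with the wrong number of arguments, I wonder whether the master and worker ended up on different cryosparc-tools builds after the update. Comparing the two version files (paths from our install; standard CryoSPARC layouts ship a version file in both directories) seems like a reasonable first check:

cat /spshared/apps/cryosparc24/cryosparc_master/version
cat /spshared/apps/cryosparc24/cryosparc_worker/version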