Second instance of job crashes

We started a job on the first GPU and then tried to run a second instance of the same job on the second GPU, but it fails. Is it because it is not creating a second copy of the dataset? To our understanding, each job should be treated as independent: one should be able to run 10 instances of the exact same job and they should run without interfering with each other, shouldn't they?

Launched job Js4TfwsmkKaJBJPeP
License is valid.
Job Type: abinit Job ID: Js4TfwsmkKaJBJPeP
Assigned Slots: RAM: [0] GPU: [0]
Starting Ab-Initio Reconstruction , Experiment ID SB4j9zBLrAsbFLtdB
Loading Dataset …
---- Loading File Set. Type: star Path: /progs/cryosparc/run/bulk/./local_fs/data/
Loaded meta data for 635263 images.
Found raw data for 635263 images.
Loading image data…
Found references to 1485 unique data files.
Checking SSD cache for data files…
Traceback (most recent call last):
  File "/progs/cryosparc/cryosparc-compute/", line 193, in
    dataset = dataio.dataset_factory(current_dataset_params, do_cache=current_experiment_params.get('do_cache', True))
  File "/progs/cryosparc/cryosparc-compute/dataio/", line 18, in dataset_factory
  File "/progs/cryosparc/cryosparc-compute/dataio/", line 78, in filesets_into_dataset
    load_all_referenced_mrcs(dataset, dataset, load_type, do_cache)
  File "/progs/cryosparc/cryosparc-compute/dataio/", line 152, in load_all_referenced_mrcs
  File "/progs/cryosparc/cryosparc-compute/sparc/", line 452, in read_files_to_cache
    return [read_file_to_cache(rel_path, **kwargs) for rel_path in rel_paths]
  File "/progs/cryosparc/cryosparc-compute/sparc/", line 438, in read_file_to_cache
    register_in_cachedb(rel_path) # marks last requested time & job_id even if already cached
  File "/progs/cryosparc/cryosparc-compute/sparc/", line 228, in register_in_cachedb
    del cachedb[rel_path]
  File "/progs/cryosparc/anaconda2/lib/python2.7/site-packages/diskcache/", line 974, in __delitem__
    raise KeyError(key)
KeyError: u'local_fs/data/Particles/Micrographs/20151222_07064685_corrected_sum40_particles.mrcs'
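For what it's worth, the KeyError points at a race on the shared SSD cache registry: both jobs see the same cached file, and when the second job does `del cachedb[rel_path]` to refresh the entry's metadata, the key has already been removed by the other job. A minimal sketch of that failure mode and a tolerant alternative (a plain dict stands in for the diskcache database here; the function names and the fix are hypothetical, not cryoSPARC's actual code):

```python
cachedb = {}  # stands in for the shared on-disk cache database

def register_fragile(rel_path, job_id):
    # Mirrors the failing pattern in the traceback: delete, then re-insert
    # to update the "last requested" metadata for the entry.
    del cachedb[rel_path]   # raises KeyError if another job already removed it
    cachedb[rel_path] = job_id

def register_tolerant(rel_path, job_id):
    # Hypothetical fix: pop() with a default tolerates a concurrently
    # removed key instead of crashing.
    cachedb.pop(rel_path, None)
    cachedb[rel_path] = job_id

cachedb["particles.mrcs"] = "job_A"
register_fragile("particles.mrcs", "job_A")   # fine: the key exists

# Second job arrives after the entry was removed out from under it:
try:
    register_fragile("missing.mrcs", "job_B")
except KeyError:
    print("second job crashes with KeyError")  # same symptom as the log

register_tolerant("missing.mrcs", "job_B")    # no crash
print(cachedb["missing.mrcs"])  # -> job_B
```

If the registration really is delete-then-reinsert, any two jobs caching the same particle stack can hit this window, which would match "independent" jobs crashing each other.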

I think this might be the same issue I reported earlier; if so, it is a bug and they are working on it: