"Job directory is not empty" error on filesystem with hidden files

Hi,
I’m running Cryosparc v3 on a cluster with a NAS filesystem that keeps snapshots in a hidden directory. When snapshots are enabled, this hidden directory exists in every directory on the filesystem, e.g. /some/directory/P1/J1/.atavium-history.
Because of this, Cryosparc jobs refuse to run with a "job directory is not empty" error. Is there a way I can tell the application to ignore the snapshot directory?

I can create filesystems without the snapshots, but it would be preferable to use the same filesystem (because of group permissions).

Thanks,
Jeff

Hi @jmcdonal,

Thanks for reporting this. This is a situation we haven’t really considered before; unfortunately, changing this behaviour requires a code change. My only reservation is that I’m not sure whether there are cases where a hidden file shouldn’t be ignored when testing if a directory is empty.
Either way, here are the lines of code to replace in cryosparc{2}_master/command_core/__init__.py in the run_job() function around line 2056:

        # make sure we don't overwrite any existing files inside the job directory
        for filename in os.listdir(job_dir_abs):
            if filename in ['job.json', 'events.bson', 'gridfs_data']:
                continue
            # skip hidden entries (e.g. filesystem snapshot directories such as .atavium-history)
            if filename.startswith('.'):
                continue

            set_job_status(project_uid, job_uid, "failed")  # remove from queue
            directory_failure_msg = "Job directory %s is not empty, found: %s" % (job_dir_abs, os.path.join(job_dir_abs, filename))
            job_send_streamlog(project_uid, job_uid, directory_failure_msg)
            return

Once you’ve replaced this little if-statement in your file, save it, and restart cryoSPARC: cryosparcm restart.
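
If you want to sanity-check the new behaviour before queueing a real job, here is a small standalone sketch of the same check (the job directory path below is just an example, not anything cryoSPARC provides):

    import os

    def job_dir_is_empty(job_dir_abs):
        # mirror the patched check above: expected cryoSPARC files and hidden entries are ignored
        for filename in os.listdir(job_dir_abs):
            if filename in ['job.json', 'events.bson', 'gridfs_data']:
                continue
            if filename.startswith('.'):
                continue
            print("Would fail on: %s" % os.path.join(job_dir_abs, filename))
            return False
        return True

    # example path -- point this at a job directory on the snapshotted filesystem
    print(job_dir_is_empty('/some/directory/P1/J1'))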

Hi @stephan,

I’m having a similar issue on our installation, but the files aren’t hidden in our case; I believe they’re actually generated by cryosparc. Please see the attached image for some detail:

[screenshot: cryosparc_inspect_error]
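
In case it’s useful, here is a quick listing that should show exactly which entries the check is complaining about on our side (the path is just a placeholder; the expected-file list is copied from the snippet above):

    import os

    job_dir = '/path/to/P1/J1'  # placeholder -- one of our affected job directories
    expected = ['job.json', 'events.bson', 'gridfs_data']
    for filename in sorted(os.listdir(job_dir)):
        if filename in expected:
            continue
        print('unexpected entry:', os.path.join(job_dir, filename))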