Error during 2D classification: Encountered OS error while caching: [Errno 22] Invalid argument

Hi,
I’m getting this error while trying to run a 2D classification job. Although it is not best practice, my cache and database are currently stored on the same external hard drive. Could the issue be due to the file system of the external drive? Is there a way to solve this without changing the drive's file system?

Thanks,
Yakup

Encountered OS error while caching: [Errno 22] Invalid argument: '/media/patrick/603A050C3A04E0C0/Yakup/Cryo_spark_cashe-SSD/instance_patrick-Alienware-Aurora-R15:39001'; dumping info
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 129, in cryosparc_master.cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/class2D/newrun.py", line 89, in cryosparc_master.cryosparc_compute.jobs.class2D.newrun.run_class_2D
  File "/media/patrick/603A050C3A04E0C0/cryosparc_worker/cryosparc_compute/particles.py", line 120, in read_blobs
    u_blob_paths = cache_run(u_rel_paths)
  File "/media/patrick/603A050C3A04E0C0/cryosparc_worker/cryosparc_compute/jobs/cache_v2.py", line 881, in run
    return run_with_executor(rel_sources, executor)
  File "/media/patrick/603A050C3A04E0C0/cryosparc_worker/cryosparc_compute/jobs/cache_v2.py", line 906, in run_with_executor
    drive = activate()
  File "/media/patrick/603A050C3A04E0C0/cryosparc_worker/cryosparc_compute/jobs/cache_v2.py", line 846, in activate
    CacheDrive.ACTIVE = CacheDrive(
  File "/media/patrick/603A050C3A04E0C0/cryosparc_worker/cryosparc_compute/jobs/cache_v2.py", line 464, in __init__
    self.path.mkdir(parents=True, exist_ok=True)
  File "/media/patrick/603A050C3A04E0C0/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/pathlib.py", line 1175, in mkdir
    self._accessor.mkdir(self, mode)
OSError: [Errno 22] Invalid argument: '/media/patrick/603A050C3A04E0C0/Yakup/Cryo_spark_cashe-SSD/instance_patrick-Alienware-Aurora-R15:39001'
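The failing call is a plain `Path.mkdir`, so the error can be probed independently of CryoSPARC. The sketch below (a hypothetical helper, not part of CryoSPARC) attempts to create a directory whose name contains a colon, the character present in `instance_patrick-Alienware-Aurora-R15:39001`; NTFS-backed mounts reject such names with `EINVAL` ([Errno 22]), while ext4 or xfs accept them:

```python
import errno
import tempfile
from pathlib import Path

def fs_accepts_colon(base_dir: str) -> bool:
    """Return True if the filesystem backing base_dir allows ':' in names.

    NTFS reserves ':' and fails mkdir with EINVAL, which matches the
    [Errno 22] Invalid argument seen in the traceback above.
    """
    probe = Path(base_dir) / "colon_probe:39001"
    try:
        probe.mkdir()
    except OSError as e:
        if e.errno == errno.EINVAL:
            return False
        raise
    probe.rmdir()  # clean up the probe directory
    return True

# Example: probe a temporary directory (typically ext4/tmpfs on Linux).
with tempfile.TemporaryDirectory() as d:
    print(fs_accepts_colon(d))
```

Running this with `base_dir` on the external drive should return `False` if the filesystem is the culprit.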

output of lsblk

NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    0   3.6T  0 disk 
└─sda1        8:1    0   3.6T  0 part /media/patrick/603A050C3A04E0C0
nvme0n1     259:0    0 953.9G  0 disk 
├─nvme0n1p1 259:1    0   200M  0 part /boot/efi
├─nvme0n1p2 259:2    0   128M  0 part 
├─nvme0n1p3 259:3    0 202.5G  0 part 
├─nvme0n1p4 259:4    0  92.7G  0 part /media/patrick/OS
├─nvme0n1p5 259:5    0     1G  0 part 
├─nvme0n1p6 259:6    0  18.6G  0 part 
├─nvme0n1p7 259:7    0   1.5G  0 part 
├─nvme0n1p8 259:8    0 336.8G  0 part /
└─nvme0n1p9 259:9    0 300.4G  0 part /media/patrick/DATA

output of stat -f
(base) patrick@patrick-Alienware-Aurora-R15:/media/patrick/603A050C3A04E0C0/Yakup/Talos_L120C/cryo-full/CS-yakup-cryo/J44/imported$ stat -f /media/patrick/603A050C3A04E0C0/Yakup/Cryo_spark_cashe-SSD/
File: “/media/patrick/603A050C3A04E0C0/Yakup/Cryo_spark_cashe-SSD/”
ID: 0 Namelen: 255 Type: fuseblk
Block size: 4096 Fundamental block size: 4096
Blocks: Total: 976753919 Free: 382523477 Available: 382523477
Inodes: Total: 1530323284 Free: 1530126651

Hello! Did you solve this problem? I am facing the same issue as you.

(Belated) Welcome to the forum @yakup.

We have not tested CryoSPARC with this filesystem type (`fuseblk`, typically NTFS mounted via FUSE). Note that the cache directory name includes a colon (`instance_patrick-Alienware-Aurora-R15:39001`), and NTFS does not permit `:` in file or directory names, which would explain the `[Errno 22]` from `mkdir`. More generally, the filesystem or filesystem driver may be incompatible with CryoSPARC caching and/or other CryoSPARC functions.

This configuration is discouraged because the database and cache place different demands on storage: the database requires moderate performance and high reliability (recovery is tedious), whereas the cache calls for high performance and only moderate reliability, with the caveat that frequent writes of large data volumes shorten the drive's lifespan.
If, additionally, project directories are stored on the same volume as the particle cache, no benefit is expected from caching because reads from cache would be no faster than from the project directory.
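A quick way to check whether a candidate cache path is actually backed by a different device than the project directory is to compare `st_dev` for both paths (a minimal sketch; the example paths are placeholders for your actual cache and project directories):

```python
import os

def same_filesystem(path_a: str, path_b: str) -> bool:
    """True if both paths live on the same device (same st_dev).

    If they do, caching brings no benefit: reads from the cache
    would be no faster than reads from the project directory.
    """
    return os.stat(path_a).st_dev == os.stat(path_b).st_dev

# Hypothetical example paths; substitute your own:
# same_filesystem("/ssd/cryosparc_cache", "/data/projects/P1")
```

Note that `st_dev` comparison can be fooled by bind mounts, but it is a reasonable first check.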

@Applewang Your situation may differ. Please post

  • the full error message
  • output of the command (replace /path/to/cache with the actual path to the cache)
    df -hT /path/to/cache

@wtempel
Hello @wtempel, I posted a message last night that contained my error message:
2D Classification Error question - Troubleshooting - CryoSPARC Discuss

output of df -hT /media/zenglab/新加卷/cryosparc_temp/ is

Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/nvme0n1p2 ntfs3  469G  3.7G  466G    1% /media/zenglab/新加卷

Thanks @Applewang. We have not tested CryoSPARC with cache or project directories on the ntfs filesystem. For the cache filesystem, which should be separate from the filesystem(s) for CryoSPARC projects and CryoSPARC database, you may want to try the xfs or ext4 type. We also have not tested CryoSPARC with paths that include non-ASCII characters.
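To check a candidate cache path for non-ASCII characters before configuring it, a small helper like this (illustrative only) can be used:

```python
def non_ascii_chars(path: str) -> list[str]:
    """Return any characters in the path outside the ASCII range."""
    return [c for c in path if ord(c) > 127]

# The mount point above contains non-ASCII characters:
print(non_ascii_chars("/media/zenglab/新加卷/cryosparc_temp"))  # → ['新', '加', '卷']
print(non_ascii_chars("/ssd/cryosparc_cache"))                  # → []
```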