I updated cryoSPARC from version 4.0 to version 4.1. After updating the software, when I tried to start a new import job to import a particle stack, it reported an error:
However, I can still run reconstructions using previously imported particles, so the compute node itself is fine.
By the way, after I refresh the page and restart the job, the particles import normally. This behavior is completely reproducible: the first import attempt always fails, and refreshing and restarting always works. I am confused about how to fix this.
eval $(<path-to-cryosparc_worker>/bin/cryosparcw env):
I executed eval $(/Share/THUDATA/Softwares/cryosparc/cryosparc2_worker/bin/cryosparcw env) and it produced no output.
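As far as I understand, no output is expected here, since the eval just exports environment variables into the current shell. To double-check that the worker environment really took effect, the variables can be listed along these lines (a minimal sanity check of my own, not from the cryoSPARC docs):

# List the cryoSPARC-related variables exported by cryosparcw env
env | grep '^CRYOSPARC_'

# The worker's bundled Python should now come first on PATH
# (my assumption about how cryosparcw env adjusts PATH)
which python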
echo $CRYOSPARC_CUDA_PATH:
output: /Share/THUDATA/Softwares/cuda-10.2
${CRYOSPARC_CUDA_PATH}/bin/nvcc --version
output:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_19:24:38_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89
uname -a output:
Linux gpu01 3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
free -g output:
              total        used        free      shared  buff/cache   available
Mem:            376          60         188           1         126         312
Swap:            47           0          47
nvidia-smi output:
Sat Dec 17 15:32:00 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01    Driver Version: 440.33.01    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  Off  | 00000000:1A:00.0 Off |                    0 |
| N/A   55C    P0    84W / 300W |  32016MiB / 32510MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-SXM2...  Off  | 00000000:1C:00.0 Off |                    0 |
| N/A   46C    P0    80W / 300W |  28490MiB / 32510MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla V100-SXM2...  Off  | 00000000:1D:00.0 Off |                    0 |
| N/A   43C    P0    76W / 300W |  30514MiB / 32510MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla V100-SXM2...  Off  | 00000000:1E:00.0 Off |                    0 |
| N/A   51C    P0    82W / 300W |  30234MiB / 32510MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     16739      C   ...naconda3/envs/ZDH_SSK_torch/bin/python3 32003MiB |
|    1     16739      C   ...naconda3/envs/ZDH_SSK_torch/bin/python3 28477MiB |
|    2     16739      C   ...naconda3/envs/ZDH_SSK_torch/bin/python3 30501MiB |
|    3     16739      C   ...naconda3/envs/ZDH_SSK_torch/bin/python3 30221MiB |
+-----------------------------------------------------------------------------+
By the way, the GPUs on the node where I ran the commands above are currently occupied by another running program, which is why nvidia-smi shows high memory usage and utilization; that is not an error.