Error: `bins` must be positive, when an integer (Patch CTF Estimation)

Hi, I have two problems in Patch CTF Estimation. Here is the log. How can I correct them? Thank you!
[CPU: 285.4 MB Avail: 504.12 GB] Child process with PID 236921 terminated unexpectedly with exit code 1.
[CPU: 288.9 MB Avail: 504.12 GB] Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 115, in cryosparc_master.cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/run.py", line 438, in cryosparc_master.cryosparc_compute.jobs.ctf_estimation.run.run
  File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/run.py", line 235, in cryosparc_master.cryosparc_compute.jobs.ctf_estimation.run.run.make_outputs
  File "/home/ce01/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/matplotlib/pyplot.py", line 3236, in hist
    return gca().hist(
  File "/home/ce01/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/matplotlib/__init__.py", line 1465, in inner
    return func(ax, *map(sanitize_sequence, args), **kwargs)
  File "/home/ce01/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/matplotlib/axes/_axes.py", line 6862, in hist
    m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
  File "<__array_function__ internals>", line 180, in histogram
  File "/home/ce01/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numpy/lib/histograms.py", line 793, in histogram
    bin_edges, uniform_bins = _get_bin_edges(a, bins, range, weights)
  File "/home/ce01/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numpy/lib/histograms.py", line 424, in _get_bin_edges
    raise ValueError('`bins` must be positive, when an integer')
ValueError: `bins` must be positive, when an integer
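For context, this particular ValueError is raised by NumPy rather than by the CTF fit itself: plt.hist() forwards its bins argument to np.histogram, which rejects any non-positive integer bin count. A minimal sketch of the failure mode (the bin-count heuristic below is hypothetical, purely to illustrate how an empty result set can end up as bins=0):

    import numpy as np

    data = np.array([])             # e.g. no exposures produced a usable CTF fit
    nbins = min(50, len(data))      # hypothetical bin-count heuristic; 0 for empty data
    try:
        np.histogram(data, bins=nbins)
    except ValueError as err:
        print(err)                  # `bins` must be positive, when an integer

In other words, the histogram error is typically a downstream symptom: something earlier in the job left nothing (or an invalid count) to plot.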


@yangtaili Please can you post the following details

  1. camera model
  2. dimensions (in pixels) and frame count of movie files
  3. pixel size (Å)
  4. Did the error happen on one of the first n exposures being processed (where n refers to the number of GPUs used in the job), or part-way through the job?

Hi, I have encountered the same problem as you. Whether I try different micrographs or reinstall CryoSPARC, I always get the same error after the “Calling CUDA init from…” log line. How did you solve this problem?

Welcome to the forum @Wanl.

Please can you post

  1. the end of the relevant log, beginning with that Calling CUDA init line.
  2. the output of this command on the CryoSPARC master:
    cryosparcm cli "get_job('P99', 'J199', 'job_type', 'version', 'instance_information', 'status',  'params_spec', 'errors_run', 'started_at')"
    
     where you replace P99, J199 with the failed job’s project and job IDs, respectively.
  3. the outputs of these commands on the CryoSPARC worker where the job failed
    uname -a
    uptime
    nvidia-smi
    # replace with actual path on your CryoSPARC installation
    /path/to/cryosparc_worker/bin/cryosparcw gpulist
    
  4. other details

I have encountered the same problem and would like to ask how to solve it.
The data was collected with the help of a partner, so I will also need to ask them about the camera model.
After motion correction the pixel size is 0.8192 Å, and the error happened at the beginning of the job.
The following is the error text:

Child process with PID 4248 terminated unexpectedly with exit code 1
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 129, in cryosparc_master.cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/run.py", line 438, in cryosparc_master.cryosparc_compute.jobs.ctf_estimation.run.run
  File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/run.py", line 235, in cryosparc_master.cryosparc_compute.jobs.ctf_estimation.run.run.make_outputs
  File "/home/lthpc/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/matplotlib/pyplot.py", line 3236, in hist
    return gca().hist(
  File "/home/lthpc/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/matplotlib/__init__.py", line 1465, in inner
    return func(ax, *map(sanitize_sequence, args), **kwargs)
  File "/home/lthpc/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/matplotlib/axes/_axes.py", line 6862, in hist
    m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
  File "<__array_function__ internals>", line 180, in histogram
  File "/home/lthpc/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numpy/lib/histograms.py", line 793, in histogram
    bin_edges, uniform_bins = _get_bin_edges(a, bins, range, weights)
  File "/home/lthpc/cryosparc/cryosparc_worker/deps/anaconda/envs/cryosparc_worker_env/lib/python3.10/site-packages/numpy/lib/histograms.py", line 424, in _get_bin_edges
    raise ValueError('`bins` must be positive, when an integer')
ValueError: `bins` must be positive, when an integer

Please post that information when you have it.

The camera model is K3, and the dimensions are 5760 × 4092 with a frame count (Z) of 32.

Thanks @pyz. Please can you also post the output of the following command, where you would replace P99, J199 with the failed Patch CTF job’s actual project and job IDs, respectively:

cryosparcm joblog P99 J199 | tail -n 40

Thanks for your attention. We have reinstalled CryoSPARC, and so far, we haven’t encountered the same issues as before. Thank you

Hi all, I am also encountering this issue on a fresh install of v4.6.2. I am importing micrographs, not movies.
Camera model: Falcon 4i
Dimensions: 4096 × 4096 pixels
Pixel size: 1.56 Å

I will send you directly the output of the joblog.

Please redact the outputs to conceal confidential information and post them on the forum as a basis for continued discussion.
Please can you also post the output of the command

head -n 1 /path/to/cryosparc_master/deps/anaconda/bin/conda

I’m involved in this installation, so I can comment on this.
This command results in the following line:
#!/usr/bin/env python

This installation is broken: ./bin/cryosparcw connect fails when not using the --nogpu option, so this install used the --nogpu option.

“cryosparcm status” shows this

(/PATH-TO-MY-CRYPOSPARC462/cryosparc_master/deps/anaconda) /PATH-TO-MY-CRYPOSPARC462/cryosparc_master$ bin/cryosparcm status
----------------------------------------------------------------------------
CryoSPARC System master node installed at
/PATH-TO-MY-CRYPOSPARC462/cryosparc_master
Current cryoSPARC version: v4.6.2
----------------------------------------------------------------------------

CryoSPARC process status:

app                              RUNNING   pid 2006665, uptime 5:03:53
app_api                          RUNNING   pid 2006747, uptime 5:03:51
app_api_dev                      STOPPED   Not started
command_core                     RUNNING   pid 2005848, uptime 5:04:13
command_rtp                      RUNNING   pid 2006218, uptime 5:04:04
command_vis                      RUNNING   pid 2006153, uptime 5:04:06
database                         RUNNING   pid 2005557, uptime 5:04:19

----------------------------------------------------------------------------
Traceback (most recent call last):
  File "/PATH-TO-MY-CRYPOSPARC462/cryosparc_master/deps/anaconda/bin/conda", line 12, in <module>
    from conda.cli import main
ModuleNotFoundError: No module named 'conda'
Warning: Could not activate conda environment; this indicates that a cryoSPARC installation is either incomplete or in progress
License is valid
----------------------------------------------------------------------------

global config variables:
export CRYOSPARC_LICENSE_ID="SKIP-SKIP"
export CRYOSPARC_MASTER_HOSTNAME="XXXXXX.ista.local"
export CRYOSPARC_DB_PATH="/PATH-TO-MY-CRYPOSPARC462/cryosparc_database"
export CRYOSPARC_BASE_PORT=57000
export CRYOSPARC_DB_CONNECTION_TIMEOUT_MS=20000
export CRYOSPARC_INSECURE=false
export CRYOSPARC_DB_ENABLE_AUTH=true
export CRYOSPARC_CLUSTER_JOB_MONITOR_INTERVAL=10
export CRYOSPARC_CLUSTER_JOB_MONITOR_MAX_RETRIES=1000000
export CRYOSPARC_PROJECT_DIR_PREFIX='CS-'
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_CLICK_WRAP=true

(/PATH-TO-MY-CRYPOSPARC462/cryosparc_master/deps/anaconda) cryoschurgrp@gpu136:/PATH-TO-MY-CRYPOSPARC462/cryosparc_master$

The installation was using a conda installation based on miniforge3/24.3.0 as well as cuda/12.2.2 (the NVIDIA driver is version 535.183.01).

This warning/error message

    from conda.cli import main
ModuleNotFoundError: No module named 'conda'

indicates that the installer might not have completed. However, we have been following the install instructions and do not see what we should have done differently.

This may indicate that /PATH-TO-MY-CRYPOSPARC462 is too long. Please see Micromamba crash issue during installation - #28 by wtempel for a suggestion.
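For anyone hitting the same symptom: a plausible mechanism (an assumption on my part, not confirmed CryoSPARC behaviour) is the Linux shebang length limit of roughly 127 bytes. When the absolute path to the bundled Python would push the shebang line past that limit, conda-style installers commonly fall back to #!/usr/bin/env python, which then resolves to a Python outside the bundled environment, so conda can no longer be imported. A quick sketch to check a prospective install prefix (the suffix below is an assumption about the on-disk layout):

    # Hypothetical helper: would the interpreter path still fit in a shebang line?
    SHEBANG_LIMIT = 127  # approximate kernel limit for the whole shebang line
    SUFFIX = "/cryosparc_master/deps/anaconda/bin/python"  # assumed layout

    def shebang_ok(prefix: str) -> bool:
        return len("#!" + prefix + SUFFIX) <= SHEBANG_LIMIT

    print(shebang_ok("/short/install/path"))   # True
    print(shebang_ok("/a" * 60))               # 120-character prefix: False

This would be consistent with the observation below that a 120-character prefix failed while a 65-character prefix worked.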

That does indeed seem to be the issue. The length of the original install path was 120 characters; after reinstalling in a path of length 65, the problems went away.


Thank you both. Unfortunately, the new installation did not resolve my issue. It also did not resolve my other open issue. For these tests I am simply taking either one movie or one micrograph, importing it, then running either Patch Motion or Patch CTF, respectively. This data has been previously processed and published, so there should be no issue with the data itself.

Below is the output from

cryosparcm joblog P1 J3 | tail -n 40
  File "/nfs/(group_path)/<cryosparcpath>cryosparc_worker/cryosparc_compute/get_gpu_info.py", line 13, in get_driver_version
    from cryosparc_compute.gpu.driver import get_version
  File "/nfs/(group_path)/cryosparcpath)/cryosparc_worker/cryosparc_compute/gpu/driver.py", line 12, in <module>
    from cuda import cuda, cudart, nvrtc
ModuleNotFoundError: No module named 'cuda'
MONITOR PROCESS PID 3210403
***************************************************************
Transparent hugepages setting: [always] madvise never

Running job on hostname %s <hostname>
Allocated Resources :  {'fixed': {'SSD': False}, 'hostname': '<hostname>', 'lane': '<hostname>', 'lane_type': 'cluster', 'license': True, 'licenses_acquired': 1, 'slots': {'CPU': [0, 1], 'GPU': [0], 'RAM': [0]}, 'target': {'cache_path': '/ssdpool/(path)/v4.4_PORT', 'cache_quota_mb': None, 'cache_reserve_mb': 10000, 'custom_var_names': [], 'custom_vars': {}, 'desc': None, 'hostname': '<hostname>', 'lane': '<hostname>', 'name': '<hostname>', 'qdel_cmd_tpl': 'scancel {{ cluster_job_id }}', 'qinfo_cmd_tpl': 'sinfo', 'qstat_cmd_tpl': 'squeue -j {{ cluster_job_id }}', 'qstat_code_cmd_tpl': None, 'qsub_cmd_tpl': 'sbatch {{ script_path_abs }}', 'script_tpl': '#!/usr/bin/env bash\n#### cryoSPARC cluster submission script template for SLURM\n## Available variables:\n## {{ run_cmd }}            - the complete command string to run the job\n## {{ num_cpu }}            - the number of CPUs needed\n## {{ num_gpu }}            - the number of GPUs needed. \n##                            Note: the code will use this many GPUs starting from dev id 0\n##                                  the cluster scheduler or this script have the responsibility\n##                                  of setting CUDA_VISIBLE_DEVICES so that the job code ends up\n##                                  using the correct cluster-allocated GPUs.\n## {{ ram_gb }}             - the amount of RAM needed in GB\n## {{ job_dir_abs }}        - absolute path to the job directory\n## {{ project_dir_abs }}    - absolute path to the project dir\n## {{ job_log_path_abs }}   - absolute path to the log file for the job\n## {{ worker_bin_path }}    - absolute path to the cryosparc worker command\n## {{ run_args }}           - arguments to be passed to cryosparcw run\n## {{ project_uid }}        - uid of the project\n## {{ job_uid }}            - uid of the job\n## {{ job_creator }}        - name of the user that created the job (may contain spaces)\n## {{ cryosparc_username }} - cryosparc username of the user that created the job (usually an email)\n## {{ job_type }}           - CryoSPARC job type\n##\n## What follows is a simple SLURM script:\n\n#SBATCH --job-name cryosparc_{{ project_uid }}_{{ job_uid }}\n#SBATCH -n {{ num_cpu }}\n#SBATCH --gres=gpu:{{ num_gpu }}\n#SBATCH --ntasks=2\n#SBATCH --partition=gpu\n#SBATCH --mem=40000MB\n#SBATCH -o /nfs/(group_path)/cryosparc_slurm_outputs/output_{{ project_uid }}_{{ job_uid }}.txt\n#SBATCH -e /nfs/(group_path)/cryosparc_slurm_outputs/error_{{ project_uid }}_{{ job_uid }}.txt\n#SBATCH --exclude=gpu280,gpu279,gpu278,gpu281,gpu139,gpu227,gpu228,gpu138,gpu150,gpu148,gpu145\n#SBATCH --time=240:00:00\n#SBATCH --constraint=bookworm # debian12\n\necho $available_devs\necho $CUDA_HOME\necho "$(hostname)"\necho $SLURM_TMPDIR\n\n/usr/bin/nvidia-smi\n\nmodule list\n\nexport CRYOSPARC_SSD_PATH="${SLURM_TMPDIR}"\n\n{{ run_cmd }}\n\n', 'send_cmd_tpl': '{{ command }}', 'title': '<hostname>', 'tpl_vars': ['cluster_job_id', 'ram_gb', 'project_dir_abs', 'run_args', 'run_cmd', 'command', 'job_log_path_abs', 'worker_bin_path', 'cryosparc_username', 'job_dir_abs', 'num_cpu', 'project_uid', 'job_uid', 'num_gpu', 'job_creator', 'job_type'], 'type': 'cluster', 'worker_bin_path': '/nfs/(group_path)/<cryosparcpath>cryosparc_worker/bin/cryosparcw'}}
Process Process-1:
Traceback (most recent call last):
  File "/nfs/(group_path)/CS38df2812/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/nfs/(group_path)/CS38df2812/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/nfs/(group_path)/<cryosparcpath>cryosparc_worker/cryosparc_compute/jobs/pipeline.py", line 199, in process_work_simple
    process_setup(proc_idx) # do any setup you want on a per-process basis
  File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/run.py", line 75, in cryosparc_master.cryosparc_compute.jobs.ctf_estimation.run.run.process_setup
  File "cryosparc_master/cryosparc_compute/gpu/gpucore.py", line 21, in init cryosparc_master.cryosparc_compute.gpu.gpucore
ModuleNotFoundError: No module named 'cuda'
**** handle exception rc
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 129, in cryosparc_master.cryosparc_compute.run.main
  File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/run.py", line 438, in cryosparc_master.cryosparc_compute.jobs.ctf_estimation.run.run
  File "cryosparc_master/cryosparc_compute/jobs/ctf_estimation/run.py", line 235, in cryosparc_master.cryosparc_compute.jobs.ctf_estimation.run.run.make_outputs
  File "/nfs/(group_path)/CS38df2812/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/matplotlib/pyplot.py", line 3236, in hist
    return gca().hist(
  File "/nfs/(group_path)/CS38df2812/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/matplotlib/__init__.py", line 1465, in inner
    return func(ax, *map(sanitize_sequence, args), **kwargs)
  File "/nfs/(group_path)/CS38df2812/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/matplotlib/axes/_axes.py", line 6862, in hist
    m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
  File "<__array_function__ internals>", line 180, in histogram
  File "/nfs/(group_path)/CS38df2812/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/numpy/lib/histograms.py", line 793, in histogram
    bin_edges, uniform_bins = _get_bin_edges(a, bins, range, weights)
  File "/nfs/(group_path)/CS38df2812/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.10/site-packages/numpy/lib/histograms.py", line 424, in _get_bin_edges
    raise ValueError('`bins` must be positive, when an integer')
ValueError: `bins` must be positive, when an integer
set status to failed
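Worth noting: in this joblog the bins error again looks like a downstream symptom. The worker processes die during setup with ModuleNotFoundError: No module named 'cuda' (the cuda-python bindings imported by cryosparc_compute/gpu/driver.py), so presumably no exposures are processed and the final plotting step is handed an empty result set, hitting the same histogram failure as above. A quick, hedged way to test whether the worker environment can import those bindings at all (run it with the worker environment's own Python interpreter; the exact invocation depends on your installation):

    # Illustrative check only: attempt the same import that fails in the log above.
    try:
        from cuda import cuda, cudart, nvrtc   # cuda-python bindings
    except ModuleNotFoundError as err:
        print("cuda-python bindings not importable:", err)
    else:
        print("cuda-python bindings import OK")

If that import fails, the worker environment's GPU dependencies are likely what needs fixing; the histogram error should disappear once exposures are actually processed.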

@orangeboomerang Please can you post the outputs of these commands (in a fresh shell):

eval $(cryosparcm env) # no output expected
cd $CRYOSPARC_ROOT_DIR # no output expected
pwd | wc -c
pwd -P | wc -c
head -n 1 deps/anaconda/bin/conda

Hi,

output is below:

66
66
#!/<full_path>/cryosparc_master/deps/anaconda/bin/python

thanks

Thanks @orangeboomerang. Please exit the shell after running these commands (to avoid inadvertently running general commands in the modified environment).
Please can you also run the following commands in a fresh shell on the worker where the job failed, and post their outputs:

eval $(/path/to/cryosparc_worker/bin/cryosparcw env)
cd $CRYOSPARC_ROOT_DIR
pwd | wc -c
pwd -P | wc -c
basename $NUMBA_CUDA_INCLUDE_PATH
echo $CRYOSPARC_CONDA_ENV
head -n 1 $CONDA_EXE
basename $(pwd)
exit

the output is

66
66
include
cryosparc_worker_env
#!/path/to/cryosparc_worker/deps/anaconda/bin/python
cryosparc_worker


To confirm I was not doing something silly, I had another experienced person in the lab try a few jobs (Patch Motion and Patch CTF, as I did), and he encountered the same issues. So I think we can rule out user error at this point.