Topaz Train - TypeError: concat() takes 1 positional argument but 2 were given

I support a group of CryoSPARC users who have recently run into an issue with Topaz Train. Here is the output of the failed Topaz Train job.

Micrograph preprocessing command complete.

[CPU:  228.9 MB]

Starting particle pick preprocessing by running command /users/r/a/rcat/miniconda3/envs/topaz/bin/topaz convert --down-scale 4 --threshold 0 -o /netfiles/rcat_lab/cryosparc/cs-data/J185/topaz_particles_processed.txt /netfiles/rcat_lab/cryosparc/cs-data/J185/topaz_particles_raw.txt

[CPU:  228.9 MB]

Particle pick preprocessing command complete.

[CPU:  228.9 MB]

Preprocessing done in 301.530s.
[CPU:  228.9 MB]

--------------------------------------------------------------
[CPU:  228.9 MB]

Starting train-test splitting...

[CPU:  228.9 MB]

Starting dataset splitting by running command /users/r/a/rcat/miniconda3/envs/topaz/bin/topaz train_test_split --number 17 --seed 541036979 --image-dir /netfiles/rcat_lab/cryosparc/cs-data/J185/preprocessed /netfiles/rcat_lab/cryosparc/cs-data/J185/topaz_particles_processed.txt

[CPU:  228.9 MB]

# splitting 85 micrographs with 434 labeled particles into 68 train and 17 test micrographs
[CPU:  228.9 MB]

Traceback (most recent call last):
[CPU:  228.9 MB]

File "/users/r/a/rcat/miniconda3/envs/topaz/bin/topaz", line 8, in <module>
[CPU:  228.9 MB]

sys.exit(main())
[CPU:  228.9 MB]

File "/users/r/a/rcat/miniconda3/envs/topaz/lib/python3.8/site-packages/topaz/main.py", line 148, in main
[CPU:  228.9 MB]

args.func(args)
[CPU:  228.9 MB]

File "/users/r/a/rcat/miniconda3/envs/topaz/lib/python3.8/site-packages/topaz/commands/train_test_split.py", line 108, in main
[CPU:  228.9 MB]

targets_train = pd.concat(groups_train, 0)
[CPU:  228.9 MB]

TypeError: concat() takes 1 positional argument but 2 were given
[CPU:  228.9 MB]

Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 96, in cryosparc_compute.run.main
  File "/gpfs2/scratch/rcat/cryosparc/cryosparc_worker/cryosparc_compute/jobs/topaz/run_topaz.py", line 307, in run_topaz_wrapper_train
    utils.run_process(split_command)
  File "/gpfs2/scratch/rcat/cryosparc/cryosparc_worker/cryosparc_compute/jobs/topaz/topaz_utils.py", line 98, in run_process
    assert process.returncode == 0, f"Subprocess exited with status {process.returncode} ({str_command})"
AssertionError: Subprocess exited with status 1 (/users/r/a/rcat/miniconda3/envs/topaz/bin/topaz train_test_split --number 17 --seed 541036979 --image-dir /netfiles/rcat_lab/cryosparc/cs-data/J185/preprocessed /netfiles/rcat_lab/cryosparc/cs-data/J185/topaz_particles_processed.txt)
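For context on the exception itself: line 108 of topaz 0.2.5's train_test_split.py calls pd.concat(groups_train, 0), passing the axis positionally. pandas 2.0 made every concat() argument after objs keyword-only, so that call raises exactly this TypeError; pandas 1.x still accepts it. A minimal sketch (the DataFrames here are stand-ins for topaz's per-micrograph particle tables, not its real data):

```python
import pandas as pd

# Stand-ins for topaz's per-micrograph particle coordinate tables.
groups_train = [
    pd.DataFrame({"image_name": ["mic_a"], "x_coord": [10], "y_coord": [20]}),
    pd.DataFrame({"image_name": ["mic_b"], "x_coord": [30], "y_coord": [40]}),
]

# pd.concat(groups_train, 0) works on pandas 1.x but raises
# "TypeError: concat() takes 1 positional argument but 2 were given"
# on pandas >= 2.0, where axis became keyword-only.
targets_train = pd.concat(groups_train, axis=0)  # works on both
print(len(targets_train))
```

This tracks the pandas version in the topaz environment rather than PyTorch: the pip freeze below shows pandas 2.0.1, while the working environments later in this thread all carry pandas 1.x.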

I should mention that this user group had previously run into an issue with Topaz Denoise jobs; that issue turned out to be related to PyTorch. The Topaz installation instructions on the CryoSPARC site suggest using Python 3.6 and installing via conda. When following those steps, PyTorch was built against a version of CUDA that was too old for our A100 cards (these cards need CUDA 11+).

I created a new conda environment using Python 3.8 (hoping to get a newer version of PyTorch). I then had to install everything via pip rather than conda install, as I was only getting the CPU version of PyTorch when installing via conda, even when specifying cudatoolkit.

Here is the output of pip freeze:
certifi==2022.12.7
charset-normalizer==3.1.0
future==0.18.3
idna==3.4
joblib==1.2.0
numpy==1.24.3
pandas==2.0.1
Pillow==9.5.0
python-dateutil==2.8.2
pytz==2023.3
requests==2.28.2
scikit-learn==1.2.2
scipy==1.10.1
six==1.16.0
threadpoolctl==3.1.0
topaz-em==0.2.5
torch==1.13.1+cu116
torchaudio==0.13.1+cu116
torchvision==0.14.1+cu116
typing_extensions==4.5.0
tzdata==2023.3
urllib3==1.26.15

That solved the issue we were seeing with Denoise jobs “failing successfully” and leaving the “denoised_micrographs” directory empty. My concern is that using a newer version of PyTorch may have introduced an issue with Topaz Train jobs. I don’t actually use the software much myself, so I’m unsure where to look next.

Any help or advice on how to get this environment working for Topaz jobs would be helpful. Thank you in advance to the community here.

-Travis

Not directly related to your question, but I have run into this issue with lots of PyTorch and TensorFlow software, and found that conda installs the CPU version when it doesn’t detect any CUDA-capable devices. We are using a cluster with a shared filesystem, so I SSH’d to one of the GPU nodes and installed the conda environment for Topaz there.
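An alternative to installing from a GPU node, in case it helps: recent conda versions expose the detected CUDA level as a virtual package (`__cuda`) that can be overridden per command, so the solver selects GPU builds even on a CPU-only login node. A sketch, assuming CUDA 11.3 drivers on the compute nodes (adjust the version to your cluster):

```shell
# Tell conda's solver to behave as if CUDA 11.3 were present on this node.
CONDA_OVERRIDE_CUDA="11.3" conda install -c pytorch pytorch torchvision cudatoolkit=11.3
```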

Here is my conda environment that is working properly with Topaz:

_libgcc_mutex             0.1                 conda_forge    conda-forge
_openmp_mutex             4.5                       2_gnu    conda-forge
blas                      1.0                         mkl    conda-forge
bzip2                     1.0.8                h7f98852_4    conda-forge
ca-certificates           2022.12.7            ha878542_0    conda-forge
certifi                   2021.5.30        py36h5fab9bb_0    conda-forge
cudatoolkit               11.1.1              ha002fc5_11    conda-forge
dataclasses               0.8                pyh787bdff_2    conda-forge
ffmpeg                    4.3                  hf484d3e_0    pytorch
freetype                  2.12.1               hca18f0e_1    conda-forge
future                    0.18.2           py36h5fab9bb_3    conda-forge
gmp                       6.2.1                h58526e2_0    conda-forge
gnutls                    3.6.13               h85f3911_1    conda-forge
icu                       70.1                 h27087fc_0    conda-forge
intel-openmp              2023.0.0         h9e868ea_25371  
joblib                    1.2.0              pyhd8ed1ab_0    conda-forge
jpeg                      9e                   h0b41bf4_3    conda-forge
lame                      3.100             h166bdaf_1003    conda-forge
lcms2                     2.12                 hddcbb42_0    conda-forge
ld_impl_linux-64          2.40                 h41732ed_0    conda-forge
lerc                      3.0                  h9c3ff4c_0    conda-forge
libblas                   3.9.0           1_h86c2bf4_netlib    conda-forge
libcblas                  3.9.0           5_h92ddd45_netlib    conda-forge
libdeflate                1.10                 h7f98852_0    conda-forge
libffi                    3.4.2                h7f98852_5    conda-forge
libgcc-ng                 12.2.0              h65d4601_19    conda-forge
libgfortran-ng            12.2.0              h69a702a_19    conda-forge
libgfortran5              12.2.0              h337968e_19    conda-forge
libgomp                   12.2.0              h65d4601_19    conda-forge
libhwloc                  2.9.0                hd6dc26d_0    conda-forge
libiconv                  1.17                 h166bdaf_0    conda-forge
liblapack                 3.9.0           5_h92ddd45_netlib    conda-forge
libnsl                    2.0.0                h7f98852_0    conda-forge
libopenblas               0.3.21          pthreads_h78a6416_3    conda-forge
libpng                    1.6.39               h753d276_0    conda-forge
libsqlite                 3.40.0               h753d276_0    conda-forge
libstdcxx-ng              12.2.0              h46fd767_19    conda-forge
libtiff                   4.3.0                h0fcbabc_4    conda-forge
libuv                     1.44.2               h166bdaf_0    conda-forge
libwebp-base              1.2.4                h166bdaf_0    conda-forge
libxml2                   2.10.3               h7463322_0    conda-forge
libzlib                   1.2.13               h166bdaf_4    conda-forge
mkl                       2023.0.0         h6d00ec8_25399  
ncurses                   6.3                  h27087fc_1    conda-forge
nettle                    3.6                  he412f7d_0    conda-forge
ninja                     1.11.1               h924138e_0    conda-forge
numpy                     1.19.5           py36hfc0c790_2    conda-forge
olefile                   0.46               pyh9f0ad1d_1    conda-forge
openh264                  2.1.1                h780b84a_0    conda-forge
openjpeg                  2.5.0                h7d73246_0    conda-forge
openssl                   1.1.1t               h0b41bf4_0    conda-forge
pandas                    1.1.5            py36h284efc9_0    conda-forge
pillow                    8.3.2            py36h676a545_0    conda-forge
pip                       20.0.2                   py36_1    conda-forge
python                    3.6.15          hb7a2778_0_cpython    conda-forge
python-dateutil           2.8.2              pyhd8ed1ab_0    conda-forge
python_abi                3.6                     2_cp36m    conda-forge
pytorch                   1.10.2          py3.6_cuda11.1_cudnn8.0.5_0    pytorch
pytorch-mutex             1.0                        cuda    pytorch
pytz                      2022.7.1           pyhd8ed1ab_0    conda-forge
readline                  8.1.2                h0f457ee_0    conda-forge
scikit-learn              0.24.2           py36hc89565f_1    conda-forge
scipy                     1.5.3            py36h81d768a_1    conda-forge
setuptools                49.6.0           py36h5fab9bb_3    conda-forge
six                       1.16.0             pyh6c4a22f_0    conda-forge
sqlite                    3.40.0               h4ff8645_0    conda-forge
tbb                       2021.8.0             hf52228f_0    conda-forge
threadpoolctl             3.1.0              pyh8a188c0_0    conda-forge
tk                        8.6.12               h27826a3_0    conda-forge
topaz                     0.2.5                      py_0    tbepler
torchvision               0.11.3               py36_cu111    pytorch
typing_extensions         4.1.1              pyha770c72_0    conda-forge
wheel                     0.34.2                   py36_0    conda-forge
xz                        5.2.6                h166bdaf_0    conda-forge
zlib                      1.2.13               h166bdaf_4    conda-forge
zstd                      1.5.2                h3eb15da_6    conda-forge

This is working for us with A100s and A40s on CUDA 11.3.

When I made the conda environment, I believe I pinned Python 3.6, and then ran:

conda install numpy pandas scikit-learn
conda install -c pytorch pytorch torchvision

For use with CryoSPARC, we recommend “wrapping” the topaz command in a shell script as described here for better control over the command’s environment.

What were the Version and Build of the pytorch package so installed? 1.10.2 and py3.6_cuda11.3_cudnn8.2.0_0, respectively?

Hi,

I’m running into the same issue with topaz 0.2.5a. I just moved to a lab that manages their cluster via SBGrid. I don’t have any control over how topaz was installed, but I know it was not installed through conda.

What could I do to resolve the error? (See below.)

[CPU:  233.6 MB]
Starting Topaz process using version 0.2.5a...
[CPU:  233.6 MB]
Random seed used is 58879618
[CPU:  233.6 MB]
--------------------------------------------------------------
[CPU:  233.6 MB]
Starting preprocessing...

[CPU:  233.6 MB]
Starting micrograph preprocessing by running command /programs/x86_64-linux/system/sbgrid_bin/topaz preprocess --scale 4 --niters 200 --num-workers 8 -o /data/work/kmartin/cryosparc/CS-XX/J63/preprocessed [10 MICROGRAPH PATHS EXCLUDED FOR LEGIBILITY]

[CPU:  233.6 MB]
Preprocessing over 2 processes...
[CPU:  233.8 MB]
Inverting negative staining...
[CPU:  233.9 MB]
Inverting negative staining complete.

[CPU:  233.9 MB]
Micrograph preprocessing command complete.

[CPU:  233.9 MB]
Starting particle pick preprocessing by running command /programs/x86_64-linux/system/sbgrid_bin/topaz convert --down-scale 4 --threshold 0 -o /data/work/kmartin/cryosparc/CS-XX/J63/topaz_particles_processed.txt /data/work/kmartin/cryosparc/CS-XX/J63/topaz_particles_raw.txt

[CPU:  233.9 MB]
Particle pick preprocessing command complete.

[CPU:  233.9 MB]
Preprocessing done in 83.719s.
[CPU:  233.9 MB]
--------------------------------------------------------------
[CPU:  233.9 MB]
Starting train-test splitting...

[CPU:  233.9 MB]
Starting dataset splitting by running command /programs/x86_64-linux/system/sbgrid_bin/topaz train_test_split --number 2 --seed 58879618 --image-dir /data/work/kmartin/cryosparc/CS-XX/J63/preprocessed /data/work/kmartin/cryosparc/CS-XX/J63/topaz_particles_processed.txt

[CPU:  233.9 MB]
# splitting 10 micrographs with 2060 labeled particles into 8 train and 2 test micrographs
[CPU:  233.9 MB]
Traceback (most recent call last):
[CPU:  233.9 MB]
File "/programs/x86_64-linux/topaz/0.2.5_cu11.2/bin/topaz", line 33, in <module>
[CPU:  233.9 MB]
sys.exit(load_entry_point('topaz-em==0.2.5', 'console_scripts', 'topaz')())
[CPU:  233.9 MB]
File "/programs/x86_64-linux/topaz/0.2.5_cu11.2/lib/python3.9/site-packages/topaz/main.py", line 148, in main
[CPU:  233.9 MB]
args.func(args)
[CPU:  233.9 MB]
File "/programs/x86_64-linux/topaz/0.2.5_cu11.2/lib/python3.9/site-packages/topaz/commands/train_test_split.py", line 108, in main
[CPU:  233.9 MB]
targets_train = pd.concat(groups_train, 0)
[CPU:  233.9 MB]
TypeError: concat() takes 1 positional argument but 2 were given
[CPU:  233.9 MB]
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 96, in cryosparc_compute.run.main
  File "/net/cemaster/data/software/cryoSPARC/V2/cryosparc2_worker/cryosparc_compute/jobs/topaz/run_topaz.py", line 307, in run_topaz_wrapper_train
    utils.run_process(split_command)
  File "/net/cemaster/data/software/cryoSPARC/V2/cryosparc2_worker/cryosparc_compute/jobs/topaz/topaz_utils.py", line 98, in run_process
    assert process.returncode == 0, f"Subprocess exited with status {process.returncode} ({str_command})"
AssertionError: Subprocess exited with status 1 (/programs/x86_64-linux/system/sbgrid_bin/topaz train_test_split --number 2 --seed 58879618 --image-dir /data/work/kmartin/cryosparc/CS-XX/J63/preprocessed /data/work/kmartin/cryosparc/CS-XX/J63/topaz_particles_processed.txt)

I’m totally lost. When using a conda environment, I never had this kind of problem.
Any help will be welcome…

Thank you all

Kevin

Welcome to the forum @KevinM.
Did you try

  1. installing an additional copy of topaz into a fresh conda environment
  2. creating a wrapper script around the conda-based topaz installation
  3. pointing Path to Topaz executable to the wrapper script

?

Hey @wtempel, thanks for your response.
The problem is that I have no administration privilege on that cluster.
Topaz does not seem to work in a conda environment: “conda activate topaz” returns “conda: command not found”.

I created the wrapper script and pointed Path to Topaz executable at it, but since conda is not working, I’m kind of stuck.

You would need to install your own copy of conda software, like miniforge, then create a new conda environment as described in the topaz repository. These steps do not require admin privileges: conda software and topaz can be installed in your home directory.
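For completeness, a minimal sketch of such a wrapper, assuming a miniforge installation in your home directory and a conda environment named topaz (both are assumptions; adjust the paths to your setup):

```shell
# Write a hypothetical wrapper script that activates the topaz conda env
# before delegating to the topaz binary; CryoSPARC's "Path to Topaz
# executable" setting would then point at this file.
cat > "$HOME/topaz_wrapper.sh" <<'EOF'
#!/usr/bin/env bash
source "$HOME/miniforge3/etc/profile.d/conda.sh"
conda activate topaz
exec topaz "$@"
EOF
chmod +x "$HOME/topaz_wrapper.sh"
```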

Hi,

I finally ended up using topaz 0.2.4 instead of 0.2.5a. It works now.

I had the same issue and had already wrapped the command in a script:
#!/usr/bin/env bash
if command -v conda > /dev/null 2>&1; then
    conda deactivate > /dev/null 2>&1 || true  # ignore any errors
    conda deactivate > /dev/null 2>&1 || true  # ignore any errors
fi
unset _CE_CONDA
unset CONDA_DEFAULT_ENV
unset CONDA_EXE
unset CONDA_PREFIX
unset CONDA_PROMPT_MODIFIER
unset CONDA_PYTHON_EXE
unset CONDA_SHLVL
unset PYTHONPATH
unset LD_PRELOAD
unset LD_LIBRARY_PATH

source "$HOME/anaconda3/etc/profile.d/conda.sh"

conda activate topaz

exec /home/user/anaconda3/bin/topaz "$@"

conda environments:

base * /home/user/anaconda3
relion-5.0 /home/user/anaconda3/envs/relion-5.0
scipion3 /home/user/anaconda3/envs/scipion3
topaz /home/user/anaconda3/envs/topaz

(base) user@vivaldi:~$ which topaz
/home/user/anaconda3/bin/topaz

Welcome to the forum @V3eneno.
Please can you post the log entries like in Topaz Train - TypeError: concat() takes 1 positional argument but 2 were given - #4 by KevinM
so that the

  • full topaz command
  • the code file paths and line numbers where the error occurred

are shown.

Turns out my old topaz env was not clean; it was fixed after I removed the env and installed it again.


Thanks for the update @V3eneno. For the benefit of users who also experienced this error, like under Topaz TypeError: concat() takes 1 positional argument but 2 were given, please can you explain in which way your topaz environment was broken, and post

  • the full commands you used to
    • create the environment
    • install topaz
  • the version of python in that environment:
    $(dirname $(which topaz))/python -V
    

At first, topaz was unable to call CUDA. I reinstalled without cleaning the env, and that is when the TypeError appeared. Then I removed the whole env and created a new one:
conda env remove -n topaz
conda create -n topaz
conda install tbepler::topaz
topaz 0.2.5, python 3.6.13

Hello,
I am having similar issues with topaz train:
error:
[CPU: 256.9 MB Avail: 504.40 GB]
Traceback (most recent call last):

[CPU: 256.9 MB Avail: 504.40 GB]
File "/usr/local/anaconda3/envs/topaz/bin/topaz", line 33, in <module>

[CPU: 256.9 MB Avail: 504.40 GB]
sys.exit(load_entry_point('topaz-em==0.2.5', 'console_scripts', 'topaz')())

[CPU: 256.9 MB Avail: 504.40 GB]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

[CPU: 256.9 MB Avail: 504.40 GB]
File "/usr/local/anaconda3/envs/topaz/lib/python3.11/site-packages/topaz/main.py", line 148, in main

[CPU: 256.9 MB Avail: 504.40 GB]
args.func(args)

[CPU: 256.9 MB Avail: 504.40 GB]
File "/usr/local/anaconda3/envs/topaz/lib/python3.11/site-packages/topaz/commands/train_test_split.py", line 108, in main

[CPU: 256.9 MB Avail: 504.40 GB]
targets_train = pd.concat(groups_train, 0)

[CPU: 256.9 MB Avail: 504.40 GB]
^^^^^^^^^^^^^^^^^^^^^^^^^^

[CPU: 256.9 MB Avail: 504.40 GB]
TypeError: concat() takes 1 positional argument but 2 were given

[CPU: 256.9 MB Avail: 504.57 GB]
Traceback (most recent call last):
  File "cryosparc_master/cryosparc_compute/run.py", line 96, in cryosparc_compute.run.main
  File "/home/yeagerlab/cryosparc/cryosparc_worker/cryosparc_compute/jobs/topaz/run_topaz.py", line 307, in run_topaz_wrapper_train
    utils.run_process(split_command)
  File "/home/yeagerlab/cryosparc/cryosparc_worker/cryosparc_compute/jobs/topaz/topaz_utils.py", line 98, in run_process
    assert process.returncode == 0, f"Subprocess exited with status {process.returncode} ({str_command})"
AssertionError: Subprocess exited with status 1

We installed into a fresh conda environment using the commands:
conda create -n topaz python=3.11.5
conda activate topaz
conda install topaz cudatoolkit=11.8 -c tbepler -c pytorch
The system-wide version of CUDA is 11.8 and the GPUs are NVIDIA A5000s (running CryoSPARC version 4).
The pytorch that gets installed is 2.2.0 cpu_py311h6d93b4c_0.
If the topaz environment is instead created with python 3.6, the pytorch that gets installed is 1.10.2 py3.6_cpu_0.
Both installations give the same error mentioned in this thread.

We would greatly appreciate any suggestions on how to fix it.
IK

What motivated selecting python 3.11.5 instead of the recommended python=3.6 (as of February 2024)?

You may want to try installation

  • in a python 3.6 environment. The python version can be specified during creation of the conda environment, and does not need to match the version of any “system” python
  • omitting the cudatoolkit=11.8 specification and “letting conda decide” on a suitable version of the toolkit, which may well differ from the versions of toolkits already present on the system.

Thank you for your quick reply.

I installed topaz with python 3.6 and left the cudatoolkit version unspecified.
The end result was that cudatoolkit 11.8 was installed (as before), so unfortunately this did not fix the problem.

It seems that no matter how I install topaz with conda, an older or newer version of pytorch gets installed that is built for CPU and not GPU. Could this be the source of my problems?

Please can you

  • post the output of the command
    conda list
    
    from inside the current conda environment (for future reference)
  • try creating a new environment
    conda deactivate
    conda create -n topaz_test2 python=3.6
    conda activate topaz_test2
    conda install topaz pytorch=1.10.2=py3.6_cuda11.3_cudnn8.2.0_0 -c tbepler -c pytorch
    

Does this help?

Thank you for your help.
I followed your suggestions and now topaz is working fine.

Here is the list of packages for old (non-working) and new (working) installation:

Packages for our old, non-working installation with python3.11
(conda install topaz cudatoolkit=11.8 -c tbepler -c pytorch)

Name Version Build Channel

_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
blas 1.0 mkl
bottleneck 1.3.7 py311hf4808d0_0
brotli-python 1.0.9 py311h6a678d5_7
bzip2 1.0.8 h7b6447c_0
ca-certificates 2023.12.12 h06a4308_0
certifi 2024.2.2 py311h06a4308_0
cffi 1.16.0 py311h5eee18b_0
charset-normalizer 2.0.4 pyhd3eb1b0_0
cudatoolkit 11.8.0 h6a678d5_0
filelock 3.13.1 py311h06a4308_0
freetype 2.12.1 h4a9f257_0
fsspec 2023.10.0 py311h06a4308_0
future 0.18.3 py311h06a4308_0
gmp 6.2.1 h295c915_3
gmpy2 2.1.2 py311hc9b5ff0_0
idna 3.4 py311h06a4308_0
intel-openmp 2023.1.0 hdb19cb5_46306
jinja2 3.1.3 py311h06a4308_0
joblib 1.2.0 py311h06a4308_0
jpeg 9e h5eee18b_1
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
lerc 3.0 h295c915_0
libdeflate 1.17 h5eee18b_1
libffi 3.4.4 h6a678d5_0
libgcc-ng 11.2.0 h1234567_1
libgfortran-ng 11.2.0 h00389a5_1
libgfortran5 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libpng 1.6.39 h5eee18b_0
libprotobuf 3.20.3 he621ea3_0
libstdcxx-ng 11.2.0 h1234567_1
libtiff 4.5.1 h6a678d5_0
libuuid 1.41.5 h5eee18b_0
libwebp-base 1.3.2 h5eee18b_0
lz4-c 1.9.4 h6a678d5_0
markupsafe 2.1.3 py311h5eee18b_0
mkl 2023.1.0 h213fc3f_46344
mkl-service 2.4.0 py311h5eee18b_1
mkl_fft 1.3.8 py311h5eee18b_0
mkl_random 1.2.4 py311hdb19cb5_0
mpc 1.1.0 h10f8cd9_1
mpfr 4.0.2 hb69a4c5_1
mpmath 1.3.0 py311h06a4308_0
ncurses 6.4 h6a678d5_0
networkx 3.1 py311h06a4308_0
ninja 1.10.2 h06a4308_5
ninja-base 1.10.2 hd09550d_5
numexpr 2.8.7 py311h65dcdc2_0
numpy 1.26.3 py311h08b1b3b_0
numpy-base 1.26.3 py311hf175353_0
openjpeg 2.4.0 h3ad879b_0
openssl 3.0.13 h7f8727e_0
pandas 2.1.4 py311ha02d727_0
pillow 10.2.0 py311h5eee18b_0
pip 23.3.1 py311h06a4308_0
pycparser 2.21 pyhd3eb1b0_0
pysocks 1.7.1 py311h06a4308_0
python 3.11.5 h955ad1f_0
python-dateutil 2.8.2 pyhd3eb1b0_0
python-tzdata 2023.3 pyhd3eb1b0_0
pytorch 2.2.0 cpu_py311h6d93b4c_0
pytz 2023.3.post1 py311h06a4308_0
readline 8.2 h5eee18b_0
requests 2.31.0 py311h06a4308_1
scikit-learn 1.2.2 py311h6a678d5_1
scipy 1.11.4 py311h08b1b3b_0
setuptools 68.2.2 py311h06a4308_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.41.2 h5eee18b_0
sympy 1.12 py311h06a4308_0
tbb 2021.8.0 hdb19cb5_0
threadpoolctl 2.2.0 pyh0d69192_0
tk 8.6.12 h1ccaba5_0
topaz 0.2.5 py_0 tbepler
torchvision 0.15.2 cuda118py311h4cc2eb7_0
typing_extensions 4.9.0 py311h06a4308_1
tzdata 2023d h04d1e81_0
urllib3 2.1.0 py311h06a4308_1
wheel 0.41.2 py311h06a4308_0
xz 5.4.5 h5eee18b_0
zlib 1.2.13 h5eee18b_0
zstd 1.5.5 hc292b87_0

Packages for the new, working installation with python3.6
(conda install topaz pytorch=1.10.2=py3.6_cuda11.3_cudnn8.2.0_0 -c tbepler -c pytorch)

Name Version Build Channel

_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
blas 1.0 mkl
bzip2 1.0.8 h7b6447c_0
ca-certificates 2023.12.12 h06a4308_0
certifi 2021.5.30 py36h06a4308_0
cudatoolkit 11.3.1 h2bc3f7f_2
dataclasses 0.8 pyh4f3eec9_6
ffmpeg 4.3 hf484d3e_0 pytorch
freetype 2.12.1 h4a9f257_0
future 0.18.2 py36_1
giflib 5.2.1 h5eee18b_3
gmp 6.2.1 h295c915_3
gnutls 3.6.15 he1e5248_0
intel-openmp 2022.1.0 h9e868ea_3769
joblib 1.0.1 pyhd3eb1b0_0
jpeg 9e h5eee18b_1
lame 3.100 h7b6447c_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
lerc 3.0 h295c915_0
libdeflate 1.17 h5eee18b_1
libffi 3.3 he6710b0_2
libgcc-ng 11.2.0 h1234567_1
libgfortran-ng 7.5.0 ha8ba4b0_17
libgfortran4 7.5.0 ha8ba4b0_17
libgomp 11.2.0 h1234567_1
libiconv 1.16 h7f8727e_2
libidn2 2.3.4 h5eee18b_0
libpng 1.6.39 h5eee18b_0
libstdcxx-ng 11.2.0 h1234567_1
libtasn1 4.19.0 h5eee18b_0
libtiff 4.5.1 h6a678d5_0
libunistring 0.9.10 h27cfd23_0
libuv 1.44.2 h5eee18b_0
libwebp 1.2.4 h11a3e52_1
libwebp-base 1.2.4 h5eee18b_1
lz4-c 1.9.4 h6a678d5_0
mkl 2020.2 256
mkl-service 2.3.0 py36he8ac12f_0
mkl_fft 1.3.0 py36h54f3939_0
mkl_random 1.1.1 py36h0573a6f_0
ncurses 6.4 h6a678d5_0
nettle 3.7.3 hbbd107a_1
numpy 1.19.2 py36h54aff64_0
numpy-base 1.19.2 py36hfa32c7d_0
olefile 0.46 pyhd3eb1b0_0
openh264 2.1.1 h4ff587b_0
openssl 1.1.1w h7f8727e_0
pandas 1.1.5 py36ha9443f7_0
pillow 8.3.1 py36h5aabda8_0
pip 21.2.2 py36h06a4308_0
python 3.6.13 h12debd9_1
python-dateutil 2.8.2 pyhd3eb1b0_0
pytorch 1.10.2 py3.6_cuda11.3_cudnn8.2.0_0 pytorch
pytorch-mutex 1.0 cuda pytorch
pytz 2021.3 pyhd3eb1b0_0
readline 8.2 h5eee18b_0
scikit-learn 0.24.2 py36ha9443f7_0
scipy 1.5.2 py36h0b6359f_0
setuptools 58.0.4 py36h06a4308_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.41.2 h5eee18b_0
threadpoolctl 2.2.0 pyh0d69192_0
tk 8.6.12 h1ccaba5_0
topaz 0.2.5 py_0 tbepler
torchvision 0.11.3 py36_cu113 pytorch
typing_extensions 4.1.1 pyh06a4308_0
wheel 0.37.1 pyhd3eb1b0_0
xz 5.4.5 h5eee18b_0
zlib 1.2.13 h5eee18b_0
zstd 1.5.5 hc292b87_0