Topaz denoised micrographs not showing in Manual Picker

I can see the Topaz denoised micrographs in Inspect Picks, but not in Manual Picker, where they just appear as gray images. I am troubleshooting with a subset of 20 images. The corresponding non-denoised images show up fine in Manual Picker, and I know the denoised images are not actually gray because I can visualize them in an Inspect Picks job.

On another instance (both running v2.14.2), using different data, I don’t have this problem. What could be causing this?

Hi @Geoffrey,

Could you paste the output of the log for this specific job please?

cryosparcm joblog P1 J721

There could also be an error on the browser side. Could you check if there’s an error in the browser console (right click, ‘inspect element’, ‘console’ tab)?

Thanks,
Suhail

Hey @Geoffrey,

Can you let us know the characteristics of the image (resolution, pixel size)?
Can you also send us an image of the micrograph?
Finally, can you try this in Exposure Curation with the “Calculate Average Intensity” switch on and off and see if it works?

Grayed-out image:

[mohammad@uhncryoem ~]$ cryosparcm joblog P1 J721
Traceback (most recent call last):
  File "/home/mohammad/cryosparc2c/cryosparc2_master/deps/anaconda/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/home/mohammad/cryosparc2c/cryosparc2_master/deps/anaconda/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/mohammad/cryosparc2c/cryosparc2_master/cryosparc2_compute/client.py", line 83, in <module>
    print eval("cli."+command)
  File "<string>", line 1, in <module>
  File "/home/mohammad/cryosparc2c/cryosparc2_master/cryosparc2_compute/client.py", line 57, in func
    assert False, res['error']
AssertionError: {u'message': u"OtherError: argument of type 'NoneType' is not iterable", u'code': 500, u'data': None, u'name': u'OtherError'}

Image showing normally:

[mohammad@uhncryoem ~]$ cryosparcm joblog P1 J720
Traceback (most recent call last):
  File "/home/mohammad/cryosparc2c/cryosparc2_master/deps/anaconda/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/home/mohammad/cryosparc2c/cryosparc2_master/deps/anaconda/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/mohammad/cryosparc2c/cryosparc2_master/cryosparc2_compute/client.py", line 83, in <module>
    print eval("cli."+command)
  File "<string>", line 1, in <module>
  File "/home/mohammad/cryosparc2c/cryosparc2_master/cryosparc2_compute/client.py", line 57, in func
    assert False, res['error']
AssertionError: {u'message': u"OtherError: argument of type 'NoneType' is not iterable", u'code': 500, u'data': None, u'name': u'OtherError'}

I did some more Manual Picker jobs and found that I also get the grayed-out image with normal micrographs (referencing a subset made by a Curate Exposures job, J746).

So why don’t we set the denoised micrographs aside for now and first resolve this gray-image issue for the normal micrographs?

Here is what I see when I do Inspect Element (these are normal, rigid-aligned micrographs):

When I do Inspect Picks with the same inputs (from Curate Exposures, J746), I can see the image fine.

From the import of the original movies:

[CPU: 198.6 MB]    mscope_params/accel_kv :  set([300.0])
[CPU: 198.6 MB]     mscope_params/cs_mm :  set([2.7])
[CPU: 198.6 MB]    mscope_params/total_dose_e_per_A2 :  set([45.599998])
[CPU: 198.6 MB]    mscope_params/exp_group_id :  set([1])
[CPU: 198.6 MB]    mscope_params/phase_plate :  set([0])
[CPU: 198.6 MB]    mscope_params/neg_stain :  set([0])
[CPU: 198.6 MB]      movie_blob/psize_A :  set([1.03])
[CPU: 198.6 MB]        movie_blob/shape :  [  30 4096 4096]

The images failing to display were created by a Full-frame Motion Correction job; at least, that is the filename that Manual Picker and Inspect Picks show in the Name column.

I toggled on “Calculate Average Intensity” in Exposure Curation. I could see the images in this interactive job, just as I can with “Calculate Average Intensity” toggled off. I then fed this output into some new jobs to see what happened: I could see the image in Inspect Particle Picks, but not in Manual Picker. So there doesn’t seem to be any change.

Hi @Geoffrey,

Can you try cloning the original Manual Picker job that produced the gray images? Then, in the interactive screen, toggle on “Override” and move the “Contrast Intensity Override” sliders around. Let us know if the images remain gray after moving the sliders in various ways; in particular, for each of the two sliders, try moving one towards the middle and the other towards its extreme.

Regards,
Jay Yoo

Hi @Geoffrey,

This bug has been fixed in the latest version of cryoSPARC, v2.15.0. Please update to this version and let us know if the issue persists!

Hi,
I’m having a problem with this in version 2.15. The denoised micrographs cannot be imported into the job builder normally; drag and drop does not work. I can import the micrographs into a workspace and then import the exposures into Manual Picker, but while this works, I think the CTF information is missing.
Reza

@stephan - I still see this issue in the latest patched version of v3.2: the micrograph shows as a question mark, and the particles seem to be loaded but there is no image. The outputs of Topaz Denoise do not work in Manual Picker or Inspect Picks.

joblog here:

  File "/home/exx/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
Job complete
usr+sys time: 849.17 seconds
wall clock time: 14 minutes 48.40 seconds (888.40 seconds total)
  File "/home/exx/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/exx/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/home/exx/cryosparc/cryosparc_master/cryosparc_compute/jobs/manual_picker/run_inspect_simple.py", line 150, in get_micrograph_data
    return read_mrc_image(path_rel, psize, lp_filter_A)
  File "/home/exx/cryosparc/cryosparc_master/cryosparc_compute/jobs/manual_picker/run_inspect_simple.py", line 161, in read_mrc_image
    arr = current['mrcdata'][0,:,:]
KeyError: 'mrcdata'
[2021-07-03 10:50:45,642] ERROR in app: Exception on /get_micrograph_data [POST]
Traceback (most recent call last):
  File "/home/exx/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/exx/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/exx/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/home/exx/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/home/exx/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/exx/cryosparc/cryosparc_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/home/exx/cryosparc/cryosparc_master/cryosparc_compute/jobs/manual_picker/run_inspect_simple.py", line 150, in get_micrograph_data
    return read_mrc_image(path_rel, psize, lp_filter_A)
  File "/home/exx/cryosparc/cryosparc_master/cryosparc_compute/jobs/manual_picker/run_inspect_simple.py", line 161, in read_mrc_image
    arr = current['mrcdata'][0,:,:]
KeyError: 'mrcdata'
========= sending heartbeat
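The KeyError: 'mrcdata' is raised inside cryoSPARC’s own MRC reader, so one quick thing to check is whether the denoised files are readable MRC files at all. Below is a minimal sketch using the third-party mrcfile package (an assumption that it is installed in your environment, not part of cryoSPARC; the path is a placeholder):

import mrcfile  # third-party package: pip install mrcfile

# Placeholder path: point this at one of the denoised micrographs from the Topaz job.
path = "/path/to/JXXX/denoised_micrographs/example_micrograph.mrc"

# permissive=True still opens files whose headers are slightly malformed, which helps
# separate "truncated or corrupt file" from "valid but unusual MRC".
with mrcfile.open(path, permissive=True) as mrc:
    print("header mode :", int(mrc.header.mode))
    print("dimensions  :", int(mrc.header.nx), int(mrc.header.ny), int(mrc.header.nz))
    print("data shape  :", None if mrc.data is None else mrc.data.shape)

# validate() re-reads the file and reports whether it conforms to the MRC2014 format.
print("valid MRC2014:", mrcfile.validate(path))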

I think I see the issue. The Topaz Denoise job reported success and generated outputs, but the outputs are incomplete (only 14 of 29 denoised mics are found in the job dir). Here is the error:

[CPU: 191.4 MB]  Using Topaz provided pretrained model.
[CPU: 191.4 MB]  
Beginning Topaz denoising command by running command /usr/local/envs/topaz/bin/topaz denoise [MICROGRAPH PATHS EXCLUDED FOR LEGIBILITY] --device 0 --format mrc --normalize --patch-size 1536 --patch-padding 256 --output /home/exx/processing/cryosparc_projects/francesca/P2/J887/denoised_micrographs --lowpass 1 --gaussian 0 --inv-gaussian 0 --deconv-patch 1 --pixel-cutoff 0 --deconvolve --model unet

[CPU: 191.4 MB]  Distributing over 4 processes...
[CPU: 186.2 MB]  # using device=0 with cuda=True
[CPU: 186.3 MB]  # Loading model: unet
[CPU: 186.3 MB]  Traceback (most recent call last):
[CPU: 186.3 MB]    File "/usr/local/envs/topaz/bin/topaz", line 11, in <module>
[CPU: 186.3 MB]      load_entry_point('topaz-em==0.2.4', 'console_scripts', 'topaz')()
[CPU: 186.3 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/topaz/main.py", line 148, in main
[CPU: 186.3 MB]      args.func(args)
[CPU: 186.3 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/topaz/commands/denoise.py", line 547, in main
[CPU: 186.3 MB]      , use_cuda=use_cuda
[CPU: 186.3 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/topaz/commands/denoise.py", line 292, in denoise_image
[CPU: 186.3 MB]      mic += dn.denoise(model, x, patch_size=patch_size, padding=padding)
[CPU: 186.3 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/topaz/denoise.py", line 68, in denoise
[CPU: 186.3 MB]      return denoise_patches(model, x, patch_size, padding=padding)
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/topaz/denoise.py", line 92, in denoise_patches
[CPU: 186.4 MB]      yij = model(xij).squeeze() # denoise the patch
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
[CPU: 186.4 MB]      result = self.forward(*input, **kwargs)
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/topaz/denoise.py", line 476, in forward
[CPU: 186.4 MB]      p1 = self.enc1(x)
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
[CPU: 186.4 MB]      result = self.forward(*input, **kwargs)
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/torch/nn/modules/container.py", line 119, in forward
[CPU: 186.4 MB]      input = module(input)
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
[CPU: 186.4 MB]      result = self.forward(*input, **kwargs)
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 399, in forward
[CPU: 186.4 MB]      return self._conv_forward(input, self.weight, self.bias)
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 396, in _conv_forward
[CPU: 186.4 MB]      self.padding, self.dilation, self.groups)
[CPU: 186.4 MB]  RuntimeError: CUDA out of memory. Tried to allocate 588.00 MiB (GPU 0; 23.69 GiB total capacity; 286.48 MiB already allocated; 27.69 MiB free; 290.00 MiB reserved in total by PyTorch)
[CPU: 186.4 MB]  # using device=0 with cuda=True
[CPU: 186.4 MB]  # Loading model: unet
[CPU: 186.4 MB]  Traceback (most recent call last):
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/bin/topaz", line 11, in <module>
[CPU: 186.4 MB]      load_entry_point('topaz-em==0.2.4', 'console_scripts', 'topaz')()
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/topaz/main.py", line 148, in main
[CPU: 186.4 MB]      args.func(args)
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/topaz/commands/denoise.py", line 547, in main
[CPU: 186.4 MB]      , use_cuda=use_cuda
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/topaz/commands/denoise.py", line 292, in denoise_image
[CPU: 186.4 MB]      mic += dn.denoise(model, x, patch_size=patch_size, padding=padding)
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/topaz/denoise.py", line 68, in denoise
[CPU: 186.4 MB]      return denoise_patches(model, x, patch_size, padding=padding)
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/topaz/denoise.py", line 92, in denoise_patches
[CPU: 186.4 MB]      yij = model(xij).squeeze() # denoise the patch
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
[CPU: 186.4 MB]      result = self.forward(*input, **kwargs)
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/topaz/denoise.py", line 510, in forward
[CPU: 186.4 MB]      h = self.dec2(h)
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
[CPU: 186.4 MB]      result = self.forward(*input, **kwargs)
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/torch/nn/modules/container.py", line 119, in forward
[CPU: 186.4 MB]      input = module(input)
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
[CPU: 186.4 MB]      result = self.forward(*input, **kwargs)
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/torch/nn/modules/activation.py", line 714, in forward
[CPU: 186.4 MB]      return F.leaky_relu(input, self.negative_slope, self.inplace)
[CPU: 186.4 MB]    File "/usr/local/envs/topaz/lib/python3.6/site-packages/torch/nn/functional.py", line 1378, in leaky_relu
[CPU: 186.4 MB]      result = torch._C._nn.leaky_relu(input, negative_slope)
[CPU: 186.4 MB]  RuntimeError: CUDA out of memory. Tried to allocate 294.00 MiB (GPU 0; 23.69 GiB total capacity; 1.18 GiB already allocated; 29.69 MiB free; 1.43 GiB reserved in total by PyTorch)
[CPU: 186.4 MB]  # using device=0 with cuda=True
[CPU: 186.4 MB]  # Loading model: unet
[CPU: 186.4 MB]  # 1 of 7 completed.
[CPU: 186.4 MB]  # 2 of 7 completed.
[CPU: 186.4 MB]  # 3 of 7 completed.
[CPU: 186.4 MB]  # 4 of 7 completed.
[CPU: 186.4 MB]  # 5 of 7 completed.
[CPU: 186.4 MB]  # 6 of 7 completed.
[CPU: 186.4 MB]  # 7 of 7 completed.
[CPU: 186.4 MB]  # using device=0 with cuda=True
[CPU: 186.4 MB]  # Loading model: unet
[CPU: 186.4 MB]  # 1 of 7 completed.
[CPU: 186.4 MB]  # 2 of 7 completed.
[CPU: 186.4 MB]  # 3 of 7 completed.
[CPU: 186.4 MB]  # 4 of 7 completed.
[CPU: 186.4 MB]  # 5 of 7 completed.
[CPU: 186.4 MB]  # 6 of 7 completed.
[CPU: 186.4 MB]  # 7 of 7 completed.
[CPU: 186.6 MB]  Topaz denoising command complete in 57.580s.

[CPU: 186.6 MB]  Converting Topaz outputs to cryoSPARC outputs...
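
To see exactly which micrographs the failed workers skipped, one rough check is to compare the file names in the job’s denoised_micrographs directory against the inputs. A sketch along these lines (the output directory is the one from the log above; the input paths are placeholders, and the assumption that denoised files keep their input base names is worth verifying against one known-good pair):

import os

# Output directory taken from the Topaz command in the log above; the input list is a
# placeholder for the micrograph paths that were passed to `topaz denoise`.
denoised_dir = "/home/exx/processing/cryosparc_projects/francesca/P2/J887/denoised_micrographs"
input_paths = [
    "/path/to/motion_corrected/mic_0001.mrc",  # placeholder
    "/path/to/motion_corrected/mic_0002.mrc",  # placeholder
]

# Assumption: Topaz writes each denoised micrograph under the same base name as its input.
denoised_names = set(os.listdir(denoised_dir))
missing = [p for p in input_paths if os.path.basename(p) not in denoised_names]

print(f"{len(input_paths) - len(missing)} of {len(input_paths)} micrographs were denoised")
for p in missing:
    print("missing:", os.path.basename(p))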

Hi @olibclarke,

I think you are correct: it looks like, in the last log, Topaz tried to run with 4 processes but 2 of them ran out of memory on the GPU. Strangely, the out-of-memory report indicates that the Topaz processes themselves were not using very much memory; maybe something else was running on the GPU at the time?

You could try lowering the “Number of parallel threads” parameter of the Topaz Denoise job to see whether that reduces the memory pressure.
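
For checking whether something else was holding GPU memory, one option is to list the compute processes on the card while the job is running; here is a small sketch that shells out to nvidia-smi (assuming it is on PATH):

import subprocess

# List the compute processes currently holding GPU memory.
# Running this while a denoise job is active shows whether anything besides the
# Topaz workers is occupying the card.
result = subprocess.run(
    ["nvidia-smi", "--query-compute-apps=pid,process_name,used_memory", "--format=csv"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)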

Yes, I’m not sure what went wrong there. The other issue is that when this happens, the job does not fail: it appears to complete successfully with what looks like the correct number of outputs, and it is only upon reading the log that you can see something is wrong. If the job does not successfully generate the correct number of output micrographs, it should probably be marked as failed.

Hi @olibclarke, you are likely correct: it looks like our Topaz wrapper doesn’t correctly catch subprocess failures when parallelizing Topaz. We will look into this!

Hi @apunjani - this bug is still occurring in CryoSPARC v4.4.1. In my case, 3/4 of the Topaz workers failed and only 3/4 of the images were denoised, and it’s quite hard to spot that this is happening because there are no errors or warning messages to be seen.

Maybe the Topaz wrapper could count the number of denoised images in denoised_micrographs and compare it to the number of inputs? That way it could print a warning, fail, or split the output into denoised/non-denoised groups… Thanks! A rough illustration of that suggestion is sketched below.
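
This is hypothetical code, not the actual cryoSPARC wrapper; n_inputs would be the number of input micrographs and denoised_dir the Topaz output directory:

import os
import warnings

def check_denoise_outputs(denoised_dir: str, n_inputs: int, strict: bool = True) -> int:
    """Hypothetical post-run guard: compare the denoised output count with the input count."""
    produced = len([f for f in os.listdir(denoised_dir) if f.endswith(".mrc")])
    if produced < n_inputs:
        msg = f"Topaz denoise produced only {produced} of {n_inputs} expected micrographs"
        if strict:
            raise RuntimeError(msg)  # i.e. mark the job as failed
        warnings.warn(msg)           # or warn and pass through only the denoised subset
    return produced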