Hi - I’ve seen several closed topics asking if/when cryoSPARC was going to add the ability to do helical reconstructions, but I haven’t seen any recent updates on that. Is there any timetable for when it might be added?
Hi @hansenbry,
In v3.0, we have released a set of BETA tools for Helical processing, including a Helical Refinement job type as well as a Filament Tracer job for particle picking. For more information, please see the changelog as well as the guide pages.
Best,
Michael
Hi @mmclean, I saw that in the release notes! We are very excited to get our HPC version upgraded and give it a go. I’ll be sure to let you know what we find. Thanks again!
Dear cryoSPARC developers/users,
I am excited to explore the new “Helical Refinement (BETA)” job now, and it seems that by default this job uses 4 CPUs and 1 GPU. But it didn’t give me an option to use more computing resources to speed it up. Is there any way I can change this and make the reconstruction faster (this workstation has 32 CPUs and 4 GPUs)?
Many thanks!
Jerry
Hi @Jerry,
Thank you for the feedback! As of now, this job unfortunately cannot be parallelized over multiple GPUs (or more than 4 CPUs) to further increase speed. There are some other tricks you can use to speed up the job, depending on your use case; for example:
- If you are just looking to get an initial model, you can try running the “Downsample Particles” job on your data to reduce the box size while keeping the same spatial extent. You can then use helical refinement or ab-initio reconstruction on the downsampled data to generate an initial model, and afterwards use the original data along with that initial model to refine to high resolution. Downsampling will probably make a significant speed difference if your box size started out moderate or large (e.g. over ~450 pixels); see the sketch after this list for the pixel-size arithmetic involved.
- Similarly, if you are in the initial model generation phase and your particle stack is medium-to-large (> 100k particles), you can use the “Particle Sets Tool” to split the particle set into a smaller subset, and thus process only a fraction of your data for the purposes of initial model generation.
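If it helps to see the arithmetic behind these suggestions, here is a minimal sketch (all numbers are placeholders, not values from this thread) of how downsampling changes the effective pixel size and the Nyquist resolution limit, plus the equivalent of drawing a random subset of particles:

```python
import numpy as np

# Placeholder values; adjust to your dataset.
orig_box = 640    # original box size, pixels
orig_apix = 1.0   # original pixel size, Angstroms/pixel
new_box = 256     # target box size after "Downsample Particles"

# Downsampling keeps the same spatial extent, so the effective
# pixel size grows by the ratio of the box sizes.
new_apix = orig_apix * (orig_box / new_box)

# The best resolution recoverable from the downsampled data is
# the Nyquist limit: twice the (new) pixel size.
nyquist = 2.0 * new_apix
print(f"downsampled pixel size: {new_apix:.2f} A/px, Nyquist: {nyquist:.2f} A")

# Drawing a random subset for initial model generation (the
# "Particle Sets Tool" handles this inside cryoSPARC; this is
# just the equivalent arithmetic with a stand-in particle count).
n_particles = 400_000
subset_size = 50_000
rng = np.random.default_rng(0)
subset_idx = rng.choice(n_particles, size=subset_size, replace=False)
```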
Otherwise, if you have an SSD on your compute node, make sure SSD caching is turned on; it can dramatically increase the speed of the main iterative alignment/reconstruction loop, since the main bottleneck for alignment/reconstruction is data transfer from disk to memory rather than raw CPU/GPU speed.
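As a back-of-the-envelope check on whether your SSD has room for the particle cache, assuming the cached particle images are stored as 32-bit floats (an assumption about the cached format, not something stated in this thread):

```python
# Rough cache-size estimate: one float32 (4 bytes) per pixel,
# box^2 pixels per particle image.
n_particles = 400_000
box = 320  # box size in pixels

cache_bytes = n_particles * box * box * 4
print(f"approx. SSD cache needed: {cache_bytes / 1e9:.0f} GB")
# ~164 GB for this example
```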
Please let me know if you have any other questions about the new helical processing tools. Best regards,
Michael
Many thanks, Michael. It indeed didn’t take very long. I already have a ~4 Å starting model and I know the helical symmetry. I submitted a job (320 px box, ~400,000 particles) and was amazed to see the reconstruction finish overnight using just 1 GPU.
Hi Jerry,
That is awesome! I’m glad to hear that the refinement is working well for you.
Best,
Michael
Hi Michael,
Thanks. May I ask how to estimate the GPU memory usage based on the pixel size and number of particles?
Looking forward to your response,
Jianming
For the GPU, memory usage depends on box size more than anything else. The maximum number of particles is more a function of system RAM.
Estimates can be found here:
They’re not hard limits, as other processes can sometimes be using enough VRAM that box sizes close to the limit will crash. And boxes larger than ~1120 pixels will crash for other reasons, regardless of the VRAM available on the GPU.
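For a rough sense of how VRAM scales with box size, here is a crude cubic-scaling sketch; the multiplier is purely an assumption (a stand-in for the number of working volumes held on the GPU), so treat the numbers as order-of-magnitude only:

```python
# Back-of-the-envelope VRAM estimate (a rough heuristic, not an
# official cryoSPARC formula): refinement keeps several box^3
# single-precision volumes on the GPU (half-maps, Fourier-space
# copies, masks), so memory scales roughly with box size cubed.
def approx_vram_gb(box_px, n_volumes=6):
    """n_volumes is a guessed multiplier for working copies on the GPU."""
    return n_volumes * (box_px ** 3) * 4 / 1e9  # 4 bytes per float32 voxel

for box in (320, 512, 768, 1024):
    print(f"box {box:>4} px: ~{approx_vram_gb(box):.1f} GB")
```

Under these assumptions a 320 px box needs well under 1 GB while a 1024 px box needs tens of GB, which is consistent with VRAM limits being dominated by box size rather than particle count.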