SLURM config for multiple nodes

Does anyone have an example of how to configure CryoSPARC running under SLURM and allocating multiple nodes for a single job? It is not clear to me how CryoSPARC distributes work across multiple nodes. Does it use MPI, SSH, or something custom? Is there a limit on --ntasks-per-node for GPU nodes?

Thanks,
David

AFAIK, there is no multi-node support: each CryoSPARC job runs on a single node, so it does not use MPI to spread one job across nodes. See below.
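For reference, CryoSPARC's SLURM integration works through a "cluster lane": you register the cluster with a cluster_info.json and a cluster_script.sh template, and CryoSPARC submits each job as its own single-node batch script via sbatch. Below is a sketch of a typical cluster_script.sh. The {{ ... }} placeholders (num_cpu, num_gpu, ram_gb, run_cmd, etc.) are CryoSPARC's own template variables, but the partition name and memory conversion are site-specific assumptions you should adapt.

```shell
#!/usr/bin/env bash
#SBATCH --job-name=cryosparc_{{ project_uid }}_{{ job_uid }}
#SBATCH --partition=gpu            # assumed partition name; use your site's GPU partition
#SBATCH --nodes=1                  # CryoSPARC jobs are single-node
#SBATCH --ntasks-per-node=1        # one task; CryoSPARC manages its own processes/threads
#SBATCH --cpus-per-task={{ num_cpu }}
#SBATCH --gres=gpu:{{ num_gpu }}
#SBATCH --mem={{ (ram_gb*1000)|int }}M
#SBATCH --output={{ job_log_path_abs }}
#SBATCH --error={{ job_log_path_abs }}

# CryoSPARC substitutes the full worker command here at submit time
{{ run_cmd }}
```

Because --nodes is fixed at 1, --ntasks-per-node limits are not really an issue: the job requests a single task and parallelizes internally across the CPUs and GPUs allocated to that one node.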