I am speccing up a new workstation that will primarily be used for cryoSPARC Live. Basic specs:
256 GB ECC RAM
My questions concern SSD storage. We have a fileserver on a 10G connection in the same rack, and movies will be stored there.
I was thinking of avoiding spinning disks entirely and going for 2 TB of NVMe (cache) plus 6-10 TB of SSD for projects. Then I can move projects to the fileserver once the scope session is complete.
What is the best way to configure the boot drive, the cryoSPARC install, and the project directory in this setup? Do I want one drive for boot, one drive for the cryoSPARC install, and one drive for cache? Or can I combine boot and the cryoSPARC install?
We have a Live instance running mainly across two Supermicro servers.
Each one has 4× GeForce RTX 2080 Ti (as in your case), 380 GB of RAM, and 32 CPUs (CPU usage is not that high, by the way).
- Both systems run CentOS on an SSD
- CryoSPARC Live is installed in a network-mounted account and is visible from both
- We have a 6.6 TB SSD RAID mounted at /data for cache
- We pull the raw data from yet another network drive
So yes: one drive for boot, one (network-based) volume for the install, and one drive for cache. The cache is filled with data from yet another network share.
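As a concrete picture of that layout, an /etc/fstab along these lines would express it (device names, export names, and mount points below are made up for illustration, not our actual values):

```
# boot/OS on a local SSD
/dev/sda1         /       ext4  defaults          0 1
# cryoSPARC install lives on a network account (NFS export name is an assumption)
fileserver:/home  /home   nfs   defaults,_netdev  0 0
# local SSD RAID used as the cryoSPARC cache
/dev/md0          /data   xfs   defaults,noatime  0 0
```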
This setup has proven considerably faster than the alternatives we tried. The one catch is that we need to clean the cache from time to time. Unfortunately we don't run benchmarks, so I can't give you numbers.
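For the periodic cache cleanup, something as simple as an age-based sweep works; this is a sketch, and the cache path and 14-day threshold in the example are assumptions to adapt to your setup:

```shell
# clean_cache: delete files under cache dir $1 last modified more than $2 days ago.
# Run it from cron, or by hand when /data fills up.
clean_cache() {
  find "$1" -type f -mtime +"$2" -print -delete
}

# Example invocation (path and threshold are placeholders):
# clean_cache /data/cache 14
```

Deleting by modification age is a blunt heuristic: a file cached weeks ago but still in active use would be re-fetched from the network share on next access, which costs time but not correctness.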
In a single-workstation setup, you can definitely combine your boot and cryoSPARC installation drives. If you intend to use cryoSPARC in a multi-worker setup, keeping your cryoSPARC installation on the shared fileserver, similar to how @jucastil did it, would be the best option. Otherwise, your plan of keeping projects on SATA SSDs and transferring them to the fileserver after the session, while keeping your scratch drive on a PCIe-based SSD, is optimal.