Hi, I ran motion correction on approximately 8,000 micrographs (around 4.3 TB of raw data) with a raw pixel size of 0.41 Å; each movie consists of 70 frames. During motion correction I binned the micrographs by a factor of 2. The job ran on 2 Nvidia GPUs (CUDA version 12.2), each with 24 GB of memory, and took 15 hours to complete. Given that our computational infrastructure is relatively new, I'm curious to benchmark whether this 15-hour duration is typical for such a run. I also noticed that the processing load is not evenly distributed between the two GPUs; is that typical?
That timing is not entirely unreasonable, but it depends on various factors you don't mention…
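As a very rough sanity check, here is a back-of-the-envelope calculation using only the numbers you quote (8,000 movies, 4.3 TB, 15 hours, 2 GPUs); treat it as a ballpark, not a benchmark:

```python
# Rough throughput estimate from the numbers quoted in the question.
n_movies = 8000        # approximate number of movies
total_tb = 4.3         # total raw data volume in TB
wall_hours = 15.0      # total wall-clock time
n_gpus = 2

seconds = wall_hours * 3600
per_movie_s = seconds / n_movies               # average wall time per movie overall
per_movie_per_gpu_s = per_movie_s * n_gpus     # effective time each GPU spends per movie
read_rate_mb_s = total_tb * 1e6 / seconds      # sustained read rate implied by the data volume

print(f"~{per_movie_s:.1f} s per movie overall, "
      f"~{per_movie_per_gpu_s:.1f} s per movie per GPU")
print(f"~{read_rate_mb_s:.0f} MB/s sustained read just to keep up with the data")
```

That works out to roughly 6.8 s per movie overall (about 13.5 s per movie per GPU) and an average read rate of only ~80 MB/s, which gives you concrete numbers to compare against what others report for similar hardware.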
The uneven GPU load is likely due to waiting for micrographs to be read: with network storage, I find that reading the raw micrograph movies takes noticeably longer than from local storage (and it also matters whether that storage is a single HDD, an HDD RAID array, an SSD, etc.), so were the micrographs on local storage (and what type) or network storage?
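If you want to rule storage in or out, one quick (and admittedly crude) check is simply to time how fast a handful of raw movies can be read back. The path and file pattern below are placeholders for wherever your movies actually live, and bear in mind the OS page cache will inflate the result if the files were read recently:

```python
# Crude sequential-read benchmark over a sample of movie files.
# "/data/movies/*.tif" is a placeholder; point it at your raw movies.
import glob
import time

files = sorted(glob.glob("/data/movies/*.tif"))[:20]   # sample up to 20 movies

total_bytes = 0
start = time.perf_counter()
for path in files:
    with open(path, "rb") as fh:
        # Stream in 16 MiB chunks so we measure throughput, not one huge allocation.
        while chunk := fh.read(16 * 1024 * 1024):
            total_bytes += len(chunk)
elapsed = time.perf_counter() - start

print(f"Read {total_bytes / 1e9:.1f} GB from {len(files)} files in {elapsed:.1f} s "
      f"-> {total_bytes / 1e6 / elapsed:.0f} MB/s")
```

If that figure is comfortably above the ~80 MB/s average worked out above, reads are unlikely to be the bottleneck and the imbalance is probably down to something else, such as how movies are being dispatched to the two GPUs.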
You don't mention which detector, or whether super-resolution was used… sensor size plays a role in how many patches the algorithm defaults to, so it does have an impact on speed. K3 super-resolution data takes a bit more time to process than K2 counting-mode data, for example, even if the file format is the same.
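Just to put rough numbers on the super-resolution point, the pixel counts per frame differ a lot between the two cases. The frame geometries below are the nominal ones and worth double-checking against your own file headers; in practice the slowdown is much smaller than the raw ratio suggests, since the data is typically Fourier-cropped back towards the physical pixel size early in processing:

```python
# Nominal frame geometries (verify against your own data headers).
k3_super_res = 11520 * 8184   # K3 in super-resolution readout
k2_counting = 3838 * 3710     # K2 in counting mode

print(f"K3 super-res frame: {k3_super_res / 1e6:.0f} Mpixel")
print(f"K2 counting frame:  {k2_counting / 1e6:.0f} Mpixel")
print(f"Ratio:              {k3_super_res / k2_counting:.1f}x")
```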
You also don't mention which GPU. "24 GB" covers a lot of ground, from Maxwell- and Pascal-era cards like the Quadro M6000 and Tesla P40 through to the Titan RTX, RTX 3090, RTX 4090, A5000, A5500, and RTX 4500 Ada… all of which have very different levels of compute power, despite all being 24 GB cards.
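It is also worth confirming exactly which cards are in the box and watching their load while a job runs. A simple way, assuming nvidia-smi is on the PATH, is to poll it once a second during motion correction:

```python
# Poll nvidia-smi and print per-GPU model, utilization and memory use.
# Assumes nvidia-smi is on the PATH; stop with Ctrl-C.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=index,name,utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"]

try:
    while True:
        out = subprocess.run(QUERY, capture_output=True, text=True, check=True)
        for line in out.stdout.strip().splitlines():
            idx, name, util, mem = [f.strip() for f in line.split(",")]
            print(f"GPU {idx} ({name}): {util:>3}% util, {mem} MiB used")
        print("-" * 40)
        time.sleep(1)
except KeyboardInterrupt:
    pass
```

If one GPU sits near 0% while the other is busy, or both regularly dip to 0% between movies, that points at scheduling or I/O rather than the cards themselves.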
Thank you for highlighting the details.
The micrographs are stored locally on an SSD.
The camera is a K3, and yes, the data was collected in super-resolution mode.
The GPUs are GeForce RTX 4090s.