This might be a niche case, but when I try to run multi-refine with 14 classes (I know that's asking a lot, but I have 14 starting models I want to classify against), I repeatedly run out of GPU memory when it checks for assignment convergence (i.e. on the 21st, 41st, or 61st iteration). This seems to be sensitive to the number of particles the classification has seen: if I reduce the batch size per class, it runs for more iterations before crashing. I am using a box size of 64. Is there any way around this at present? What parameter might I change to reduce the memory requirements?
Testing for assignment convergence …
Engine Started.
Traceback (most recent call last):
…
MemoryError: cuMemAlloc failed: out of memory
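For what it's worth, here is a rough back-of-envelope estimate of why the footprint might grow with both class count and particles seen. This is purely my own guess at the scaling (I don't know what the convergence test actually allocates internally); the dense per-particle, per-class score buffer below is an assumption for illustration, not cryoSPARC's actual data layout.

# Hypothetical estimate: if the convergence test held a dense
# (particles_seen x n_classes x box^2) float32 score array on the GPU,
# memory would grow linearly with particles seen and with class count.
# This is a guess at the scaling, not the real allocation.

def estimate_gib(particles_seen, n_classes=14, box=64, bytes_per_float=4):
    n_floats = particles_seen * n_classes * box * box
    return n_floats * bytes_per_float / 1024**3

for n in (20_000, 50_000, 100_000):
    print(f"{n:>7} particles, 14 classes, box 64: ~{estimate_gib(n):.1f} GiB")

Under that (assumed) layout, 20k particles would already need ~4 GiB and 100k particles ~21 GiB, which would match the pattern of crashing at a later convergence check when the per-class batch size is smaller.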