Multi-refine out of memory

This might be a niche case, but when I try to run multi-refine with 14 classes (I know that's asking a lot, but I have 14 starting models I want to classify against), I repeatedly run out of GPU memory when it checks for assignment convergence (i.e. on the 21st, 41st, or 61st iteration). This seems to be sensitive to the number of particles the classification has seen: if I reduce the batch size per class, it runs for more iterations before crashing. I am using a box size of 64. Is there any way around this at present? What parameter could I change to reduce the memory requirements?

Testing for assignment convergence …
Engine Started.
Traceback (most recent call last):

MemoryError: cuMemAlloc failed: out of memory
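For rough intuition, here is a back-of-envelope scaling model. To be clear, this is a guess at how the footprint might scale, not the engine's actual allocation pattern; the per-particle buffer size and the number of volumes per class are made-up constants for illustration. It just shows that if the convergence check holds per-particle, per-class data on the GPU, memory would grow with both the class count and the number of particles seen, which matches the behaviour above.

```python
def rough_gpu_bytes(n_particles, n_classes, box):
    """Hypothetical scaling model (constants are guesses, not cryoSPARC's):
    per-class complex float32 volumes, plus a per-particle, per-class
    buffer of box**2 float32 values held for the convergence check."""
    class_volumes = n_classes * box**3 * 4 * 2       # complex float32 maps
    assignment_buf = n_particles * n_classes * box**2 * 4
    return class_volumes + assignment_buf

# Both terms scale linearly in n_classes, so doubling the class
# count doubles the hypothetical footprint:
base = rough_gpu_bytes(100_000, 7, 64)
print(rough_gpu_bytes(100_000, 14, 64) / base)  # → 2.0
```

Under this (assumed) model, lowering the number of classes, the box size, or the number of particles in the job all reduce the footprint, which is consistent with the workarounds suggested below in the thread.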

I have the same issue… I can use at most 6 classes with a refinement box size of 128 px, running on a Titan X with 12 GB of GPU memory.

Well, two things you can do for the time being:
a) use a smaller box size
b) divide the particle set into two subsets and run each of them independently
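For workaround (b), if your particles are listed in a plain-text metadata file, a random half-split can be done with a few lines of Python. This is a generic sketch, not a cryoSPARC-specific tool, and the particle names below are placeholders:

```python
import random

def split_particle_list(entries, seed=0):
    """Randomly split a list of particle entries into two halves.
    (Header lines of a metadata file would need separate handling.)"""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = list(entries)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Placeholder particle names, purely for illustration:
particles = [f"particle_{i:05d}" for i in range(10)]
first_half, second_half = split_particle_list(particles)
print(len(first_half), len(second_half))  # → 5 5
```

Each half can then be run as its own job, roughly halving the per-job particle load at the cost of classifying the two subsets independently.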