Given the necessity to repeatedly read the same subset of movie stacks during hyperparameter optimisation, would it make sense to allow optional SSD caching of said movies (and gain_ref where appropriate) for jobs that initiate the parameter search step? On a multi-GPU worker with, admittedly, sub-optimal network I/O bandwidth, we’ve observed a significant impact on processes, e.g. Live RTP workers, that run concurrently with such RBMC jobs. I assume this is due to the I/O bottleneck in our setup.
I recognise that depending on the density of particles per movie, demand on cache space may be prohibitive. However, where it can fit and reasonably coexist with other datatypes, the option may be worth having, but perhaps toggled independently of the default project-wide cache setting?
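To make the suggestion concrete, here is a minimal sketch of the staging pattern being proposed. Everything here is hypothetical: CryoSPARC exposes no such API, and the path and function names (`CACHE_DIR`, `stage_to_ssd`) are invented for illustration. The idea is simply to copy each movie (and gain reference) to local SSD scratch on first access, so that the repeated reads during the parameter search hit local storage instead of the network filesystem.

```python
# Illustrative sketch only; not a CryoSPARC API. Names and paths are assumptions.
import shutil
from pathlib import Path

CACHE_DIR = Path("/scratch/ssd_cache")  # assumed local SSD/NVMe scratch path


def stage_to_ssd(src: str, cache_dir: Path = CACHE_DIR) -> Path:
    """Copy a movie (or gain reference) to local SSD on first access;
    subsequent reads during the hyperparameter search hit the SSD copy."""
    src_path = Path(src)
    dst = cache_dir / src_path.name
    cache_dir.mkdir(parents=True, exist_ok=True)
    # Re-copy only if the cached copy is missing or stale
    # (a size mismatch is a cheap staleness check).
    if not dst.exists() or dst.stat().st_size != src_path.stat().st_size:
        shutil.copy2(src_path, dst)
    return dst
```

The point of the independent toggle is exactly what this sketch implies: staging trades SSD space for network bandwidth, which is only a win when the cache can hold the search subset alongside whatever other jobs have staged.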
EDIT: I’m unclear what happens behind the scenes during dose-weighting optimisation, but if iterative reading is involved, this may help there as well.
The hyperparameter search portion of the job caches movie data in RAM. The parameter “In-memory cache size” under “Compute settings” (at the bottom of the job builder) controls how much RAM is set aside for this caching. I’ll record this as a feature request, though. For some datasets it may help, as you’re saying. Thanks for bringing this up.
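For readers unfamiliar with how a byte-budgeted RAM cache behaves, here is a toy sketch of the general idea: a fixed byte budget with least-recently-used eviction. This is purely illustrative; the class and its behaviour are not CryoSPARC internals, and the real implementation may differ in every detail.

```python
# Toy illustration of a byte-budgeted in-memory cache with LRU eviction.
# Not CryoSPARC internals; names and behaviour are assumptions for illustration.
from collections import OrderedDict


class MovieCache:
    def __init__(self, budget_bytes: int):
        self.budget = budget_bytes
        self.used = 0
        self._store = OrderedDict()  # path -> raw movie bytes

    def get(self, path: str, loader) -> bytes:
        """Return cached movie data, loading (and possibly evicting) on a miss."""
        if path in self._store:
            self._store.move_to_end(path)  # mark as recently used
            return self._store[path]
        data = loader(path)
        # Evict least-recently-used movies until the new one fits the budget.
        while self._store and self.used + len(data) > self.budget:
            _, old = self._store.popitem(last=False)
            self.used -= len(old)
        self._store[path] = data
        self.used += len(data)
        return data
```

This also shows why raising “In-memory cache size” helps with large stacks: if the budget is smaller than the working set of movies the search revisits, every pass evicts and re-reads from disk.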
To your other remark about dose weight optimization: the dose weighting optimization only makes one pass across the data.
Thanks for the reply. Since posting, we’ve done a bit more investigation, and the slowdown seems to occur on our setup more readily with EER datasets (vs K3 TIFFs), which makes sense given the larger stack sizes and smaller detector size. This may be consistent with the explanation. I will recommend to the team to increase their default cache setting, assuming there’s sufficient RAM to go around.
On a related note, the samples we handle tend to suffer from other kinds of limitations, but are there plans on the roadmap to support custom temporal sampling and/or upsampling of EER data, i.e. independent of the initial import parameters, at the stage of RBMC?
EDIT: in case it hasn’t been said enough already, a big thank you to the team for implementing RBMC. There’s a lot of excitement resurrecting past datasets to test it out.
Glad you’re enjoying RBMC. The suggestions that you’ve made about leveraging EER’s capabilities in a more sophisticated way are already on our radar, yes, but thanks for bringing them up.