Particle subtraction memory error

Hi everyone,
I'm having an error when I'm doing my particle subtraction.

I am quite confused as to why it cannot allocate a 220 GB array, because my scratch (1 TB), my hard drive (3.3 TB free) and my RAM all have the necessary space to allocate such an array.

This does not seem to be linked to the number of particles, as I tried specifying only 57 particles and it failed in the same way.

I'm dealing with 267k particles with a pixel size of 1.06 Å. Does anyone have any idea how to circumvent this problem?

Respectfully,
Yann

Please can you post the output of the command

cryosparcm cli "get_job('P99', 'J199', 'job_type', 'version', 'instance_information', 'status',  'params_spec', 'errors_run')"

where P99, J199 need to be replaced with the failed job’s actual project and job IDs, respectively.

What is the box size (in pixels) of the particles?

Hi, thank you for your answer.

{'_id': '67335a85603ac039fce74a95',
 'errors_run': [{'message': 'Unable to allocate 228. GiB for an array with shape (61160257636,) and data type float32', 'warning': False}],
 'instance_information': {'CUDA_version': '11.8', 'available_memory': '241.42GB', 'cpu_model': 'AMD Ryzen Threadripper 3960X 24-Core Processor', 'driver_version': '12.3', 'gpu_info': [{'id': 0, 'mem': 25435766784, 'name': 'NVIDIA GeForce RTX 3090', 'pcie': '0000:01:00'}, {'id': 1, 'mem': 25438126080, 'name': 'NVIDIA GeForce RTX 3090', 'pcie': '0000:4e:00'}], 'ofd_hard_limit': 262144, 'ofd_soft_limit': 1024, 'physical_cores': 24, 'platform_architecture': 'x86_64', 'platform_node': 'lvx0970b', 'platform_release': '4.18.0-513.11.1.el8_9.x86_64', 'platform_version': '#1 SMP Wed Jan 10 22:58:54 UTC 2024', 'total_memory': '251.23GB', 'used_memory': '7.53GB'},
 'job_type': 'particle_subtract',
 'params_spec': {'compute_use_ssd': {'value': False}, 'n_particles': {'value': 57}},
 'project_uid': 'P6',
 'status': 'failed',
 'uid': 'J282',
 'version': 'v4.6.0'}

Here is the output of the command, and the box size is 512 px.
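
For what it's worth, the reported figure is self-consistent: a quick back-of-the-envelope check (just the shape from the error message times the 4 bytes of a float32 element; nothing cryoSPARC-specific assumed) reproduces the ~228 GiB that numpy tried to allocate:

n_elements = 61_160_257_636        # shape (61160257636,) from the error message
bytes_per_float32 = 4              # numpy float32 element size
total_bytes = n_elements * bytes_per_float32
print(f"{total_bytes / 2**30:.1f} GiB")   # -> 227.8 GiB, i.e. the ~228 GiB in the error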

This is my RAM usage; it drops as soon as the job fails. Could that be the problem, i.e. that the RAM is already fully used and it cannot fit another array?
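
If you want to confirm that while the job runs, here is a minimal monitoring sketch (assuming psutil is installed on the worker node; watching free -h in a terminal works just as well):

import time
import psutil   # assumption: psutil is available on the worker node

# Print available system memory once per second while the job runs.
while True:
    vm = psutil.virtual_memory()
    print(f"available: {vm.available / 2**30:.1f} GiB of {vm.total / 2**30:.1f} GiB")
    time.sleep(1)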

Sounds very plausible!

I'm just a bit surprised that it does not depend on the number of particles.
I divided my particle set into 10 subgroups and it still failed with the same error and the same number.
Each group was only 26k particles, still at 512 px.

Any idea what that RAM is used for?

… because not all particles are loaded at once, but are loaded in fixed-size batches

Various volumes, such as the original, masked, and subtracted volumes, are loaded. RAM usage therefore depends primarily on the box size.
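
As a rough illustration of that box-size dependence (a sketch only; how many volumes the job actually holds in RAM, and with how much padding, is an implementation detail not shown here), per-volume memory grows with the cube of the box size:

# Rough scaling of per-volume memory with box size (float32 voxels).
for box in (256, 512, 1024):
    gib = (box ** 3) * 4 / 2**30
    print(f"box {box:4d} px -> {gib:6.2f} GiB per float32 volume")

At box 512 that is about 0.5 GiB per unpadded float32 volume; padded or upsampled copies grow accordingly, independently of the number of particles in the batch.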