Scenario: all analysis data in the Project directory P1 have been deleted. Of course, all the information is still available in MongoDB (as is the experimental data).
Is there an automatic way to recreate all the analysis data in that P1 directory without manually cloning jobs?
Unfortunately, the mongoDB mainly saves the project management info.
Here is a list of what information is saved where:
The ‘jobs’ collection saves the logical connections among the jobs, mainly the input/output groups. This is roughly the same information that is dumped in the job.json file in each job directory.
The ‘events’ collection saves the log messages and links to the images displayed in the web interface.
The fs.files and fs.chunks collections take up the bulk of the mongoDB’s storage space. They save the png/pdf/txt files that are displayed during and after job runs. These files are also dumped into the gridfsdata_0 files in the job directories. Most of the files saved there are intermediate results (unless the generation of intermediate plots is turned off by the user). If the interest lies in finding an image of the 2D class averages or an FSC plot, then yes, you can recover it from the mongoDB.
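In case it helps, here is a minimal sketch of pulling those stored files out programmatically. It is hedged: the function name `export_gridfs_files` and the flat output layout are my own, and the `fs` argument is assumed to be any GridFS-style bucket whose entries expose `filename` and `read()` (e.g. a pymongo `gridfs.GridFS` instance opened on the database), not a specific CryoSPARC API.

```python
import os

def export_gridfs_files(fs, out_dir):
    """Copy every file stored in a GridFS-style bucket into out_dir.

    `fs` is anything exposing GridFS's find() iterator whose items
    have a .filename attribute and a .read() method, e.g. a pymongo
    gridfs.GridFS instance:  fs = gridfs.GridFS(db)
    Returns the number of files written.
    """
    os.makedirs(out_dir, exist_ok=True)
    count = 0
    for gf in fs.find():
        # Stored filenames may contain path separators; keep only the
        # basename here (note: this can collide across jobs, so a real
        # script would likely recreate the per-job subdirectories).
        name = os.path.basename(gf.filename or "unnamed_%d" % count)
        with open(os.path.join(out_dir, name), "wb") as out:
            out.write(gf.read())
        count += 1
    return count
```

This only recovers the displayed plots and text dumps, not the actual .cs/.mrc results.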
The actual analysis result files are the .cs files and the .mrc files in the job directories. Metadata is saved in the .cs files; image/volume data are saved in the .mrc files. As mentioned above, the job info and plot files are dumped into the job directories too. Therefore, it is possible to reconstruct the workflow if you have the job directories. But if you only have the mongoDB left, then you might have to redo all the computations.
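If the job directories do survive, the .cs files can be inspected directly: as far as I can tell they are NumPy structured arrays stored in .npy format, so `np.load` reads them regardless of the extension. The helper name `summarize_cs` below is my own, and the field names in the usage note are illustrative; the actual fields depend on the job type.

```python
import numpy as np

def summarize_cs(path):
    """Return (row count, field names) for a CryoSPARC-style .cs file.

    Assumes the file is a NumPy structured array saved in .npy format,
    so np.load can read it even though the extension is .cs.
    """
    data = np.load(path)
    return data.shape[0], list(data.dtype.names)
```

For a particle stack you would expect fields along the lines of a uid column plus grouped columns such as blob/path, but check `dtype.names` on your own files rather than relying on any particular set.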
@jcducom Unfortunately, an automated procedure to replicate the deleted processing results has not been implemented. Re-creation and re-running of individual jobs based on information from the database seems to be the way forward. Since significant intervention may be involved, be sure to explore the cli documentation (if you haven’t already) for some potentially time-saving commands.