Imported Project Shows No Tree View

Hey CryoSPARC team,

I’ve been splitting an old project up into individual workspaces for archiving. For a given workspace, I copy its files from the original project folder into a brand-new project folder along with the three .json files (workspaces, project, job_manifest), and import it with Attach Project. In CryoSPARC, I then delete the workspaces that were not copied into the new project. The card view for the newly made project shows the jobs in the copied workspace, but the tree view very briefly shows the nine loading dots and then sits on a blank grey background for 30+ minutes without loading anything. Running cli.update_parents_and_children_for_project(project_uid='P34') makes the tree view appear for some of the newer workspaces, but it doesn’t help for some of my older workspaces.
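In case it helps with debugging: the tree view presumably needs the 'parents'/'children' links between job documents to be mutually consistent, and links pointing at jobs that were never copied into the new project could be one symptom. Below is a hypothetical standalone checker, not a CryoSPARC API; the dict shape just mirrors the job-document dumps later in this post.

```python
def find_broken_links(jobs):
    """Given job documents with 'uid', 'parents' and 'children' lists,
    return (uid, other_uid, problem) tuples for links that are one-sided
    or that point at a job absent from this project."""
    by_uid = {j["uid"]: j for j in jobs}
    problems = []
    for j in jobs:
        for p in j.get("parents", []):
            if p not in by_uid:
                problems.append((j["uid"], p, "parent missing from project"))
            elif j["uid"] not in by_uid[p].get("children", []):
                problems.append((j["uid"], p, "parent does not list this job as a child"))
        for c in j.get("children", []):
            if c not in by_uid:
                problems.append((j["uid"], c, "child missing from project"))
            elif j["uid"] not in by_uid[c].get("parents", []):
                problems.append((j["uid"], c, "child does not list this job as a parent"))
    return problems

# Example with the J749 links from the dump below; J739 and J753 were
# (hypothetically) not copied into the new project:
jobs = [
    {"uid": "J749", "parents": ["J739"], "children": ["J757", "J753"]},
    {"uid": "J757", "parents": ["J749"], "children": []},
]
```

Running `find_broken_links(jobs)` on that example would flag the missing J739 parent and the missing J753 child, while the J749/J757 link checks out.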

Example: I have a project with two workspaces, each containing ~70-90 jobs.
W4 J749 Patch CTF from 20210924 → P34W4
W22 J3064 Patch CTF from 20221117 → P35W22
In the new projects I copied the files to, I am able to view the tree view for P35 W22, but for P34 W4 I am unable to see anything. I should note that the tree view works for all workspaces in the original project.

An icli dump for the new projects, taken after running update_parents_and_children_for_project, shows different representations between the two projects P34 and P35. P35, with the working tree view, has some signatures of the form /x**/x**/x****, which are not observed in P34, in addition to some other differences.

Brief excerpts of the parents/children job-document dumps from the two newly made projects:
P34 non-working tree view

queued_at': datetime.datetime(2021, 9, 24, 21, 20, 52, 288000), 'launched_at': datetime.datetime(2021, 9, 24, 21, 20, 52, 818000), 'started_at': datetime.datetime(2021, 9, 24, 21, 20, 54, 118000), 'running_at': datetime.datetime(2021, 9, 24, 21, 21, 1, 345000), 'waiting_at': None, 'completed_at': datetime.datetime(2021, 9, 25, 0, 44, 5, 250000), 'killed_at': None, 'failed_at': None, 'heartbeat_at': datetime.datetime(2021, 9, 25, 0, 43, 59, 942000), 'token_acquired_at': None, 'tokens_requested_at': None, 'last_scheduled_at': None, 'last_accessed': {'name': 'xxx', 'accessed_at': datetime.datetime(2021, 10, 5, 21, 53, 3, 153000)}, 'priority': 0, 'resources_needed': {'slots': {'CPU': 2, 'GPU': 1, 'RAM': 1}, 'fixed': {'SSD': False}}, 'resources_allocated': {}, 'run_on_master_direct': False, 'queued_to_lane': '', 'queue_index': None, 'queue_status': None, 'queue_message': None, 'queued_job_hash': None, 'interactive': False, 'interactive_hostname': '', 'interactive_port': None, 'PID_monitor': 135924, 'PID_main': 135925, 'PID_workers': , 'cluster_job_id': None, 'parents': ['J739'], 'children': ['J757', 'J753'], 'created_by_user_id': 'xxx', 'created_by_job_uid': None, 'cloned_from': None, 'is_experiment': False, 'job_dir': 'J749', 'job_dir_size': 4759026436, 'experiment_worker_path': None, 'ui_tile_width': 2, 'ui_tile_height': 1, 'enable_bench': False, 'bench': {}, 'instance_information': {'platform_node': 'c110294', 'platform_release': '3.10.0-1127.19.1.el7.x86_64', 'platform_version': '#1 SMP Tue Aug 25 17:23:54 UTC 2020', 'platform_architecture': 'x86_64', 'physical_cores': 40, 'max_cpu_freq': 4000.0, 'total_memory': '376.38GB', 'available_memory': '361.85GB', 'used_memory': '13.40GB', 'gpu_info': [{'id': 1, 'name': 'GeForce RTX 3090', 'mem': 25447170048}], 'CUDA_version': '11.1.0'}, 'project_uid_num': 1, 'uid_num': 749, 'ui_layouts': {}, 'last_exported': datetime.datetime(2023, 3, 17, 18, 30, 59, 383000), 'queued_to_hostname': 'c110294', 'queued_to_gpu': [1],
'no_check_inputs_ready': False, 'num_tokens': 1, 'job_sig':

P35 working tree view

status_num': 35, 'job_dir_size_last_updated': datetime.datetime(2023, 11, 15, 18, 9, 9, 441000), 'progress': , 'intermediate_results_size_bytes': 0, 'last_intermediate_data_cleared_amount': 0, 'last_intermediate_data_cleared_at': datetime.datetime(2023, 6, 9, 16, 21, 7, 244000), 'imported': True, 'is_final_result': False, 'is_ancestor_of_final_result': False, 'intermediate_results_size_last_updated': datetime.datetime(2023, 11, 15, 18, 9, 9, 441000)}, {'_id': ObjectId('6554fa527f4fe5b29e9cffad'), 'project_uid': 'P35', 'uid': 'J3064', 'PID_main': 31402, 'PID_monitor': 31401, 'PID_workers': , 'bench': {}, 'children': ['J3075'], 'cloned_from': None, 'cluster_job_id': None, 'cluster_job_monitor_event_id': None, 'cluster_job_monitor_last_run_at': None, 'cluster_job_monitor_retries': 0, 'cluster_job_status': None, 'cluster_job_status_code': None, 'completed_at': datetime.datetime(2022, 11, 17, 18, 21, 31, 272000), 'completed_count': 1, 'created_at': datetime.datetime(2022, 11, 16, 17, 38, 25, 97000), 'created_by_job_uid': None, 'created_by_user_id': 'xxx', 'deleted': False, 'description': 'Enter a description.', 'enable_bench': False, 'errors_build_inputs': {}, 'errors_build_params': {}, 'errors_run': , 'experiment_worker_path': None, 'failed_at': None, 'heartbeat_at': datetime.datetime(2022, 11, 17, 18, 21, 27, 74000), 'input_slot_groups': [{'type': 'exposure', 'name': 'exposures', 'title': 'Micrographs', 'description': 'Micrographs for CTF estimation', 'count_min': 1, 'count_max': inf, 'repeat_allowed': False, 'slots':

'params_secs': {'general_settings': {'title': 'General settings', 'desc': '', 'order': 0}, 'ctf_settings': {'title': 'CTF estimation', 'desc': '', 'order': 1}, 'compute_settings': {'title': 'Compute settings', 'desc': '', 'order': 2}}, 'params_spec': {}, 'parents': ['J3063'], 'priority': 0, 'project_uid_num': 1, 'queue_index': None, 'queue_message': None, 'queue_status': None, 'queued_at': datetime.datetime(2022, 11, 16, 19, 39, 25, 362000), 'queued_job_hash': None, 'queued_to_lane': '', 'resources_allocated': {}, 'resources_needed': {'slots': {'CPU': 2, 'GPU': 1, 'RAM': 1}, 'fixed': {'SSD': False}}, 'run_as_user': None, 'run_on_master_direct': False, 'running_at': datetime.datetime(2022, 11, 17, 12, 25, 45, 414000), 'started_at': datetime.datetime(2022, 11, 17, 12, 25, 38, 985000), 'status': 'completed', 'title': 'New Job J3064'
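The two excerpts appear to differ not only in values but in which fields exist at all (the P35 document has, e.g., 'imported' and the 'cluster_job_monitor_*' fields, which the P34 excerpt lacks; my guess, unconfirmed, is that jobs created by different CryoSPARC versions carry different schemas). A systematic way to compare two fetched job documents is to diff their key sets. This is a plain standalone helper, nothing CryoSPARC-specific, shown here with small illustrative subsets of the dumps above:

```python
def diff_keys(doc_a, doc_b):
    """Return (only_in_a, only_in_b): sorted lists of top-level keys
    present in one document but not the other."""
    a, b = set(doc_a), set(doc_b)
    return sorted(a - b), sorted(b - a)

# Tiny illustrative subsets of the P34 (J749) and P35 (J3064) documents:
p34_job = {"queued_at": None, "parents": ["J739"], "num_tokens": 1}
p35_job = {"queued_at": None, "parents": ["J3063"], "imported": True, "run_as_user": None}

only_p34, only_p35 = diff_keys(p34_job, p35_job)
```

With these subsets, `only_p34` is `["num_tokens"]` and `only_p35` is `["imported", "run_as_user"]`. In a real session you would run this on the full documents fetched from the database rather than on excerpts.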

May I suggest an alternative approach for this use case? Because I have not tested this approach myself, you may want to hold on to the original project directory at least until you have refined the procedure and confirmed that it produces the intended results. Also, review the Data Management Guide regarding the project attach/detach and archive/unarchive concepts and procedures.

  1. Archive the original project (to ensure a consistent, “copyable” state of the project on the filesystem).
  2. Create one copy of the original project directory for each smaller project you want to “spawn”.
  3. Remove cs.lock from the new project directory copies. This lock removal is a rare exception to the general recommendation never to manually remove or otherwise manipulate cs.lock files.
  4. Attach the new project directory copies.
  5. In each of the newly attached projects:
    1. Create a “trash” workspace.
    2. Select the jobs to be discarded and move them to the “trash” workspace.
    3. Delete the “trash” workspace.
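Steps 2 and 3 above can be sketched as a small script. This is untested and the paths are placeholders; archiving (step 1), attaching the copies (step 4), and the workspace cleanup (step 5) would still be done through CryoSPARC itself as usual.

```python
import shutil
from pathlib import Path

def spawn_project_copy(original_dir, new_dir):
    """Copy an archived project directory (step 2) and remove its cs.lock
    (step 3) so the copy can be attached as an independent project.
    Removing cs.lock is normally something you should never do; it is the
    rare exception described in the procedure above."""
    original_dir, new_dir = Path(original_dir), Path(new_dir)
    shutil.copytree(original_dir, new_dir)
    lock = new_dir / "cs.lock"
    if lock.exists():
        lock.unlink()
    return new_dir

# Hypothetical usage: one copy per smaller project to spawn, then attach
# each copy via the CryoSPARC UI (step 4):
# for name in ["P_sub1", "P_sub2"]:
#     spawn_project_copy("/data/archive/P34_original", f"/data/projects/{name}")
```

`copytree` requires that the destination not already exist, which is a useful safety check here: it prevents accidentally merging into an existing project directory.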