Is it possible to repopulate the workspaces.json file? The file seems to have gotten corrupted during transfer, and now when I import the project into another workstation the workspaces are empty. TIA
@vamsee What are the versions and patch levels of cryoSPARC on the old and new workstations? This information is contained in the cryosparc_master/version and cryosparc_master/patch files (cryosparc_master/patch may not exist).
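For convenience, here is a minimal sketch that prints both values on a workstation; the install path is a placeholder you would need to adjust:

import os

# Placeholder: point this at the actual cryosparc_master directory.
master_dir = "/path/to/cryosparc_master"

for name in ("version", "patch"):
    path = os.path.join(master_dir, name)
    if os.path.exists(path):
        with open(path) as f:
            print(f"{name}: {f.read().strip()}")
    else:
        print(f"{name}: file not present")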
Also, can you confirm that the workspaces.json for the project in question differs between the workstations?
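If it helps with that comparison, a small sketch like this can be run once both copies sit side by side. The filenames are placeholders, and the uid field is an assumption about the exported document structure; note that if one copy is truly corrupted, json.load will fail on it outright, which already answers the question:

import json

# Placeholders: copies of workspaces.json from each workstation.
with open("workspaces_old.json") as f:
    old = json.load(f)
with open("workspaces_new.json") as f:
    new = json.load(f)

# 'uid' is assumed to identify each workspace document.
old_uids = {w.get("uid", "?") for w in old}
new_uids = {w.get("uid", "?") for w in new}
print("only in old copy:", sorted(old_uids - new_uids))
print("only in new copy:", sorted(new_uids - old_uids))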
I’m also interested in repopulating workspaces.json, as it seems it is not always generated from our Live sessions. We have a dedicated cryoSPARC instance for Live, and every now and then import to another instance fails because the file is missing. Initially I thought it was an error during import, but after monitoring a bit more it seems to happen before that.
During one scope session last week, three Live sessions were run in three different projects. The first project/session didn’t contain the workspaces.json file, while the next two did. All were started and stopped the same way. The file seems to be missing before we even attempt to import it into another instance.
The Live instance shows the workspace and jobs fine in the master.
We’re on version 3.3.2+220518.
edit: Perhaps this post, “Workspace json missing”, is more similar to our issue. Restarting the master also didn’t generate the file in our case.
I also suspect this might be the same issue. Were all Live sessions explicitly paused before the master was restarted?
Not just paused, but marked as completed. I also checked for hanging python processes and even restarted the server without any luck.
Two more projects with missing workspaces.json files today. Should the file appear as soon as a project is created? Or when the first workspace is created?
The “ground truth” for the workspace configuration is maintained in the cryoSPARC instance’s database, and should be copied automatically and frequently to a file workspaces.json inside the project directory. If workspaces.json is absent or outdated, the project’s import into another cryoSPARC instance will be faulty or will fail altogether.
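As a quick sanity check before importing, a sketch like the following can list what a project’s workspaces.json actually contains. Run it in the project directory; the uid and title fields are assumptions about the exported documents, so .get() is used to tolerate absences:

import json

with open("workspaces.json") as f:
    workspaces = json.load(f)

print(f"{len(workspaces)} workspace document(s) on disk")
for w in workspaces:
    # 'uid' and 'title' are assumed field names; the other two appear
    # in the export script below.
    print(w.get("uid"), w.get("workspace_type"), w.get("status"), w.get("title"))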
We are currently working to make the creation and update of the workspaces.json file more reliable. Until the update is released, and only if problems with the automatic creation or update of workspaces.json are encountered, one may force creation/update of workspaces.json for that project using this script written by a member of our team:
import json
import os
import sys
import time

from bson import json_util
from cryosparc_compute import client
from pymongo import MongoClient


def export_workspaces(project_uid):
    tic = time.time()

    # Connect to the instance's command server and MongoDB database.
    host = os.getenv('CRYOSPARC_MASTER_HOSTNAME')
    command_port = int(os.getenv('CRYOSPARC_BASE_PORT')) + 2
    db_port = int(os.getenv('CRYOSPARC_BASE_PORT')) + 1
    cli = client.CommandClient(host, command_port)
    mongo_url = f"mongodb://{host}:{db_port}"
    db = MongoClient(mongo_url)['meteor']

    project_dir = cli.get_project_dir_abs(project_uid)

    # Fetch all non-deleted workspace documents for this project.
    all_workspaces = list(db.workspaces.find(
        {'project_uid': project_uid, 'deleted': False}, {'_id': 0}))
    print(f"Exporting {len(all_workspaces)} workspace(s) in {project_uid}...")

    # Strip stale worker references from paused/completed Live sessions.
    for workspace in all_workspaces:
        if workspace['workspace_type'] == 'live':
            if workspace['status'] in ['paused', 'completed']:
                workspace['rtp_workers'] = {}
                workspace['rtp_childs'] = []

    # Write the documents to workspaces.json in the project directory.
    export_workspace_path = "workspaces.json"
    abs_export_workspace_path = os.path.join(
        project_dir, export_workspace_path)
    with open(abs_export_workspace_path, 'w') as openfile:
        json.dump(all_workspaces, openfile, indent=4,
                  default=json_util.default)
    print(f"Done exporting workspaces in {time.time()-tic:.3f}s")


if __name__ == '__main__':
    assert len(sys.argv) > 1, "Must provide a project_uid to export."
    export_workspaces(sys.argv[1])
This script should be saved to a file export_workspaces.py. If there are problems with the creation/update of workspaces.json for a hypothetical project P1, one can then force the update with this command:
cryosparcm call python export_workspaces.py P1
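If the export succeeds, the script’s own print statements produce output along these lines (workspace count and timing will of course vary):

Exporting 3 workspace(s) in P1...
Done exporting workspaces in 0.045s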
I recommend that this command be executed by the Linux user who runs the cryoSPARC instance, as I otherwise foresee various file ownership/permissions problems.
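To verify the result afterwards, a quick ownership check along these lines can be used; the project path is a placeholder:

import os
import pwd

# Placeholder: the project directory that should contain workspaces.json.
path = "/path/to/project_dir/workspaces.json"

st = os.stat(path)
print(f"{path} is owned by {pwd.getpwuid(st.st_uid).pw_name}")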
Thank you for the workaround!
Hi again,
I got a chance to try the script today but got the following error (internal paths changed to ****):
*** client.py: command (http://****:39002/api) did not reply within timeout of 300 seconds, attempt 1 of 3
*** client.py: command (http://****:39002/api) did not reply within timeout of 300 seconds, attempt 2 of 3
*** client.py: command (http://****:39002/api) did not reply within timeout of 300 seconds, attempt 3 of 3
Traceback (most recent call last):
File "export_workspaces.py", line 34, in <module>
export_workspaces(sys.argv[1])
File "export_workspaces.py", line 16, in export_workspaces
all_workspaces = cli.list_workspaces(project_uid)
File "****_master/cryosparc_compute/client.py", line 58, in func
res = r.json()
File "****_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/site-packages/requests/models.py", line 898, in json
return complexjson.loads(self.text, **kwargs)
File "****_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "****_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "****_master/deps/anaconda/envs/cryosparc_master_env/lib/python3.7/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
The initial timeouts are all reported within about a second, not after the stated 3 × 300 seconds.
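In case it helps with debugging, a rough probe like the following could show what the endpoint actually returns instead of JSON. The hostname is redacted as above and the port is taken from the error messages; requests is available in the master environment per the traceback. A plain GET is not the RPC protocol the client speaks, but the raw status and body may still be revealing:

import requests

# Hostname redacted; port 39002 from the error messages above.
url = "http://****:39002/api"

try:
    r = requests.get(url, timeout=10)
    print("HTTP status:", r.status_code)
    print("first 500 bytes of body:", r.text[:500])
except requests.exceptions.RequestException as e:
    print("connection failed:", e)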
Any ideas on how to proceed?
The new version of the script works well in our hands. Thanks for the workaround!