Project import error in 3.2: name 'audit' is not defined

Hi @gabor,

We’ve fixed this bug in Patch 210401 (for v3.2.0).

To apply this patch, please follow the instructions available here: Apply Patches
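On a single workstation the overall flow is roughly the outline below; the linked guide is authoritative, and these are only the standard start/stop controls, not the patch steps themselves:

    # Rough outline only - follow the Apply Patches guide for the exact patch steps.
    cryosparcm stop        # stop the master before patching
    # ...apply Patch 210401 to cryosparc_master as described in the guide...
    cryosparcm start       # bring the instance back up
    cryosparcm status      # confirm the instance restarted cleanly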

Hi Stephan,

Thank you for the patch, it works now.

Best wishes,
Gabor

Hi @stephan

Just to add to this: I followed the instructions above for patching and can now import the project, but my workspaces are empty.

I can see the thumbnails in the Projects window, but when I open the workspace window there is nothing there. The project details show a size of 1.7 TB and all the data seems fine, but I just can't access it.

Any help would be greatly appreciated.

Best
Mike


Hi,

I had the same issue. When the first attempt to import your project fails, the workspaces.json file (in your project directory) is emptied. Luckily I had a backup and could restore it from there.
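If you have any copy of the project's workspaces.json, restoring it before retrying the import is just a file copy; a minimal sketch with placeholder paths:

    # Placeholder paths - adjust to your project directory and backup location.
    cd /path/to/project_dir                    # the project directory being imported, e.g. P2/
    cp workspaces.json workspaces.json.empty   # keep the emptied file, just in case
    cp /path/to/backup/workspaces.json .       # restore the backed-up copy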

Best,
Gabor

Hi @gabor,

That worked wonderfully, thank you.

@stephan, would it be possible to keep a persistent workspaces.json.bak (or something similar) for cases like this?
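In the meantime, a user-side stopgap could be a scheduled copy of the file - the cron entry below is only a hypothetical example with placeholder paths, not an official feature:

    # Hypothetical cron entry: keep a dated backup of workspaces.json once a day at 03:00.
    # (add via crontab -e)
    0 3 * * * cp /path/to/project_dir/workspaces.json /path/to/backups/workspaces.json.$(date +\%F)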

Best
Mike

Hi @mplockhartuom, @gabor,

Sorry for the inconvenience. This looks like a bug - can you explain in detail what you did? Could you also provide any logs from the failed import (by running cryosparcm log command_core, pressing Ctrl+C, then scrolling up to any error so you can copy and paste it here)?
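For reference, a minimal way to capture that output; the log command streams until you stop it, and the grep line is only an optional way to search a saved copy:

    # Stream the command_core log; press Ctrl+C once the failed import has scrolled past.
    cryosparcm log command_core

    # Optional: if you saved the output to a file, search it for the import error.
    # grep -n "audit" command_core.log    # hypothetical saved copy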

Dear @stephan,

I don't have any errors in the log; it may have been cleared by the patch install.
What I did was the following:

  1. Tried to import a project (P2) into the new cryoSPARC instance - it failed with 'audit' is not defined.
  2. Installed the patch.
  3. Tried to import P2 again, but I couldn't access the workspaces, just like @mplockhartuom.
  4. Copied workspaces.json from a backup into P2/, replacing the empty file in the directory (see the sketch after this list).
  5. Re-imported the P2 project, and it was successful.
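For anyone repeating step 4, it is just a file copy, optionally followed by a quick check that the restored file is non-empty, valid JSON before re-importing (the paths here are placeholders):

    # Placeholder paths - P2/ is the project directory being re-imported.
    cp /path/to/backup/workspaces.json /data/P2/workspaces.json
    ls -l /data/P2/workspaces.json             # should no longer be (near) empty
    python -m json.tool /data/P2/workspaces.json > /dev/null && echo "workspaces.json parses OK"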

Best,
Gabor

We have exactly the same issue as described by @mplockhartuom.

Hey Everyone,

We’ve released another patch that addresses the “empty workspaces” issue - please install the patch, restart your instance, and let us know if it helps.
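On a single workstation that sequence is roughly the following; the version check is an assumption about what cryosparcm status prints, so treat it as optional:

    # Apply the new patch per the Apply Patches guide, then restart so
    # command_core picks it up before re-importing the project.
    cryosparcm restart
    cryosparcm status | grep -i version    # assumption: status reports the running version/patch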

Hi @stephan
I have exactly the same issue as described by @mplockhartuom.
I installed the newest patch and re-imported the project, but the workspaces remain empty.
I have no backup of the old workspaces.json. What should I do now?
Thanks,
Fang K

Hi @phoeningo - we had the same situation. Not a great solution, but if you just create empty workspaces in the newly imported project, they will populate with your jobs. This is what we did: we just kept creating new workspaces until all the jobs were populated.
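If you would rather script it than click through the UI, something along these lines might work via cryosparcm cli - but the function name and arguments below are assumptions on my part, so the web-interface route above is the verified one:

    # Assumption: command_core exposes create_empty_workspace with these arguments.
    # Repeat (W2, W3, ...) until all imported jobs show up in a workspace.
    cryosparcm cli "create_empty_workspace(project_uid='P1', created_by_user_id='<your user id>', title='W2')"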

Hi @olibclarke, interesting!
Thank you very much!

I updated to the latest version. When trying to import the project, I still get the audit error, and adding additional workspaces doesn't do anything.

Any help appreciated.

Hi @rshek,

Can you paste the logs from cryosparcm log command_core?

I think it was because I didn't restart cryosparcm after the patch. It worked after I reinstalled cryoSPARC and applied the patch again (I'm surprised the latest install files don't include the patch - is it the base 3.2 version?). Also, maybe this is a naive question, but I can't seem to apply the patch to cryosparc_worker on a single workstation. I moved the cryoSPARC patch tar.gz file into the cryosparc_worker folder and ran tar xvzf on it, but there isn't a bin/cryosparcw patch? I'm currently stuck with a version mismatch. Thanks in advance.

Never mind, I was being dumb, haha. ./cryosparcw patch worked - it's working fine now.
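For anyone else hitting the same version mismatch on a single workstation, the worker-side steps that ended up working here were roughly these (the archive name is a placeholder for whatever patch file you downloaded):

    cd /path/to/cryosparc_worker
    tar -xzvf cryosparc_worker_patch.tar.gz    # unpack the patch inside cryosparc_worker
    ./bin/cryosparcw patch                     # apply it
    cryosparcm restart                         # then restart cryoSPARC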

Okay, it's still happening: no audit error, but a blank import.


COMMAND CORE STARTED ===  2021-04-04 20:24:32.887411  ==========================
*** BG WORKER START
 * Serving Flask app "command_core" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
****** UPDATING WORKER GPU INFO
****** UPDATING ALL JOB SIZES IN 10s
****** EXPORTING ALL PROJECTS IN 60s
======== Creating User: rshek@uw.edu Password123
****** UPDATING WORKER GPU INFO
COMMAND CORE STARTED ===  2021-05-26 23:18:48.090824  ==========================
*** BG WORKER START
 * Serving Flask app "command_core" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
****** UPDATING WORKER GPU INFO
****** UPDATING ALL JOB SIZES IN 10s
****** EXPORTING ALL PROJECTS IN 60s
[GPU_INFO]: Failed calling the python function to get GPU info on ip-172-31-12-64.us-west-2.compute.internal: Command '['ssh', 'ubuntu@ip-172-31-12-64.us-west-2.compute.internal', 'bash -c "eval $(/home/ubuntu/cryosparc_worker/bin/cryosparcw env); timeout 30 python /home/ubuntu/cryosparc_worker/cryosparc_compute/get_gpu_info.py"']' returned non-zero exit status 255.
****** UPDATING WORKER GPU INFO
[GPU_INFO]: Failed calling the python function to get GPU info on ip-172-31-12-64.us-west-2.compute.internal: Command '['ssh', 'ubuntu@ip-172-31-12-64.us-west-2.compute.internal', 'bash -c "eval $(/home/ubuntu/cryosparc_worker/bin/cryosparcw env); timeout 30 python /home/ubuntu/cryosparc_worker/cryosparc_compute/get_gpu_info.py"']' returned non-zero exit status 255.
COMMAND CORE EXITED ====  2021-05-26 23:22:28.168126  ==========================
COMMAND CORE STARTED ===  2021-05-26 23:22:28.948310  ==========================
*** BG WORKER START
 * Serving Flask app "command_core" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
COMMAND CORE STARTED ===  2021-05-26 23:22:48.808913  ==========================
*** BG WORKER START
 * Serving Flask app "command_core" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
****** UPDATING WORKER GPU INFO
****** UPDATING ALL JOB SIZES IN 10s
****** EXPORTING ALL PROJECTS IN 60s
COMMAND CORE STARTED ===  2021-05-26 23:24:54.300398  ==========================
*** BG WORKER START
 * Serving Flask app "command_core" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
****** UPDATING WORKER GPU INFO
****** UPDATING ALL JOB SIZES IN 10s
****** EXPORTING ALL PROJECTS IN 60s
Indexing /home/ubuntu ----
Base path:  /home/ubuntu
Indexing /home ----
Base path:  /home
Indexing / ----
Base path:  /
Indexing /data ----
Base path:  /data
Indexing /data/P1 ----
Base path:  /data/P1
[IMPORT_PROJECT] : Importing project from /data/P1
[IMPORT_PROJECT] :     Created project P1
[IMPORT_PROJECT] :     Created workspace W1 in P1
[IMPORT_PROJECT] :     Inserting jobs into project...
[IMPORT_PROJECT] :     Uploading project image data...
[IMPORT_PROJECT] :     Done. Uploaded 7 files in 0.21s
[IMPORT_PROJECT] :     Inserted job document in 0.21s...
[IMPORT_PROJECT] :     Inserting streamlogs into jobs...
[IMPORT_PROJECT] :     Done. Inserted 0 streamlogs in 0.00s...
[IMPORT_PROJECT] :   Imported J1 into P1 in 0.21s...
[IMPORT_PROJECT] : Imported project from /data/P1 as P1 in 0.24s
[EXPORT_PROJECT] : Exporting project P1...
[EXPORT_PROJECT] : Exported project P1 to /data/P1/project.json in 0.00s
[EXPORT_PROJECT] : Exporting project P1...
[EXPORT_PROJECT] : Exported project P1 to /data/P1/project.json in 0.00s
[EXPORT_PROJECT] : Exporting project P1...
[EXPORT_PROJECT] : Exported project P1 to /data/P1/project.json in 0.00s
[EXPORT_PROJECT] : Exporting project P1...
[EXPORT_PROJECT] : Exported project P1 to /data/P1/project.json in 0.00s

This seems to have fixed my issue after installing the patches. Thanks Oli.

Hi @rshek,

Are you still having this issue? Do you have access to the original project directory?
Can you update to the latest patch and try re-importing the project?

The workaround I used was to use an older .json file, but it was missing all the recent jobs. I just started random new jobs (which obviously failed) until the numbering caught up to the most recent job number. I haven't encountered the issue recently. Thank you for following up, Stephan!