Hello! I am having trouble importing data from one project to another. As described in the tutorial (Tutorial: Data Management in cryoSPARC - CryoSPARC Guide), it should be easy: export the results from the output tab of your job, which creates a .csg file, then import it into another project via the “Import Result Group” tool using the absolute path to the exported file.
Problem is, I am getting the following error message:
Nothing I have tried so far helps: importing the file from inside the new project directory, importing it from inside the job folder itself, using soft links in the new project folder… I am really not sure what else to try.
Maybe somebody can help? I would highly appreciate it!
Thanks in advance.
Thank you so much for your reply.
I can answer yes to all points, and in the meantime it finally worked.
I am really not sure whether I misunderstood something or whether it is supposed to work like this, but this is what worked for me: as mentioned, I had soft links to the files I wanted to use. It turns out I had to use exactly this directory…
Create a link to your exported job inside the new project directory, and when browsing to the file, navigate to the .csg file through that link. I hope this is understandable.
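To make this concrete, here is a minimal sketch of the workaround (Python for illustration; a plain ln -s on the command line does the same thing). All paths below are invented placeholders, so adjust them to your own project and export locations:

```python
import os

# Hypothetical paths -- substitute your real project and export directories.
new_project_dir = "/path/to/new_project/P9"
exported_group = "/path/to/old_project/P8/exports/groups/P8_J100_particles"

# Create a soft link to the exported result group inside the new project directory...
link_path = os.path.join(new_project_dir, "P8_J100_particles")
os.symlink(exported_group, link_path)

# ...then, in the "Import Result Group" dialogue, select the .csg file
# *through the link*, e.g.:
#   /path/to/new_project/P9/P8_J100_particles/P8_J100_particles_exported.csg
```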
And thanks again for trying to help!!!
Hello, I am also struggling with the Import Result Group job type. Like the original poster, I have been following the “Guide: Manipulating .cs Files Created By cryoSPARC”. Basically, I export particles from the output tab of my job, edit the .cs file in cryosparcm icli, and select the accompanying .csg file in the Import Result Group dialogue. I am not switching projects or anything, so the job exports to “exports/groups/P8_J100_particles”, and I am importing from exactly this folder. The error message I get is below. When you have time, could you let me know what I might be doing wrong?
thanks
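For context, the .cs editing step looks roughly like the sketch below (run inside cryosparcm icli or any Python environment with numpy). The file name and the subset of particles kept here are placeholders rather than my actual edit:

```python
import numpy as np

# Assumed file name for illustration -- use the .cs file referenced by your .csg
cs_path = "exports/groups/P8_J100_particles/P8_J100_particles_exported.cs"

particles = np.load(cs_path)     # .cs files are numpy structured arrays
print(particles.dtype.names)     # inspect the available fields
subset = particles[:10000]       # e.g. keep only the first 10,000 particles

# Write the edited array back; passing a file object stops np.save from
# appending a ".npy" extension to the file name.
with open(cs_path, "wb") as f:
    np.save(f, subset)
```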
License is valid.
Running job on master node
[CPU: 79.5 MB] Project P8 Job J111 Started
[CPU: 79.7 MB] Master running v3.3.2+220824, worker running v3.3.2+220824
[CPU: 79.7 MB] Working in directory: /mnt/Woody/EM3/cryosparc-PROJECTS/P8/J111
[CPU: 79.7 MB] Running on lane default
[CPU: 79.7 MB] Resources allocated:
[CPU: 79.7 MB] Worker: 192.168.1.86
[CPU: 79.7 MB] --------------------------------------------------------------
[CPU: 79.7 MB] Importing job module for job type import_result_group...
[CPU: 240.2 MB] Job ready to run
[CPU: 240.2 MB] ***************************************************************
[CPU: 240.2 MB] Importing result group from /mnt/BuzzLightyear/EM3/cryosparc-PROJECTS/P8/exports/groups/P8_J100_particles/P8_J100_particles_exported.csg
[CPU: 240.2 MB] Traceback (most recent call last):
File "cryosparc_master/cryosparc_compute/run.py", line 85, in cryosparc_compute.run.main
File "/home/cryosparcuser/cryosparc/cryosparc_master/cryosparc_compute/jobs/imports/run.py", line 1149, in run_import_result_group
"Result group to import must be located inside the project directory"
AssertionError: Result group to import must be located inside the project directory
[CPU: 407.4 MB] Cleared -3409 bytes by deleting 0 items in 0.03s
Sorry to revive an old discussion, but this seems very relevant to my current problem.
We also run into this infamous “Result group to import must be located inside the project directory” error. We are trying to import particle indices from a cryoDRGN job so we can pursue their analysis in cryoSPARC (using the original particles, so that all the linked metadata remain valid in the project). The problem is that none of the options we have is fully satisfactory:
1. Assign the cryosparcuser and all Linux accounts that need this feature to the same group. If we do this, all Linux accounts in this group can write to their project directories, but they can also write to each other’s project directories, which is a big no-no.
2. Bother @daniel.s.d.larsson every single time we need to get write permission into a cryoSPARC project directory, on a case-by-case basis. He already has more than a full-time job running our local facility and doing research, so we’d like to minimize recourse to this option.
3. Use ad-hoc automation to make this happen. We are setting this up now, but this is basically a different version of option 2, so not ideal in the long run.
So my main question: is there a reason why “Import Job” and “Import Result Group” require the target files to be inside the project directory? I could not find an explanation in the documentation.
On the other hand, when importing raw movies, they can be anywhere on the filesystem and the import will work (provided that the cryosparcuser has read permission all the way to the movie files). Why can’t it be the same for jobs and result groups?
Hi @Guillaume, thanks so much for your detailed explanation.
Our dev team definitely has this issue on our roadmap; future versions of CryoSPARC will improve the experience and behaviour when importing jobs and results.
It seems like cryosparc-tools could maybe work, but I cannot test it because connecting to the instance requires the license. It is a shared instance, set up by somebody else, so I don’t know its license (I guess I could ask, but that is sure to raise eyebrows…).
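For reference, the connection step looks roughly like the sketch below, based on my reading of the cryosparc-tools documentation; the host, port, and credentials are placeholders, and the license ID is exactly the part I don’t have:

```python
from cryosparc.tools import CryoSPARC

# All values are placeholders for a shared instance; the license ID is required.
cs = CryoSPARC(
    license="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",  # instance license ID
    host="cryosparc.example.org",                    # master hostname
    base_port=39000,
    email="user@example.org",
    password="...",
)
assert cs.test_connection()
```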
If this is only a mechanism for making sure the script connects to the correct instance (and there is no actual verification of the license), maybe it would be better to have the instance advertise an API token (listed somewhere in the web UI, under “Instance Information” for example) that is different from the license. Then all users of the instance could use cryosparc-tools without needing to know the license.
The license is needed for verification at the moment, unfortunately; it is used to prevent unauthorized API calls to CryoSPARC. This is a limitation we are aware of. A token-based authentication system will be implemented in future cryosparc-tools releases.