Hi CryoSPARC team,

Previously I ran CryoSPARC on Ubuntu 20.04 with the default port 39000. As required for version 5.0, I upgraded the system to 22.04 and updated CryoSPARC. Now when I start CryoSPARC there is a warning that port 39000 overlaps with the ephemeral port range, but otherwise jobs run as before. I tried manually changing the port to 61000 in config.sh, but then startup failed:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /spshared/apps/cryosparc4/cryosparc_master/cli/database.py:77 in configure │
│ │
│ 74 │ logfile = core.settings.run_dir / "database.log" │
│ 75 │ print_fn = out.print if verbose else lambda *_: None │
│ 76 │ try: │
│ ❱ 77 │ │ database_management.configure_mongo(core.settings, logfile, print=print_fn) │
│ 78 │ except Exception: │
│ 79 │ │ err.log() │
│ 80 │ │ err.print_exception() │
│ │
│ /spshared/apps/cryosparc4/cryosparc_master/core/database_management.py:53 in configure_mongo │
│ │
│ 50 │ │
│ 51 │ try: │
│ 52 │ │ print("Initializing replica set") │
│ ❱ 53 │ │ initialize_replica_set(settings.db_port, settings.mongo_db_name, timeout_ms=sett │
│ 54 │ │ │
│ 55 │ │ # create client for configuration │
│ 56 │ │ mongo_client = try_get_pymongo_client( │
│ │
│ /spshared/apps/cryosparc4/cryosparc_master/core/database_management.py:99 in │
│ initialize_replica_set │
│ │
│ 96 │ ) │
│ 97 │ try: │
│ 98 │ │ admin_db = try_get_pymongo_db(mongo_client) │
│ ❱ 99 │ │ admin_db.command("replSetGetStatus") # check replset │
│ 100 │ except OperationFailure as e: │
│ 101 │ │ # https://github.com/mongodb/mongo/blob/master/src/mongo/base/error_codes.yml │
│ 102 │ │ if e.code == 94 and admin_db is not None: │
│ │
│ /spshared/apps/cryosparc4/cryosparc_master/.pixi/envs/master/lib/python3.12/site-packages/pymong │
│ o/_csot.py:119 in csot_wrapper │
│ │
│ 116 │ │ │ │ if timeout is not None: │
│ 117 │ │ │ │ │ with _TimeoutContext(timeout): │
│ 118 │ │ │ │ │ │ return func(self, *args, **kwargs) │
│ ❱ 119 │ │ │ return func(self, *args, **kwargs) │
│ 120 │ │
│ 121 │ return cast(F, csot_wrapper) │
│ 122 │
│ │
│ /spshared/apps/cryosparc4/cryosparc_master/.pixi/envs/master/lib/python3.12/site-packages/pymong │
│ o/synchronous/database.py:930 in command │
│ │
│ 927 │ │ │ connection, │
│ 928 │ │ │ read_preference, │
│ 929 │ │ ): │
│ ❱ 930 │ │ │ return self._command( │
│ 931 │ │ │ │ connection, │
│ 932 │ │ │ │ command, │
│ 933 │ │ │ │ value, │
│ │
│ /spshared/apps/cryosparc4/cryosparc_master/.pixi/envs/master/lib/python3.12/site-packages/pymong │
│ o/synchronous/database.py:770 in _command │
│ │
│ 767 │ │ │
│ 768 │ │ command.update(kwargs) │
│ 769 │ │ with self._client._tmp_session(session) as s: │
│ ❱ 770 │ │ │ return conn.command( │
│ 771 │ │ │ │ self._name, │
│ 772 │ │ │ │ command, │
│ 773 │ │ │ │ read_preference, │
│ │
│ /spshared/apps/cryosparc4/cryosparc_master/.pixi/envs/master/lib/python3.12/site-packages/pymong │
│ o/synchronous/helpers.py:45 in inner │
│ │
│ 42 │ │ from pymongo.synchronous.pool import Connection │
│ 43 │ │ │
│ 44 │ │ try: │
│ ❱ 45 │ │ │ return func(*args, **kwargs) │
│ 46 │ │ except OperationFailure as exc: │
│ 47 │ │ │ if no_reauth: │
│ 48 │ │ │ │ raise │
│ │
│ /spshared/apps/cryosparc4/cryosparc_master/.pixi/envs/master/lib/python3.12/site-packages/pymong │
│ o/synchronous/pool.py:538 in command │
│ │
│ 535 │ │ if self.op_msg_enabled: │
│ 536 │ │ │ self._raise_if_not_writable(unacknowledged) │
│ 537 │ │ try: │
│ ❱ 538 │ │ │ return command( │
│ 539 │ │ │ │ self, │
│ 540 │ │ │ │ dbname, │
│ 541 │ │ │ │ spec, │
│ │
│ /spshared/apps/cryosparc4/cryosparc_master/.pixi/envs/master/lib/python3.12/site-packages/pymong │
│ o/synchronous/network.py:218 in command │
│ │
│ 215 │ │ │ if client: │
│ 216 │ │ │ │ client._process_response(response_doc, session) │
│ 217 │ │ │ if check: │
│ ❱ 218 │ │ │ │ helpers_shared._check_command_response( │
│ 219 │ │ │ │ │ response_doc, │
│ 220 │ │ │ │ │ conn.max_wire_version, │
│ 221 │ │ │ │ │ allowable_errors, │
│ │
│ /spshared/apps/cryosparc4/cryosparc_master/.pixi/envs/master/lib/python3.12/site-packages/pymong │
│ o/helpers_shared.py:247 in _check_command_response │
│ │
│ 244 │ elif code == 43: │
│ 245 │ │ raise CursorNotFound(errmsg, code, response, max_wire_version) │
│ 246 │ │
│ ❱ 247 │ raise OperationFailure(errmsg, code, response, max_wire_version) │
│ 248 │
│ 249 │
│ 250 def _raise_last_write_error(write_errors: list[Any]) -> NoReturn: │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
OperationFailure: Our replica set config is invalid or we are not a member of it, full error: {'ok': 0.0, 'errmsg': 'Our replica set config is invalid or we
are not a member of it', 'code': 93, 'codeName': 'InvalidReplicaSetConfig'}
2026-01-31T16:07:51.483034 Error configuring database. Most recent database log lines:
2026-01-31T16:07:48.000-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-61001.sock
2026-01-31T16:07:48.000-0500 I NETWORK [signalProcessingThread] Shutting down the global connection pool
2026-01-31T16:07:48.000-0500 I STORAGE [signalProcessingThread] Shutting down the PeriodicThreadToAbortExpiredTransactions
2026-01-31T16:07:48.001-0500 I REPL [signalProcessingThread] Shutting down the ReplicationCoordinator
2026-01-31T16:07:48.001-0500 I REPL [signalProcessingThread] shutting down replication subsystems
2026-01-31T16:07:48.001-0500 I ASIO [Replication] Killing all outstanding egress activity.
2026-01-31T16:07:48.001-0500 I SHARDING [signalProcessingThread] Shutting down the ShardingInitializationMongoD
2026-01-31T16:07:48.001-0500 I COMMAND [signalProcessingThread] Killing all open transactions
2026-01-31T16:07:48.001-0500 I - [signalProcessingThread] Killing all operations for shutdown
2026-01-31T16:07:48.001-0500 I NETWORK [signalProcessingThread] Shutting down the ReplicaSetMonitor
2026-01-31T16:07:48.001-0500 I REPL [signalProcessingThread] Shutting down the LogicalTimeValidator
2026-01-31T16:07:48.002-0500 I CONTROL [signalProcessingThread] Shutting down free monitoring
2026-01-31T16:07:48.002-0500 I FTDC [signalProcessingThread] Shutting down full-time data capture
2026-01-31T16:07:48.002-0500 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2026-01-31T16:07:49.001-0500 I STORAGE [signalProcessingThread] Shutting down the HealthLog
2026-01-31T16:07:49.001-0500 I STORAGE [signalProcessingThread] Shutting down the storage engine
2026-01-31T16:07:49.001-0500 I STORAGE [WTOplogJournalThread] oplog journal thread loop shutting down
2026-01-31T16:07:49.001-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2026-01-31T16:07:49.014-0500 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
2026-01-31T16:07:49.014-0500 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
2026-01-31T16:07:49.036-0500 I STORAGE [signalProcessingThread] Downgrading WiredTiger datafiles.
2026-01-31T16:07:49.261-0500 I STORAGE [signalProcessingThread] WiredTiger message [1769893669:261588][11374:0x7678efdff640], txn-recover: Main recovery loop: starting at 13393/168064 to 13394/256
2026-01-31T16:07:49.334-0500 I STORAGE [signalProcessingThread] WiredTiger message [1769893669:334578][11374:0x7678efdff640], txn-recover: Recovering log 13393 through 13394
2026-01-31T16:07:49.382-0500 I STORAGE [signalProcessingThread] WiredTiger message [1769893669:382088][11374:0x7678efdff640], txn-recover: Recovering log 13394 through 13394
2026-01-31T16:07:49.418-0500 I STORAGE [signalProcessingThread] WiredTiger message [1769893669:418886][11374:0x7678efdff640], txn-recover: Set global recovery timestamp: 0
2026-01-31T16:07:49.600-0500 I STORAGE [signalProcessingThread] Starting to check the table logging settings for existing WiredTiger tables
2026-01-31T16:07:51.199-0500 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2026-01-31T16:07:51.199-0500 I - [signalProcessingThread] Dropping the scope cache for shutdown
2026-01-31T16:07:51.200-0500 I CONTROL [signalProcessingThread] now exiting
2026-01-31T16:07:51.200-0500 I CONTROL [signalProcessingThread] shutting down with code:0
Could I keep using port 39000? If I do need to change ports, could you help me with this issue?
Thank you!
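For context, here is a small sketch of how I understand the overlap warning (this is my own check, not from the CryoSPARC docs): on Linux the kernel hands out ephemeral (outgoing) ports from the range in /proc/sys/net/ipv4/ip_local_port_range, and a listening port inside that range can occasionally collide with one the kernel picks. Assuming the typical Ubuntu 22.04 default range of 32768-60999, 39000 falls inside it while 61000 does not:

```python
# Sketch: check whether a candidate CryoSPARC base port lies inside the
# kernel's ephemeral port range. range_str mimics the contents of
# /proc/sys/net/ipv4/ip_local_port_range, e.g. "32768\t60999".
def in_ephemeral_range(port: int, range_str: str) -> bool:
    low, high = map(int, range_str.split())
    return low <= port <= high

# Assumed Ubuntu 22.04 default range; read the real value with:
#   cat /proc/sys/net/ipv4/ip_local_port_range
default_range = "32768\t60999"
print(in_ephemeral_range(39000, default_range))  # True  -> triggers the warning
print(in_ephemeral_range(61000, default_range))  # False -> outside the range
```

So 61000 seemed like a safe choice, which is why the replica set failure after the port change surprised me.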