Migrate MongoDB 5 to 7 for HCL Connections 8 CR9 update


Author: Christoph Stoettner

For HCL Connections 8 CR9, it is mandatory to update MongoDB to version 7. During my first migrations, I encountered some issues and would like to provide workarounds and additional troubleshooting tips to help others with this process.

Understanding the Default Path Configuration

Check the mongo_migration.sh script, where the root folder is defined: if $NFS_ROOT is undefined, it defaults to /mnt.

NFS_ROOT=${NFS_ROOT:-/mnt}

To use a different path, export the variable before running the migration script:

export NFS_ROOT=/pv-connections
./mongo_migration.sh
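The default assignment relies on standard shell parameter expansion; its behavior can be sketched outside the script:

```shell
# ${NFS_ROOT:-/mnt} expands to the value of NFS_ROOT when the variable
# is set and non-empty, and to the fallback /mnt otherwise.
unset NFS_ROOT
first=${NFS_ROOT:-/mnt}      # NFS_ROOT unset -> fallback is used

NFS_ROOT=/pv-connections
second=${NFS_ROOT:-/mnt}     # NFS_ROOT set -> its value wins

echo "$first $second"        # prints: /mnt /pv-connections
```

Because `:-` only reads the variable, an exported value survives into the script, which is exactly why the `export` above works.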

I’ve added a check for the MongoDB folder to the script. This is the snippet:

# Check if shares are mounted to the used path
if [ ! -d "$NFS_ROOT/mongo7-node-0/data/db" ]; then
  echo "No Mongo DB share in $NFS_ROOT"
  exit 1
fi

Alternatively, you can use the adjusted version of the script that includes this check.
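If you run more than one replica-set member, the same idea extends to every node directory. A minimal sketch, assuming the usual three members named mongo7-node-0 through mongo7-node-2 (the helper function name is mine, not part of the original script):

```shell
# check_mongo7_dirs ROOT: return 0 when every expected mongo7 data
# directory exists below ROOT, 1 (with a message) otherwise.
# Assumes three replica-set members: mongo7-node-0 .. mongo7-node-2.
check_mongo7_dirs() {
  local root=${1:-/mnt} i dir
  for i in 0 1 2; do
    dir="$root/mongo7-node-$i/data/db"
    if [ ! -d "$dir" ]; then
      echo "No Mongo DB share in $dir" >&2
      return 1
    fi
  done
  return 0
}
```

Calling `check_mongo7_dirs "$NFS_ROOT"` before any container work starts lets the migration fail fast instead of stopping halfway through with a partially initialized replica set.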

Dealing with Lock Files

Most commonly, we encounter .lock files in the MongoDB folders after scaling down the Mongo 5 containers. The migration script will fail if any of these files are present:

  • WiredTiger.lock
  • mongod.lock

To delete .lock files from mongo7 shares, run:

find mongo7-node-* -iname \*.lock -exec rm {} \;
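Before deleting anything, it can be worth seeing what will be removed. A small hypothetical helper (not part of the migration script) that lists each lock file as it deletes it, and works from any directory:

```shell
# remove_lock_files ROOT: list and delete all *.lock files below ROOT.
# Only run this after the Mongo 5 containers are fully stopped;
# removing a lock file under a running mongod risks data corruption.
remove_lock_files() {
  find "$1" -iname '*.lock' -print -exec rm -f {} \;
}
```

`remove_lock_files "$NFS_ROOT"` then mirrors the one-liner above without depending on the current working directory.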

Troubleshooting

If you’re experiencing issues with the migration, here are some helpful troubleshooting steps:

  • Run the script with bash -x to see each command as it executes, with all variables expanded:
root@hostname # bash -x mongo_migration.sh
++ date +%Y%m%d_%H%M%S
+ LOG_FILE=migration_20250408_094159.log
+ exec
++ tee -a migration_20250408_094159.log
+ NFS_ROOT=/pv-connections
+ REPL_SET=rs0
+ NETWORK_NAME=mongo5-network
+ HOST_PORT_BASE=27010
+ CONTAINER_PORT=27017
+ FCV_VERSIONS=(["6.0"]="bitnami/mongodb:6.0" ["7.0"]="bitnami/mongodb:7.0")
+ declare -A FCV_VERSIONS
+ declare -A container_map
+ docker network ls
+ grep -q mongo5-network
+ log_info 'Starting temporary MongoDB container to retrieve replica set info (MongoDB 5.0)'
+ echo '[INFO] Starting temporary MongoDB container to retrieve replica set info (MongoDB 5.0)'
[INFO] Starting temporary MongoDB container to retrieve replica set info (MongoDB 5.0)
+ docker run -dt --name mongo-get-host-name -p 27020:27017 -v /pv-connections/mongo7-node-0/data/db:/bitnami/mongodb/data/db:Z -e MONGODB_EXTRA_FLAGS=--replSet=rs0 bitnami/mongodb:5.0
e6f250b84d9906e1ab0193f3d417a093bc2a8cb78edbd27c21a8427f703f2527
+ wait_for_mongo mongo-get-host-name
+ log_info 'Waiting for MongoDB in mongo-get-host-name to be ready...'
+ echo '[INFO] Waiting for MongoDB in mongo-get-host-name to be ready...'
[INFO] Waiting for MongoDB in mongo-get-host-name to be ready...
+ docker exec mongo-get-host-name mongosh --host 127.0.0.1 --eval 'db.runCommand({ ping: 1 })'
  • Check the log files of the mongo-get-host-name container if the script shows no progress. In this example, a leftover WiredTiger.lock prevents the Bitnami container from opening the database files:
root@hostname # docker logs mongo-get-host-name
mongodb 07:40:30.52 INFO  ==>
mongodb 07:40:30.52 INFO  ==> Welcome to the Bitnami mongodb container
mongodb 07:40:30.52 INFO  ==> Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 07:40:30.52 INFO  ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 07:40:30.52 INFO  ==>
mongodb 07:40:30.53 INFO  ==> ** Starting MongoDB setup **
mongodb 07:40:30.54 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 07:40:30.59 INFO  ==> Initializing MongoDB...
mongodb 07:40:30.64 INFO  ==> Deploying MongoDB with persisted data...
mongodb 07:40:30.66 INFO  ==> ** MongoDB setup finished! **

mongodb 07:40:30.67 INFO  ==> ** Starting MongoDB **

{"t":{"$date":"2025-04-08T07:40:30.706+00:00"},"s":"I",  "c":"CONTROL",  "id":20698,   "ctx":"-","msg":"***** SERVER RESTARTED *****"}
{"t":{"$date":"2025-04-08T07:40:30.706+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"-","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2025-04-08T07:40:30.708+00:00"},"s":"I",  "c":"NETWORK",  "id":4915701, "ctx":"-","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true}}}
{"t":{"$date":"2025-04-08T07:40:30.708+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2025-04-08T07:40:30.708+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2025-04-08T07:40:30.709+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2025-04-08T07:40:30.709+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2025-04-08T07:40:30.709+00:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","ns":"config.tenantMigrationDonors"}}
{"t":{"$date":"2025-04-08T07:40:30.709+00:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","ns":"config.tenantMigrationRecipients"}}
{"t":{"$date":"2025-04-08T07:40:30.710+00:00"},"s":"I",  "c":"CONTROL",  "id":5945603, "ctx":"main","msg":"Multi threading initialized"}
{"t":{"$date":"2025-04-08T07:40:30.710+00:00"},"s":"I",  "c":"CONTROL",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/bitnami/mongodb/data/db","architecture":"64-bit","host":"f6ad5f9552c2"}}
{"t":{"$date":"2025-04-08T07:40:30.710+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"5.0.24","gitVersion":"f034f0c51b3dffef4b8c9452d77ede9888f28f66","openSSLVersion":"OpenSSL 1.1.1w  11 Sep 2023","modules":[],"allocator":"tcmalloc","environment":{"distmod":"debian11","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2025-04-08T07:40:30.710+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"PRETTY_NAME=\"Debian GNU/Linux 11 (bullseye)\"","version":"Kernel 4.18.0-553.45.1.el8_10.x86_64"}}}
{"t":{"$date":"2025-04-08T07:40:30.710+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"config":"/opt/bitnami/mongodb/conf/mongodb.conf","net":{"bindIp":"*","ipv6":false,"port":27017,"unixDomainSocket":{"enabled":true,"pathPrefix":"/opt/bitnami/mongodb/tmp"}},"processManagement":{"fork":false,"pidFilePath":"/opt/bitnami/mongodb/tmp/mongodb.pid"},"replication":{"replSet":"rs0"},"security":{"authorization":"disabled"},"setParameter":{"enableLocalhostAuthBypass":"true"},"storage":{"dbPath":"/bitnami/mongodb/data/db","directoryPerDB":false,"journal":{"enabled":true}},"systemLog":{"destination":"file","logAppend":true,"logRotate":"reopen","path":"/opt/bitnami/mongodb/logs/mongodb.log","quiet":false,"verbosity":0}}}}
{"t":{"$date":"2025-04-08T07:40:30.713+00:00"},"s":"I",  "c":"STORAGE",  "id":22270,   "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"/bitnami/mongodb/data/db","storageEngine":"wiredTiger"}}
{"t":{"$date":"2025-04-08T07:40:30.713+00:00"},"s":"I",  "c":"STORAGE",  "id":22315,   "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=23480M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
{"t":{"$date":"2025-04-08T07:40:31.216+00:00"},"s":"E",  "c":"STORAGE",  "id":22435,   "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":11,"message":"[1744098031:216706][1:0x7f9b2d09fc80], wiredtiger_open: __posix_file_lock, 393: /bitnami/mongodb/data/db/WiredTiger.lock: handle-lock: fcntl: Resource temporarily unavailable"}}
{"t":{"$date":"2025-04-08T07:40:31.216+00:00"},"s":"E",  "c":"STORAGE",  "id":22435,   "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":16,"message":"[1744098031:216813][1:0x7f9b2d09fc80], wiredtiger_open: __conn_single, 1807: WiredTiger database is already being managed by another process: Device or resource busy"}}
{"t":{"$date":"2025-04-08T07:40:31.217+00:00"},"s":"E",  "c":"STORAGE",  "id":22435,   "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":11,"message":"[1744098031:217338][1:0x7f9b2d09fc80], wiredtiger_open: __posix_file_lock, 393: /bitnami/mongodb/data/db/WiredTiger.lock: handle-lock: fcntl: Resource temporarily unavailable"}}
{"t":{"$date":"2025-04-08T07:40:31.217+00:00"},"s":"E",  "c":"STORAGE",  "id":22435,   "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":16,"message":"[1744098031:217370][1:0x7f9b2d09fc80], wiredtiger_open: __conn_single, 1807: WiredTiger database is already being managed by another process: Device or resource busy"}}
{"t":{"$date":"2025-04-08T07:40:31.217+00:00"},"s":"E",  "c":"STORAGE",  "id":22435,   "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":11,"message":"[1744098031:217845][1:0x7f9b2d09fc80], wiredtiger_open: __posix_file_lock, 393: /bitnami/mongodb/data/db/WiredTiger.lock: handle-lock: fcntl: Resource temporarily unavailable"}}
{"t":{"$date":"2025-04-08T07:40:31.217+00:00"},"s":"E",  "c":"STORAGE",  "id":22435,   "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":16,"message":"[1744098031:217898][1:0x7f9b2d09fc80], wiredtiger_open: __conn_single, 1807: WiredTiger database is already being managed by another process: Device or resource busy"}}
{"t":{"$date":"2025-04-08T07:40:31.218+00:00"},"s":"W",  "c":"STORAGE",  "id":22347,   "ctx":"initandlisten","msg":"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade."}
{"t":{"$date":"2025-04-08T07:40:31.218+00:00"},"s":"F",  "c":"STORAGE",  "id":28595,   "ctx":"initandlisten","msg":"Terminating.","attr":{"reason":"16: Device or resource busy"}}
{"t":{"$date":"2025-04-08T07:40:31.218+00:00"},"s":"F",  "c":"-",        "id":23091,   "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":28595,"file":"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp","line":689}}
{"t":{"$date":"2025-04-08T07:40:31.218+00:00"},"s":"F",  "c":"-",        "id":23092,   "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"}

Important: Clean Up After Failed Attempts

If the script does not terminate properly, don’t forget to clean up temporary containers to prevent resource conflicts in subsequent attempts:

docker rm mongo-get-host-name

Planning Your Migration

Before starting the migration process, it’s important to ensure that you have:

  1. Made a backup of your MongoDB data
  2. Properly scaled down your Mongo 5 containers
  3. Verified that your storage paths are correctly configured
  4. Allocated sufficient time for the migration process, as it can take longer for large databases

The migration to MongoDB 7 is critical for maintaining compatibility with HCL Connections 8 CR9, and while it may present some challenges, following these steps and troubleshooting tips should help ensure a smooth transition.

