HCL Digital Solutions


The official documentation, “Migrating data from MongoDB 3 to 5”, describes dumping the MongoDB 3.6 databases and then restoring the data into the newly deployed MongoDB 5.

One issue with this process is that we can’t run the two MongoDB versions in parallel on Kubernetes: the provided Helm charts and container for MongoDB 3.6 no longer run on Kubernetes versions after 1.21, while the Helm chart providing MongoDB 5 can’t be installed on those old Kubernetes versions. So the process to update is:
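The dump-and-restore step at the heart of the migration looks roughly like this; a minimal sketch assuming the old and new MongoDB pods are both named mongo-0 and live in the namespaces connections-old and connections (pod names, namespaces, and paths are illustrative, and a real Component Pack deployment will also need TLS/authentication flags):

```bash
# Dump all databases from the old MongoDB 3.6 pod (pod and namespace names are assumptions)
kubectl exec -n connections-old mongo-0 -- mongodump --archive=/tmp/mongo36.archive
kubectl cp connections-old/mongo-0:/tmp/mongo36.archive ./mongo36.archive

# Copy the archive into the new MongoDB 5 pod and restore it there
kubectl cp ./mongo36.archive connections/mongo-0:/tmp/mongo36.archive
kubectl exec -n connections mongo-0 -- mongorestore --archive=/tmp/mongo36.archive
```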


After updating HCL Connections to 8 CR3 and Tiny Editors to 4.9.2.24, the borders of tables are no longer visible during editing.

Here is the edit form with Tiny Editors 4.8.2.0:


During a troubleshooting session in Component Pack, I checked the Kubernetes events.
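For reference, this is the kind of query I mean; a short sketch, assuming Component Pack runs in the connections namespace:

```bash
# List events in the namespace, oldest first (namespace is an assumption)
kubectl get events -n connections --sort-by=.metadata.creationTimestamp

# Show only warnings
kubectl get events -n connections --field-selector type=Warning
```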


To install the Component Pack for HCL Connections 8, you need to create a MongoDB 5 image. The image sources can be found in the HCL MongoDB repository, and the process for creating the image is documented in “Installing MongoDB 5 for Component Pack 8”. The process involves using Docker, so if you have it installed, you can follow the instructions provided.
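The build itself boils down to a docker build plus a push to the registry your cluster pulls from. A hedged sketch, not the documented procedure verbatim (the clone URL, image tag, and registry are placeholders):

```bash
# Clone the image sources (URL is a placeholder for the HCL MongoDB repository)
git clone https://github.com/HCL-TECH-SOFTWARE/<mongodb-repo>.git
cd <mongodb-repo>

# Build the MongoDB 5 image and push it to your own registry (registry and tag are assumptions)
docker build -t my-registry.example.com/mongodb:5 .
docker push my-registry.example.com/mongodb:5
```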


Elasticsearch in HCL Connections Component Pack is secured with Search Guard and needs certificates to work properly. These certificates are generated by a bootstrap job during the initial container deployment with Helm.

These certificates are valid for 10 years (chain_ca.pem) or 2 years (elasticsearch*.pem) and are stored in the Kubernetes secrets elasticsearch-secret and elasticsearch-7-secret. So when your HCL Connections deployment has been running for 2 years, the certificates stop working.
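You can check how much time is left by pulling a certificate out of the secret and feeding it to openssl; a small sketch, assuming the namespace is connections and the key inside the secret is named chain_ca.pem as above:

```bash
# Extract the CA certificate from the secret and print its expiry date
kubectl get secret elasticsearch-secret -n connections \
  -o jsonpath='{.data.chain_ca\.pem}' | base64 -d \
  | openssl x509 -noout -enddate
```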


Last week, in the article “Backup Elasticsearch Indices in Component Pack”, I played around with the HCL Connections documentation for backing up Elasticsearch.

In the end I found that I couldn’t get the snapshot restored, and that I would have to run a command outside of my Kubernetes cluster to take a snapshot on a daily basis. That’s not what I want.
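For context, triggering a snapshot through the Elasticsearch snapshot API looks roughly like this; since Search Guard secures the cluster, the call needs a client certificate, and the repository name, host, and certificate paths here are all assumptions:

```bash
# Create a dated snapshot in an already-registered repository (repo name, host, and cert paths are assumptions)
curl -k --cert ./admin.pem --key ./admin-key.pem \
  -XPUT "https://localhost:9200/_snapshot/my_backup/snapshot_$(date +%Y%m%d)?wait_for_completion=true"
```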
