Troubleshooting Top Updates in HCL Connections


Last week I had three systems with issues displaying Top Updates in Orient Me, so I tried to find out which applications and containers are involved in generating the content for this view.

First, we need to know that Top Updates are part of the Component Pack, and the content of Latest Updates is the Activity stream data, which is read from the homepage database.

Screenshot of the Connections 8 Homepage

If the Top Updates tab is not visible after deploying the Component Pack, check LotusConnections-config.xml; the serviceReference for orientme needs to be enabled. Only one serviceReference is allowed per application in this file, so check for duplicate definitions if the tab is still missing.

<!--Uncomment the following serviceReference definition if OrientMe feature is enabled-->
<!-- BEGIN Enabling OrientMe -->
<sloc:serviceReference bootstrapHost="cnx8-db2-was.stoeps.home" bootstrapPort="admin_replace" clusterName="" enabled="true" serviceName="orient" ssl_enabled="true">
  <sloc:href>
    <sloc:hrefPathPrefix>/social</sloc:hrefPathPrefix>
    <sloc:static href="http://cnx8-db2-was.stoeps.home" ssl_href="https://cnx8-db2-was.stoeps.home"/>
    <sloc:interService href="https://cnx8-db2-was.stoeps.home"/>
  </sloc:href>
</sloc:serviceReference>
<!-- END Enabling OrientMe -->
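
A quick way to find duplicate definitions is to count the orient serviceReference entries directly in the file. The path below assumes a default Deployment Manager installation; adjust the profile and cell names to your environment:

# count serviceReference entries for the orient service; more than 1 means duplicates
grep -c 'serviceName="orient"' /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/config/cells/<cell-name>/LotusConnections-config/LotusConnections-config.xml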

So where is the data for Top Updates stored?

The data for Top Updates is read from the Opensearch/Elasticsearch index orient-me-collection. The data is processed first in WebSphere (/search/eventTracker) and sent through the SIB message store.

Then the message store publication point connections.events exports to Redis (redis-server:30379 via the haproxy-redis service) on Kubernetes. The indexing-service reads the Redis data and writes to the Opensearch orient-me-collection index.
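
To verify this part of the chain, the indexing-service is a good place to look. A minimal check, assuming the Component Pack runs in the connections namespace and the deployment is named indexing-service (both defaults from connections-automation):

# check that the indexing pods are running
kubectl -n connections get pods | grep indexing-service
# tail the logs to see whether events are being consumed
kubectl -n connections logs deploy/indexing-service --tail=50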

I’m not 100% sure, but I expect that the retrievalservice and middleware-graphql pods are involved in reading the data for Top Updates. The GraphQL query is processed through orient-web-client.
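
Even without knowing the exact read path, you can at least confirm these pods are up; again assuming the default connections namespace:

kubectl -n connections get pods | grep -E 'retrieval|middleware-graphql|orient-web-client'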

Dependencies and troubleshooting steps

Flow of Homepage Latest Updates and Top Updates

So the first step is to check whether the events are written to the Opensearch index. Open a shell in the opensearch-cluster-client-0 pod (a kubectl exec one-liner follows after the next block) and switch to the probe folder. Make a copy of sendRequest.sh and change the last print statement:

cd probe
cp sendRequest.sh send.sh
vi send.sh
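
If you still need a shell inside the pod for the steps above, kubectl exec does the job; the namespace is an assumption from my environment:

kubectl -n connections exec -it opensearch-cluster-client-0 -- bash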

Change line 27 to add quotes around the variable; this prints the response with newlines instead of one long string:

echo "${response_text}"

Now let’s check the index:

./send.sh GET /_cat/indices

[opensearch@opensearch-cluster-client-0 probe]$ ./send.sh GET /_cat/indices
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   810  100   810    0     0   6793      0 --:--:-- --:--:-- --:--:--  6864
green open icmetricsconfig              daEZC8iTRESjm8VbM7hH_g 5 1   1  0     9kb   4.5kb
green open .opensearch-observability    B58L49_sR7-h8eyjGXhxzA 1 2   0  0    624b    208b
green open icmetrics_a_2023_2h          Ngqg9EK4SF6Sk2VJOojtzw 5 1 960  0   1.3mb 697.5kb
green open quickresults                 yewsBz9xQD24ljamtPdDaw 8 2 145 25   2.2mb 794.9kb
green open icmetricscommlist            BbybBZFFS9m18RS7l9bNsQ 5 1   1  0    10kb     5kb
green open orient-me-collection_tmp     ZsxezJzCSV2aeb8I4rxYfg 3 2   0  0   1.8kb    624b
green open .opendistro_security         FRHNF_XNTeSwcvpvK7-Skg 1 2  10  0 219.2kb    73kb
green open orient-me-collection         a2bPe_zCSkOQYhSqxcbDUQ 1 1 164  5 273.5kb 155.4kb

In the last line, we see the orient-me-collection index; its document count (164, the seventh column) needs to increase when new events appear. When you create a community, for example, this number should increase after a few seconds.
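
If you only care about this one index, the standard _count endpoint works through the same wrapper script; run it before and after creating a test community and compare the count:

./send.sh GET /orient-me-collection/_count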

To check if the data is sent to the Component Pack, you can check the Redis queue. You can open a shell on redis-server-0 and use redis-cli. Alternatively, you can use telnet or netcat on any host to open a session on port 30379 of your HAProxy (connections-automation installs HAProxy on the Nginx host) or a Kubernetes worker.

nc <haproxy or k8s-worker> 30379

To connect, you need your Redis password:

kubectl get secret redis-secret --template='{{.data.secret}}' | base64 -d

Example:

nc cnx8-db2.stoeps.home 30379
auth your-redis-password
+OK
subscribe connections.events
*3
$9
subscribe
$18
connections.events
:1

Now create a community and check if messages appear in this nc session.
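
The same subscription also works with redis-cli directly in the pod; a sketch, assuming the connections namespace and the password read from the secret above:

kubectl -n connections exec -it redis-server-0 -- redis-cli -a 'your-redis-password' subscribe connections.events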

The article Verify Redis server traffic adds some more details on this topic.

If nothing happens in Redis, check the Gatekeeper settings at https://cnx-url/connections/config/highway.main.settings.tiles

Gatekeeper Settings
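
You can also fetch the settings without a browser; a rough sketch with curl, assuming an administrative user and that this endpoint accepts basic authentication in your setup:

# filter the Gatekeeper output for the Redis-related settings
curl -s -u admin-user 'https://cnx-url/connections/config/highway.main.settings.tiles' | grep -i redis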

The value for c2.export.redis.pass is the prefix ~@64 followed by the base64-encoded password.
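
So you can build the expected value from the plain-text password yourself; a sketch, using printf to avoid a trailing newline ending up in the encoded value:

printf '%s' 'your-redis-password' | base64 | sed 's/^/~@64/'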

There is a configure script in the Connections Automation repository (Redis Config), which does not update the Gatekeeper directly, but writes update JSON files to the folder <shared directory>/configuration/update.
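
After running the script, the generated files show up in that folder; the shared directory path below is an assumption from my setup, adjust it to your deployment:

ls -l /opt/share/connections/configuration/update/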

To make a long story short: in all three systems, the connection to Redis was somehow broken. I assume that the Gatekeeper settings were overwritten during data migration.

After fixing the hostname, port, and password, I restarted the environment, and Top Updates started showing updates.
