HorizontalPodAutoscalers (HPA) with HCL Connections Component Pack

Author: Christoph Stoettner
Read in about 3 min · 499 words

Photo: containership Monrovia, by Chris Linnet on Unsplash

During a troubleshooting session in Component Pack, I checked the Kubernetes events.

kubectl get events -n connections

18m         Warning   FailedGetScale   horizontalpodautoscaler/middleware-jsonapi            no matches for kind "Deployment" in group "extensions"
18m         Warning   FailedGetScale   horizontalpodautoscaler/mwautoscaler                  no matches for kind "Deployment" in group "extensions"
18m         Warning   FailedGetScale   horizontalpodautoscaler/te-creation-wizard            no matches for kind "Deployment" in group "extensions"
18m         Warning   FailedGetScale   horizontalpodautoscaler/teams-share-service           no matches for kind "Deployment" in group "extensions"
18m         Warning   FailedGetScale   horizontalpodautoscaler/teams-share-ui                no matches for kind "Deployment" in group "extensions"
18m         Warning   FailedGetScale   horizontalpodautoscaler/teams-tab-api                 no matches for kind "Deployment" in group "extensions"
18m         Warning   FailedGetScale   horizontalpodautoscaler/teams-tab-ui                  no matches for kind "Deployment" in group "extensions"

The same warnings also show up in the k9s event view.


So, there are several thousand messages from failed autoscalers. The Component Pack documentation does not mention HPA anywhere, so I checked the Kubernetes documentation: HorizontalPodAutoscaler Walkthrough
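To get an overview of how many of these warnings each autoscaler produces, the events can be grouped. This is a small sketch using standard kubectl field selectors, not something from the Component Pack documentation:

```shell
# Count FailedGetScale warnings per HPA object in the connections namespace.
kubectl get events -n connections \
  --field-selector reason=FailedGetScale \
  -o custom-columns=OBJECT:.involvedObject.name --no-headers \
  | sort | uniq -c | sort -rn
```

The output lists each HPA name with the number of warnings it has generated, which makes it easy to see whether all autoscalers are affected or only some.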

One prerequisite for using an HPA (HorizontalPodAutoscaler) is the Metrics Server, which must be installed on the Kubernetes cluster.

Install Metrics Server


Install with kubectl

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability-1.21+.yaml

Install with helm

helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/

helm upgrade --install metrics-server metrics-server/metrics-server
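After either installation method, it is worth verifying that the metrics API actually works before looking at the HPAs again. These are standard kubectl commands; the `kube-system` namespace is where the upstream manifest deploys Metrics Server. Note that on clusters with self-signed kubelet certificates, Metrics Server may additionally need its `--kubelet-insecure-tls` argument:

```shell
# Wait until the metrics-server deployment reports ready.
kubectl rollout status deployment/metrics-server -n kube-system

# If this prints CPU/memory figures per node, the metrics API is working.
kubectl top nodes
```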

Fix apiVersion

Even after the Metrics Server is installed, the events still show errors. So let's look at one of the autoscalers:

kubectl describe hpa teams-tab-ui -n connections

  Type     Reason          Age                      From                       Message
  ----     ------          ----                     ----                       -------
  Warning  FailedGetScale  27m (x22287 over 3d21h)  horizontal-pod-autoscaler  no matches for kind "Deployment" in group "extensions"

Searching for the error message, I found: Horizontal Pod Autoscaling failing after upgrading to Google Kubernetes Engine 1.16 with error: no matches for kind "Deployment" in group "extensions"

Since Kubernetes 1.16, the scaleTargetRef in the HPA configuration needs to be changed from:

    apiVersion: extensions/v1beta1
    kind: Deployment
    name: admin-portal

to:

    apiVersion: apps/v1
    kind: Deployment
    name: admin-portal
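Instead of editing every HPA by hand with kubectl edit, the same change can be applied to all of them with a loop. This is a sketch that assumes every HPA in the namespace targets a Deployment, which is the case for Component Pack:

```shell
# Point every HPA in the connections namespace at the apps/v1 API
# instead of the removed extensions group.
for hpa in $(kubectl get hpa -n connections -o name); do
  kubectl patch "$hpa" -n connections --type merge \
    -p '{"spec":{"scaleTargetRef":{"apiVersion":"apps/v1"}}}'
done
```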

Fix customizer HPA

Now most of the HPAs start working, except for mwautoscaler. Here, the deployment name in scaleTargetRef is wrong and needs to be changed from mwautoscaler to mw-proxy. To match the minimum pod count, which is set to 1 in all other HPAs, I also changed the default of 3 to 1 here.

  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    annotations:
      meta.helm.sh/release-name: mw-proxy
      meta.helm.sh/release-namespace: connections
    creationTimestamp: "2023-02-08T15:51:28Z"
    labels:
      app.kubernetes.io/managed-by: Helm
      chart: mw-proxy-0.1.0-20230329-171529
      environment: ""
      heritage: Helm
      name: fsautoscaler
      release: mw-proxy
      type: autoscaler
    name: mwautoscaler
    namespace: connections
    resourceVersion: "2105787"
    uid: 1bf749b4-f4cd-4760-a2e0-357ff0e6772a
  spec:
    maxReplicas: 3
    metrics:
    - resource:
        name: cpu
        target:
          averageUtilization: 80
          type: Utilization
      type: Resource
    minReplicas: 1
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: mw-proxy
  status:
    conditions:
    - lastTransitionTime: "2023-05-30T10:41:57Z"
      message: recommended size matches current size
      reason: ReadyForNewScale
      status: "True"
      type: AbleToScale
    - lastTransitionTime: "2023-05-30T10:41:57Z"
      message: the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
      reason: ValidMetricFound
      status: "True"
      type: ScalingActive
    - lastTransitionTime: "2023-05-30T10:41:57Z"
      message: the desired count is within the acceptable range
      reason: DesiredWithinRange
      status: "False"
      type: ScalingLimited
    currentMetrics:
    - resource:
        current:
          averageUtilization: 10
          averageValue: 5m
        name: cpu
      type: Resource
    currentReplicas: 1
    desiredReplicas: 1
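Both changes for this autoscaler can also be applied with a single kubectl patch instead of editing the object by hand; this sketch is equivalent to what kubectl edit would do:

```shell
# Fix the scale target name and the minimum replica count of the
# broken mwautoscaler HPA in one merge patch.
kubectl patch hpa mwautoscaler -n connections --type merge \
  -p '{"spec":{"minReplicas":1,"scaleTargetRef":{"apiVersion":"apps/v1","kind":"Deployment","name":"mw-proxy"}}}'
```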

Working HPA

With these changes, the HPAs start working.


Interestingly, the newly introduced pod middleware-jsonapi has an HPA configuration, but it uses the same old apiVersion as the other ones.
