Orient Me, Elasticsearch and Disk space

Last Update:

Author: Christoph Stoettner
Read in about 5 min · 1049 words

[Cover image: fountain pen and a notebook. Photo by Aaron Burden | Unsplash]

With IBM Connections 6 you can deploy the additional component Orient Me, which provides the first microservices of the new IBM Connections pink. Orient Me is installed on top of IBM Spectrum Conductor for Containers (CFC), a new product that helps with clustering and orchestrating the Docker containers.

Klaus Bild showed in a blog post some weeks ago how to add a Kibana container that uses the deployed Elasticsearch for visualizing the environment.

I found two issues with the deployed Elasticsearch container, but let me explain from the beginning.

On Monday I checked my demo server and the disk was full. After some digging I found that Elasticsearch was using around 50 GB of disk space for its indices. On my server the data path for Elasticsearch is /var/lib/elasticsearch/data. With du -hs /var/lib/* you can check the used space.

You will see something like this, and I would recommend creating a separate mount point for /var/lib (or two: /var/lib/docker and /var/lib/elasticsearch) for your CFC/Orient Me server:

du -hs /var/lib/*
15G /var/lib/docker
0   /var/lib/docker.20170425072316
6,8G    /var/lib/elasticsearch
451M    /var/lib/etcd

So I searched for a way to show and delete Elasticsearch indices.

On your CFC host run:

curl localhost:9200/_aliases


[root@cfc ~]# curl http://localhost:9200/_aliases?pretty=1
{
  "logstash-2017.06.01" : {
    "aliases" : { }
  },
  "logstash-2017.05.30" : {
    "aliases" : { }
  },
  "logstash-2017.05.31" : {
    "aliases" : { }
  },
  ".kibana" : {
    "aliases" : { }
  },
  "heapster-2017.06.01" : {
    "aliases" : {
      "heapster-cpu-2017.06.01" : { },
      "heapster-filesystem-2017.06.01" : { },
      "heapster-general-2017.06.01" : { },
      "heapster-memory-2017.06.01" : { },
      "heapster-network-2017.06.01" : { }
    }
  }
}
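Besides _aliases, the _cat API is a convenient way to see which indices are actually eating the disk space (a sketch assuming, as above, that Elasticsearch answers on localhost:9200):

```shell
# list all indices with their document count and store size
# (the ?v parameter adds a header line to the output)
curl -s 'http://localhost:9200/_cat/indices?v'
```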

On my first try, the list was “a little bit” longer. Since it is a test server, I just deleted the indices with:

curl -XDELETE 'http://localhost:9200/logstash-*'
curl -XDELETE 'http://localhost:9200/heapster-*'

For this post, I checked these commands from my local machine, and curl -XDELETE …​ with IP or hostname works too! Elasticsearch provides no real security for index handling, so a best practice is to put an Nginx server in front and only allow GET and POST on the URL. In a production environment you should therefore think about securing port 9200 (Nginx, iptables), or anybody could delete the indices. It is only logs and performance data, but I don’t want to allow this.
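As a stopgap until a proper reverse proxy is in place, a pair of iptables rules can restrict port 9200 to local access. This is only a sketch; adapt the allowed source address to your environment before using it:

```shell
# allow Elasticsearch access from the host itself only ...
iptables -A INPUT -p tcp --dport 9200 -s 127.0.0.1 -j ACCEPT
# ... and drop everything else that targets port 9200
iptables -A INPUT -p tcp --dport 9200 -j DROP
```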

Now the server was running again, and I dug a little bit deeper. I found that there is an indices-cleaner container running on the server:

[root@cfc ~]# docker ps | grep clean
6c1a52fe0e0e ibmcom/indices-cleaner:0.1 "cron && tail -f /..." 51 minutes ago Up 51 minutes k8s_indices-cleaner.a3303a57_k8s-elasticsearch-

So I checked this container:

docker logs 6c1a52fe0e0e

shows nothing. Normally it should show us the curator log; the container command is not chosen in the best way.

cron && tail -f /var/log/curator-cron.log

is supposed to show the log file of curator (a tool to delete Elasticsearch indices), but with && the tail only starts when cron has exited with status true. That’s the reason docker logs shows nothing.
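The difference can be seen with a plain shell one-liner, no container needed:

```shell
# '&&' runs the second command only if the first one exits with status 0
true && echo "this line is printed"
false && echo "this line is never printed"
# with ';' instead, the second command runs regardless of the exit status
false ; echo "this line is printed anyway"
```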

I started a bash in the container with docker exec -it 6c1a52fe0e0e bash and checked the settings there.

cat /etc/cron.d/curator-cron
59 23 * * * root /bin/bash /clean-indices.sh
# An empty line is required at the end of this file for a valid cron file.

There is a cron job which runs each day at 23:59. The script it starts runs:

/usr/local/bin/curator --config /etc/curator.yml /action.yml

Within /action.yml the config shows that logstash-* indices should be deleted after 5 days and heapster-* indices after 1 day.
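The file format is that of Elasticsearch Curator. The shipped configuration presumably looks roughly like this sketch (the filter values are taken from the behaviour described above; the rest are common Curator defaults, not the verbatim IBM file):

```yaml
actions:
  1:
    action: delete_indices
    description: Delete logstash- prefixed indices older than 5 days.
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 5
  2:
    action: delete_indices
    description: Delete heapster prefixed indices older than 1 day.
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: heapster-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 1
```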

I checked /var/log/curator-cron.log, but it was empty! So the cron job had never run. To test whether the script works as expected, I just started /clean-indices.sh, and the log file shows:

cat /var/log/curator-cron.log
2017-05-31 08:17:01,654 INFO      Preparing Action ID: 1, "delete_indices"
2017-05-31 08:17:01,663 INFO      Trying Action ID: 1, "delete_indices": Delete logstash- prefixed indices. Ignore the error if the filter does not result in an actionable list of indices (ignore_empty_list) and exit cleanly.
2017-05-31 08:17:01,797 INFO      Deleting selected indices: [u'logstash-2017.05.08', u'logstash-2017.05.09', u'logstash-2017.05.03', u'logstash-2017.04.28', u'logstash-2017.04.27', u'logstash-2017.04.26', u'logstash-2017.05.18', u'logstash-2017.05.15', u'logstash-2017.05.12', u'logstash-2017.05.11']
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.08
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.09
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.03
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.04.28
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.04.27
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.04.26
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.18
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.15
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.12
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.11
2017-05-31 08:17:02,130 INFO      Action ID: 1, "delete_indices" completed.
2017-05-31 08:17:02,130 INFO      Preparing Action ID: 2, "delete_indices"
2017-05-31 08:17:02,133 INFO      Trying Action ID: 2, "delete_indices": Delete heapster prefixed indices. Ignore the error if the filter does not result in an actionable list of indices (ignore_empty_list) and exit cleanly.
2017-05-31 08:17:02,161 INFO      Deleting selected indices: [u'heapster-2017.04.26', u'heapster-2017.04.27', u'heapster-2017.04.28', u'heapster-2017.05.03', u'heapster-2017.05.15', u'heapster-2017.05.12', u'heapster-2017.05.11', u'heapster-2017.05.09', u'heapster-2017.05.08']
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.04.26
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.04.27
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.04.28
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.03
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.15
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.12
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.11
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.09
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.08
2017-05-31 08:17:02,366 INFO      Action ID: 2, "delete_indices" completed.
2017-05-31 08:17:02,367 INFO      Job completed.

I checked the log file daily after this research: since running the task manually, the cron job works as expected and curator does its job. No full disk since last week.

CFC uses Kubernetes, so stopping the indices-cleaner container immediately creates a new one! All changes disappear then and the cron job stops working. I didn’t want to wait until IBM provides a container update, so I searched for a way to run curator on a regular basis even when a new container appears.

I created a script:

id=`docker ps | grep indices-cleaner | awk '{print $1}'`
docker exec -t $id /clean-indices.sh
docker exec -t $id tail /var/log/curator-cron.log

and added it to my crontab on the CFC server.

crontab -e
59 23 * * * script >> /var/log/curator.log

When you use Kibana to analyse the logs, you may want to have more indices available. docker inspect containerid shows us:

"Mounts": [
    {
        "Type": "bind",
        "Source": "/etc/cfc/conf/curator-action.yml",
        "Destination": "/action.yml",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }

So you can edit /etc/cfc/conf/curator-action.yml on the CFC host instead of the file inside the container, and your changes will be persistent.
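To check the mount without scrolling through the full docker inspect output, a Go template helps; the container lookup is the same as in the script above (a sketch that assumes a running Docker daemon with the indices-cleaner container):

```shell
# print only the mounts of the indices-cleaner container as JSON
id=$(docker ps | grep indices-cleaner | awk '{print $1}')
docker inspect -f '{{ json .Mounts }}' "$id"
```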

