Better logstash filter to analyze SystemOut.log and some more

Author: Christoph Stoettner
Read in about 3 min · 549 words

Photo by Aaron Burden | Unsplash

Last week I wrote a post about Using Docker and ELK to Analyze WebSphere Application Server SystemOut.log, but I wasn't happy with my date filter and how the WebSphere response code is analyzed. The main problem was that the WAS response code is not always at the beginning of a log message and does not always end with ":".

I replaced the old filter (formerly 4 lines with match) with the following code:

grok {
        # was_shortname needs to be a regex, because numbers and $ can appear in the word
        match => ["message", "\[%{DATA:wastimestamp} %{WORD:tz}\] %{BASE16NUM:was_threadID} (?<was_shortname>\b[A-Za-z0-9\$]{2,}\b) %{SPACE}%{WORD:was_loglevel}%{SPACE} %{GREEDYDATA:message}"]
        overwrite => [ "message" ]
        #tag_on_failure => [ ]
}
grok {
        # Extract the WebSphere response code
        match => ["message", "(?<was_responsecode>[A-Z0-9]{9,10})[:,\s\s]"]
        tag_on_failure => [ ]
}

As you can see, I replaced the different patterns with a single regular expression to find the response code. tag_on_failure => [] prevents generating an error when no response code was logged.
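As a quick sanity check, the same response-code pattern can be tried outside of logstash. This is just a sketch in Python; the sample line is the log message quoted later in this post:

```python
import re

# Same pattern as in the grok filter: 9-10 uppercase letters/digits,
# followed by ":", "," or whitespace
pattern = re.compile(r"(?P<was_responsecode>[A-Z0-9]{9,10})[:,\s]")

line = "buildService CLFRW0034E: Error reading or writing to the index directory."
match = pattern.search(line)
print(match.group("was_responsecode"))  # CLFRW0034E
```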

Now I'm able to use was_responsecode to generate a graph of the different response codes over a timeline, so I can see when errors appear more often.


I created a new search for was_loglevel:E AND was_responsecode:* (show me all log messages with a response code and loglevel E) and created a line chart based on this search:

[Screenshot: line chart of response codes over time]

You see a strong peak for one of the response codes:

[Screenshot: strong peak for one response code]

Now we know that CLFRW0034E is the response code of this peak. Let's check which log message comes with this code:

[Screenshot: log messages with CLFRW0034E]
buildService CLFRW0034E: Error reading or writing to the index directory.
Please check permissions and capacity.

Ok, that's quite interesting: a full disk or a problem with NAS, NFS or something similar. I know this issue has already been solved, because there are no more errors after this peak, but it would be great if Kibana could notify me when some error counts increase (and that's possible).

Adding timezone

To get the time in my local timezone or UTC, even when the log was generated in another timezone, I added the following lines:

# add timezone information
    # add timezone information
    translate {
        field       => 'tz'
        destination => 'tz_num'
        dictionary  => [
            'CET',   '+0100',
            'CEST',  '+0200',
            'EDT',   '-0400'
        ]
    }
    mutate {
        replace => ['timestamp', '%{wastimestamp} %{tz_num}']
    }
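To illustrate what the translate and mutate filters do, here is a minimal Python sketch: look up the numeric offset for the tz field and append it to the parsed timestamp. The timestamp value is made up for illustration; the real format depends on your WAS locale settings:

```python
# Map the log's timezone abbreviation to a numeric offset,
# like the translate filter's dictionary
tz_dictionary = {
    "CET":  "+0100",
    "CEST": "+0200",
    "EDT":  "-0400",
}

def add_offset(wastimestamp: str, tz: str) -> str:
    # Equivalent of: replace => ['timestamp', '%{wastimestamp} %{tz_num}']
    return f"{wastimestamp} {tz_dictionary[tz]}"

print(add_offset("5/29/16 22:54:38:123", "CEST"))  # 5/29/16 22:54:38:123 +0200
```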

I needed to install the translate plugin for logstash, and you have to extend the list of timezones manually:

/opt/logstash/bin/logstash-plugin install logstash-filter-translate

Add plugins to Docker container

Since this weekend I like Docker more and more. It's really easy to test different filters (I'm working on an IBM Domino console.log filter and additional filebeat stuff) and to restart with a clean environment again.

The official images for ELK do not have all the plugins I wanted to use installed, so I needed to create my own Docker containers for Elasticsearch and Logstash; only small changes were needed in docker-compose.

Dockerfile Elasticsearch

FROM elasticsearch:latest
RUN bin/plugin install lmenezes/elasticsearch-kopf

Creating the Elasticsearch container:

docker build -t "stoeps:elasticsearch" .

Dockerfile Logstash

FROM logstash:latest

# Install logstash-input-beats
RUN /opt/logstash/bin/logstash-plugin install logstash-input-beats && /opt/logstash/bin/logstash-plugin install logstash-filter-translate

Creating the Logstash container:

docker build -t "stoeps:logstash" .


# excerpt from docker-compose.yml (service names assumed)
elasticsearch:
  image: stoeps:elasticsearch
  ...
logstash:
  image: stoeps:logstash
  ports:
    - "5000:5000"
    - "5044:5044"

As you can see, I changed the image names to the ones I created before and added an extra port to enable filebeat.

