- Show Community Create Button only to users with role "community-creator"
- Makefile to process all Asciidoctor files in a directory
- Update on the Touchpoint workaround
- Linkdump Week 44 / 2020
- Linkdump Week 42 / 2020
- Linkdump Week 41 / 2020
- Elasticsearch index creation problem
- Linkdump Week 40 / 2020
- Selfhost Shaarli
- Touchpoint in HCL Connections 6.5CR1
Last week I wrote a post about Using Docker and ELK to Analyze WebSphere Application Server SystemOut.log, but I wasn't happy with my date filter and with how the WebSphere response code is analyzed. The main problem was that the WAS response code is not always at the beginning of a log message and does not always end with ":". I replaced the old filter (formerly four lines of match patterns) with the following code:

```
grok {
  # was_shortname needs to be a regex, because numbers and $ can occur in the word
  match => ["message", "\[%{DATA:wastimestamp} %{WORD:tz}\] %{BASE16NUM:was_threadID} (?<was_shortname>\b[A-Za-z0-9\$]{2,}\b) %{SPACE}%{WORD:was_loglevel}%{SPACE} %{GREEDYDATA:message}"]
  overwrite => [ "message" ]
  #tag_on_failure => [ ]
}
grok {
  # Extract the WebSphere response code.
  # The excerpt is cut off at this point, so the pattern below is a
  # reconstruction based on the typical WAS message-code format
  # (e.g. CLFRW0042E: four or five letters, four digits, one severity letter),
  # not the exact original.
  match => ["message", "(?<was_responsecode>\b[A-Z]{4,5}[0-9]{4}[AEIOW]\b)"]
}
```
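The post also mentions reworking the date filter. A minimal sketch of how the captured wastimestamp field could be parsed follows; the timestamp format and the timezone are assumptions for illustration (SystemOut.log lines typically start with something like [10/23/16 14:05:02:123 CEST], and the exact pattern depends on the server locale):

```
date {
  # Assumed Joda-time pattern for a US-locale SystemOut.log timestamp,
  # e.g. "10/23/16 14:05:02:123"; adjust to your server locale.
  match => [ "wastimestamp", "M/d/yy H:mm:ss:SSS" ]
  # The grok filter above captures the textual zone into "tz"; a fixed
  # timezone is set here as a simplifying assumption.
  timezone => "Europe/Vienna"
}
```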
I often get SystemOut.log files from customers or friends to help them analyze a problem. It is often complicated to find the right server and application that generates the real error, because most WebSphere applications (like IBM Connections or Sametime) are installed across different application servers and nodes. So you need to open multiple large files in your editor, scroll each one to the relevant timestamps, and check the preceding lines for possible error messages. Klaus Bild showed the functionality of ELK at several conferences and in his blog, so I thought about using ELK too. I started to build a virtual machine with the ELK stack (Elasticsearch, Logstash & Kibana) and imported my local logs and the logs I had received by mail.
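As an illustration of that import step, a minimal Logstash pipeline along these lines can read collected SystemOut.log files from disk and ship them to Elasticsearch. The path, host, and index name are assumptions for the sketch, not the setup from the post:

```
input {
  file {
    # Assumed drop folder for local and mailed log files.
    path => "/data/logs/**/SystemOut*.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"   # re-read files on every run while testing
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "websphere-%{+YYYY.MM.dd}"   # assumed index pattern
  }
  stdout { codec => rubydebug }           # print parsed events for verification
}
```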