Send Your Docker Container Logs to ELK (Elasticsearch, Logstash and Kibana) with Gelf Driver

Ridwan Fajar
8 min read · May 14, 2020
Fog scenery at Passangrahan Village, Cimenyan District, Bandung Regency, West Java, Indonesia

Foreword

This article is an introduction for beginners who want to manage their Docker service logs with the ELK stack. ELK stands for Elasticsearch, Logstash, and Kibana. We will use the GELF driver to ship Docker service logs to the ELK stack. Then we can see our logs visualized in a Kibana dashboard.

Imagine that you are managing more than 50 Docker services within your orchestration system. What would you do if you had to inspect logs from some of those services because of failures after a deployment? Should you trace the failure through each service's console? That is not the best approach, and tracing a single failure across the nodes of all those services could be very time-consuming.

It doesn't stop there. What would you do if you don't have time to trace or debug that failure at the moment, and only get the chance three weeks later? You might miss the container logs entirely, since they may be retained only for a short time. In that case, you need centralized log management that can keep your service logs for a longer period.

In this era of massive Docker adoption, we need tools that help us trace distributed services across the cluster and centralize their logs in a single place. The ELK (Elasticsearch, Logstash, and Kibana) stack is one solution that might suit your needs for centralized log management for your Docker services.

The other benefits of the ELK stack are:

  • It provides full-text search queries to filter relevant messages from your service logs
  • It can retain your logs for more than a year (depending on how much storage your servers have)
  • It can be run on-premise for free with the common ELK stack features (more advanced features require an upgrade to a paid subscription)
  • It has nice visualization tools for your queries
  • and many more

With this article, I'd like to show you how to manage your Docker service logs with the ELK stack.

Let's jump to the next section for a simple demo of building centralized log management with the ELK stack for your Docker services on top of Docker Compose.

A. Requirements

In this section, we will prepare the requirements for the demo. We need a web application and a key-value store to hold visitor information. We will reuse snippets from another of my articles: Build Python Web Application using Flask and Docker. Based on that article, we will have these three files for this demo:

  • main.py
import redis
from flask import Flask

app = Flask(__name__)
redis = redis.Redis(host='redis', port=6379, db=0)

@app.route('/')
def hello_world():
    return 'Hello, World!'

@app.route('/visitor')
def visitor():
    redis.incr('visitor')
    visitor_num = redis.get('visitor').decode("utf-8")
    return "Visitor: %s" % (visitor_num)

@app.route('/visitor/reset')
def reset_visitor():
    redis.set('visitor', 0)
    visitor_num = redis.get('visitor').decode("utf-8")
    return "Visitor is reset to %s" % (visitor_num)

if __name__ == '__main__':
    app.run(host='0.0.0.0')
  • requirements.txt
Flask==1.1.2
redis==3.4.1
gunicorn>=19,<20
  • Dockerfile
FROM python:3.7-alpine
RUN mkdir /app
WORKDIR /app
ADD requirements.txt /app
ADD main.py /app
RUN pip3 install -r requirements.txt
CMD ["gunicorn", "-w 4", "-b", "0.0.0.0:8000", "main:app"]

Put those three files under a directory called demo2. That directory will be our project directory. We will build the web application using Docker Compose in the next section, along with the ELK stack itself.

For further explanation of how we use the Dockerfile to build a Python web application as a Docker container and orchestrate it using Docker Compose, you may visit that article.

B. Configuring Logstash to accept messages in GELF format

Still under the demo2 directory, we will create another file called logstash.conf that stores our Logstash configuration: it accepts messages in GELF format on port 12201 via UDP, and forwards them to Elasticsearch on port 9200.

Here is the snippet for logstash.conf:

input {
  gelf {
    port => 12201
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash-%{+YYYY-MM-dd}"
  }
}

Within hosts, we specify elasticsearch as the hostname instead of localhost, because inside the Compose network containers reach each other by service name, not by localhost. On the index key, we specify an index prefixed with logstash- and suffixed with a date string that changes every day (e.g. 2020-04-20, 2020-04-21, 2020-04-22, and so on).
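Before wiring up the Docker services, you can verify this input by hand-crafting a GELF message yourself. The snippet below is a minimal sketch, assuming the stack from the next section is already running and Logstash is listening on udp://localhost:12201; the host value manual-test and the tag demo2_manual are arbitrary names for illustration:

import json
import socket
import time
import zlib

# GELF over UDP is just a zlib-compressed JSON document.
message = {
    "version": "1.1",
    "host": "manual-test",          # appears as the host field in Kibana
    "short_message": "hello from a hand-built GELF packet",
    "timestamp": time.time(),
    "level": 6,                     # syslog severity: 6 = informational
    "_tag": "demo2_manual",         # custom fields must start with "_"
}

payload = zlib.compress(json.dumps(message).encode("utf-8"))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("localhost", 12201))
sock.close()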

C. Orchestrating ELK stack along with our web application

In this section, we will run a demo using Docker Compose to simulate centralized log management for Docker services with the ELK stack. It should give you a feel for how to build this yourself. In a real setup, you might configure the ELK stack on your container-management platform or as separate instances with your cloud provider.

OK, now we have to write a docker-compose.yml file under the demo2 directory:

version: '3'
services:
  app:
    build: .
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    links:
      - redis:redis
    depends_on:
      - redis
    logging:
      driver: gelf
      options:
        gelf-address: "udp://localhost:12201"
        tag: "demo2_app"
  redis:
    image: "redis:alpine"
    expose:
      - "6379"
    logging:
      driver: gelf
      options:
        gelf-address: "udp://localhost:12201"
        tag: "demo2_redis"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.2
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
      - logstash
    logging:
      driver: gelf
      options:
        gelf-address: "udp://localhost:12201"
        tag: "demo2_kibana"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.6.2
    links:
      - elasticsearch
    volumes:
      - .:/etc/logstash
    command: logstash -f /etc/logstash/logstash.conf
    ports:
      - 12201:12201/udp
    depends_on:
      - elasticsearch

To better understand the Docker Compose structure for the app and redis services, it is recommended to revisit this article: Build Python Web Application using Flask and Docker.

Now, let's go through several parts of the configuration above:

  • We will use the GELF driver for Docker service logging, and its messages will be sent to Elasticsearch through Logstash. You need to specify gelf-address and tag in the options. We will use the UDP protocol on port 12201 to send messages from the Docker services, and each service's messages will be distinguished by the tag we specify for it. Note that gelf-address points to localhost, because the GELF driver runs inside the Docker daemon on the host, where port 12201 is published by the logstash service; this is also why logstash.conf uses the elasticsearch hostname while the driver uses localhost. Every container message will be sent to Logstash via this configuration.
  logging:
    driver: gelf
    options:
      gelf-address: "udp://localhost:12201"
      tag: "demo2_app"
  • For Elasticsearch itself, we will use the image from the Elastic official Docker registry instead of Docker Hub. We set discovery.type to single-node through the environment so Elasticsearch runs as a single-node cluster. Next, we map port 9200 in the container to port 9200 on the host.
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
  • For Logstash, we also use the image from the Elastic official Docker registry. We link it to the elasticsearch service, and mount our project directory (host) to /etc/logstash (container). We specify a custom command, logstash -f /etc/logstash/logstash.conf, which runs Logstash with our configuration from /etc/logstash/logstash.conf inside the container. In the ports key, we publish UDP port 12201 from the container to port 12201 on the host, so that the other Docker services can send log messages to it.
  logstash:
    image: docker.elastic.co/logstash/logstash:7.6.2
    links:
      - elasticsearch
    volumes:
      - .:/etc/logstash
    command: logstash -f /etc/logstash/logstash.conf
    ports:
      - 12201:12201/udp
    depends_on:
      - elasticsearch
  • For Kibana, the setup is not much different from the app service: it uses the GELF driver as well to ship its log messages.

Now let’s run our Docker Compose, and see how it works:

$ docker-compose up

Our ELK stack, along with the Flask web application and Redis, will now be running on our local machine. But there is a difference: we can no longer see logs from kibana, app, and redis in the console, because we activated the GELF driver, which sends those container messages to the ELK stack instead.
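To make sure there is something worth shipping, you can generate a bit of traffic against the Flask endpoints. Here is a small sketch using only the Python standard library, assuming the app is published on localhost:8000 as in our docker-compose.yml:

import urllib.request

# Hit each endpoint a few times so the app and redis services emit log lines.
for path in ("/", "/visitor", "/visitor", "/visitor/reset"):
    with urllib.request.urlopen("http://localhost:8000" + path) as resp:
        print(path, "->", resp.read().decode("utf-8"))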

As for why the console logs disappeared: the picture below shows that logs from those services can no longer be fetched using the docker-compose logs <service> command.

We cannot fetch container output after configuring it with the GELF driver
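If you want to double-check that those messages actually reached Elasticsearch, you can query it directly. A minimal sketch, assuming Elasticsearch is reachable on localhost:9200 as mapped in our docker-compose.yml:

import json
import urllib.request

# _cat/indices lists every index matching the pattern, one JSON object each.
url = "http://localhost:9200/_cat/indices/logstash-*?format=json"
with urllib.request.urlopen(url) as resp:
    indices = json.load(resp)

for index in indices:
    print(index["index"], "->", index["docs.count"], "documents")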

We are finished with the logging mechanism. Now, let's jump into the visualization section.

D. Visualize our container logs in Kibana

We can't see our service logs in Kibana right away; we have to load the Logstash index into Kibana first. Then we can analyze those logs further.

On the left sidebar, choose the Management menu, then hit the Index Patterns link. You will see a panel titled Index patterns. Finally, hit the Create index pattern button in the top-right corner.

Step D.1 — Start loading the Logstash index into Kibana

Since we are using Logstash, our indices will match logstash-*. With the configuration above, they will have names such as logstash-2020-01-01, logstash-2020-01-02, and so on, which is why we load them with a wildcard pattern. Finally, hit the Next step button.

Step D.2 — Choose the Logstash index with a wildcard pattern

Now you have to specify the field that will act as the time filter. We choose the @timestamp field in our Logstash index. Then hit the Create index pattern button.

Step D.3 — Choosing the desired time field as the time filter

Finally, our Logstash index, which contains our service messages, is now loaded into Kibana. We will explore it more in the next section.

Step D.4 — Our Logstash index is loaded into Kibana

E. Explore our container logs with Kibana

On the Discover page, you are able to pin fields as filters. In the pictures below, we pin the tag field, so you can specify which service you want to show in Kibana Discover.

Example E.1 — Filter the logs by the tag field with the demo2_kibana value
Example E.2 — Filter the logs by the tag field with the demo2_app value
Example E.3 — Filter the logs by the tag field with the demo2_redis value

With that basic technique, you save time by excluding unwanted logs while inspecting only certain services. You can focus on the service you care about.
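The same per-service filter can also be expressed as a plain Elasticsearch query, which is handy for scripting. A minimal sketch, assuming the GELF tag ends up in a tag field with Elasticsearch's default keyword sub-field mapping:

import json
import urllib.request

# Fetch the five most recent log lines for a single service by its GELF tag.
query = {
    "size": 5,
    "sort": [{"@timestamp": {"order": "desc"}}],
    "query": {"term": {"tag.keyword": "demo2_app"}},
}

req = urllib.request.Request(
    "http://localhost:9200/logstash-*/_search",
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    hits = json.load(resp)["hits"]["hits"]

for hit in hits:
    print(hit["_source"].get("@timestamp"), hit["_source"].get("message"))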

On the Visualize page, you can do more than on the Discover page. You can jump to it by clicking the tag field in the left sidebar of the Discover page; it will expand and show a basic chart. At the bottom of the tag field section, you will see a Visualize button that brings you to the Visualize page. There you can perform aggregations over your service logs and experiment with different fields.

For example, we count logs grouped by the tag field. As a result, you see a bar chart like the one in the picture below, which shows demo2_kibana as the tag with the highest number of logs in our Logstash index.

Example E.4 — Explore our Logstash logs using the Visualize page in Kibana
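Outside Kibana, that count-by-tag chart boils down to a terms aggregation. A minimal sketch, under the same tag.keyword mapping assumption as before:

import json
import urllib.request

# Count documents per tag value; size 0 skips the individual hits.
query = {
    "size": 0,
    "aggs": {"logs_per_tag": {"terms": {"field": "tag.keyword"}}},
}

req = urllib.request.Request(
    "http://localhost:9200/logstash-*/_search",
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    buckets = json.load(resp)["aggregations"]["logs_per_tag"]["buckets"]

for bucket in buckets:
    print(bucket["key"], "->", bucket["doc_count"])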

Conclusion

If you have a ton of Docker services spread across multiple server instances, the ELK stack is a good solution for tracing and analyzing your logs in a single place, at least if you want to manage those logs yourself.

There are multiple similar solutions, such as Datadog, Logz.io, and managed Elasticsearch in the cloud (Elastic Cloud, AWS, Azure). But if you think you can still manage it within your capacity planning, an on-premise ELK stack should be sufficient.

References

Thanks to Charalambos Paschalides for proofreading my article before it’s published. Also to Ricky Hariady for guiding me to understand centralized logging management concept.
