Using Filebeat to ship logs to Logstash

I have already written different posts on the ELK stack (Elasticsearch, Logstash and Kibana), the super-heroic application log monitoring setup. If you are not familiar with it, please check my posts here. We were using SCP to copy files from the application servers to the log server, where the logs were then read by the Logstash component. As the application moved to a microservices-based architecture, we started having more and more servers running small services, and setting up SCP from each server to the ELK server became a pain.

Time for Filebeat

This is where Filebeat came to the rescue. To quote from the official Filebeat website:

Forget using SSH when you have tens, hundreds, or even thousands of servers, virtual machines, and containers generating logs. Filebeat helps you keep the simple things simple by offering a lightweight way to forward and centralize logs and files.

Basically, what this means is that Filebeat can reside on the application server, monitor a folder location, and send the logs as events to Logstash running on the ELK server. Filebeat connects using the IP address and the port on which Logstash is listening for Filebeat events.

In this post, we will see how to configure Filebeat to post data to the Logstash server. We also need to configure Logstash to listen for and receive the events from Filebeat.

Filebeat configuration

We will start with the configuration of Filebeat. First we need to download the latest Filebeat release and install it on the application servers where the logs are generated. Please note that I am using a CentOS (RHEL-based) server and the installation instructions are based on that platform, but the steps are very similar on other platforms as well.

First off, download the latest Filebeat zip or tar.gz file from the Filebeat download page. As of this writing, Filebeat is at version 5.1.1 and I am using the link for Linux 64-bit.
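If you prefer to fetch the archive directly on the server, something like the following should work; the URL follows Elastic's artifact repository pattern for the 5.1.1 Linux 64-bit build, so adjust it to the version you are installing:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-linux-x86_64.tar.gz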

Extract the files from the archive using the command

tar -xvzf filebeat-5.1.1-linux-x86_64.tar.gz

This will create the folder filebeat-5.1.1-linux-x86_64, which contains the files required for Filebeat. The Filebeat configuration is stored in the filebeat.yml file, and we need to edit it to configure the following options:

Configure the logs path

Open the filebeat.yml file in your favorite editor (I am using vi, but you can use any command-line text editor). Find the prospector section with the paths entry in the file.
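A minimal sketch of what that section looks like in Filebeat 5.x (the path shown is a placeholder for your own log location):

filebeat.prospectors:
- input_type: log
  paths:
    - /path/to/logs/application.log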
You need to specify the location of the logs under the paths setting. I have given the name of only a single file in the path, as this is always the latest file.
In case you want Filebeat to scan all files, you can put a path like /path/to/logs/*/*.log. This will traverse all the subdirectories under /path/to/logs and find the files ending with the .log extension.
NOTE: The formatting and the indentation in the yml (YAML) file are very important. Make sure that you enter the file path exactly as shown above.

Configure the Logstash instance

We need to add the host and the port of the Logstash service to the Filebeat configuration so that the events can be shipped. By default, Filebeat is configured to send events to Elasticsearch. I have custom transformations in Logstash and would prefer to send the events via Logstash.
Open the filebeat.yml file in the editor and find the output section (a sketch of the finished configuration is shown after the steps below).
We need to comment out output.elasticsearch and the hosts entry under it using '#'. This disables posting to Elasticsearch (which is the default).
After that we need to uncomment output.logstash and the hosts entry under it, and enter the hostname and the port for the Logstash service.
The hostname is the system where Logstash is running, and the port is the one on which Logstash listens for events from Filebeat. This needs to be the same port we configure for Logstash in the logstash.conf file.
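A rough sketch of how the output section ends up looking in Filebeat 5.x (the hostname is a placeholder, and 5044 is the conventional Beats port):

#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["your-logstash-host:5044"]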

Starting Filebeat

Once the configurations are done, we can start Filebeat by running the following command.

nohup ./filebeat -e -c filebeat.yml -d "publish" > /dev/null 2>&1 &

You need to run this command from the Filebeat installation directory. You may also put it in a script and run it from a different location; in that case, make sure that the paths to the filebeat binary and the filebeat.yml file are absolute (see the sketch below).
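A minimal sketch of such a wrapper script, assuming a hypothetical installation path that you would replace with your own:

#!/bin/bash
# start-filebeat.sh - hypothetical wrapper; adjust FILEBEAT_HOME to your installation path
FILEBEAT_HOME=/opt/filebeat-5.1.1-linux-x86_64
nohup "$FILEBEAT_HOME/filebeat" -e -c "$FILEBEAT_HOME/filebeat.yml" -d "publish" > /dev/null 2>&1 &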

I am redirecting the output to /dev/null because the log Filebeat generates for each shipped log line is quite verbose, and on a high-traffic system those logs were getting really huge. You can have Filebeat keep its logs by specifying a log file name instead of /dev/null.

NOTE: nohup is used to make sure that the program is not terminated when you close the SSH session.

Logstash configuration

The configuration required in Logstash is relatively simple. We need to install a plugin in Logstash that enables communication with Filebeat, and set Logstash to listen on a port for Filebeat events.

Add the beats plugin

The beats plugin enables Logstash to receive and interpret the events sent from the Filebeat systems. This plugin is not installed by default, so we are going to install it manually.

  1. Change to the bin directory of the Logstash installation path.
  2. Run the following command
    ./logstash-plugin install logstash-input-beats
  3. Wait for the command to finish; you should get a message that the plugin was installed successfully.

Filebeat configuration in logstash.conf

Next we need to make sure that Logstash is listening for the Filebeat events as input. For that, open the logstash.conf file in the bin directory of Logstash.

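A minimal sketch of the beats input in logstash.conf (5044 is just the conventional Beats port; keep your existing filter and output sections as they are):

input {
  beats {
    port => 5044
  }
}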

The input {} section should have the beats input configured as shown above. Save and close the file.

Start logstash using the following command

nohup ./logstash -f logstash.conf &

You can use any available port number, but you need to make sure that the same port number is specified in the hosts section of the filebeat.yml file as well.

Opening the port in the firewall

One important step is to open the Logstash port in the firewall. This is mandatory to avoid connectivity issues between Filebeat and Logstash.

For CentOS 6

$ sudo iptables -I INPUT -p tcp -m tcp --dport 5044 -j ACCEPT
$ sudo service iptables save

For CentOS 7

$ sudo firewall-cmd --zone=public --add-port=5044/tcp --permanent
$ sudo firewall-cmd --reload

Final thoughts

Once the configurations are in place, Logstash will receive events from any server whenever a new log line is added to the location watched by Filebeat. This is a much more efficient method than the SCP option I was using before. Filebeat also has the inherent capability to remember the last read location and resume from there in case reading is interrupted.

Please raise your queries and suggestions in the comment box. Hope this helps someone having a hard time shipping logs.

regards
S
