LAB 001: Setup

In this lab we will learn how to set up the entire stack and how to retrieve alerts from our threat hunting platform using Slack.

During the course, each step and tool is explained in detail by the trainers. VMs with the complete setup are also provided to the attendees.


  • Elasticsearch, Kibana and ElastAlert up and running (we will call this machine "Threat Hunting machine")

git clone
cd belk-threat-hunting
docker-compose build && docker-compose up
  • Windows VM up and running (Target Machine).

  • Slack account

What's a BELK Stack

BELK stands for Beats - Elasticsearch - Logstash - Kibana, and it is a stack of open-source projects that together build one of the most powerful data ingestion, processing and visualisation engines, provided by Elastic. In these labs we will set up the stack to use it as a SIEM for intrusion detection and threat hunting. We will explore how to grow from a simple stack to a more complex and complete one, covering most of the MITRE ATT&CK scenarios for post-infection analysis. Let's start with the Beats.

Target machine (Windows VM)


Beats is the platform for single-purpose data shippers. They send data from hundreds or thousands of machines and systems to Logstash or Elasticsearch. Beats ship with different modules to collect information from a host in the network. Beats must be installed on the target machine (Windows VM).


Packetbeat

This module will help us collect network traffic.


  1. Download the plugin

  2. Copy the folder in C:\Program Files

  3. Edit the file packetbeat.yml to point to your running Elasticsearch instance(s)

  4. Run it!
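The edit in step 3 can be sketched as follows; the hosts, ports and protocol list below are assumptions for a local setup and must be adjusted to your environment:

```yaml
# Minimal packetbeat.yml sketch -- hosts/ports are assumptions for a local setup
packetbeat.interfaces.device: 0          # capture device index (list them with: packetbeat.exe devices)

packetbeat.protocols:
  - type: dns
    ports: [53]
  - type: http
    ports: [80, 8080]

setup.kibana:
  host: "localhost:5601"                 # your Kibana instance

output.elasticsearch:
  hosts: ["localhost:9200"]              # your running Elasticsearch instance(s)
```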

If the service does not start, we can debug the error using the following command from a cmd.exe terminal in Administrator mode

.\packetbeat.exe -c packetbeat.yml -e -v -d "*"

If everything is OK, we should see the logs printed to the console. To check that the plugin works, we can look at the running services and verify that its status is Running.

In your Elasticsearch instance you should be able to query the index: */_search


Winlogbeat

This module fetches logs from the Windows Event Log and sends them to Elasticsearch.

  1. Download the plugin

  2. Extract it and move it to C:\Program Files

  3. Rename it to WinLogBeat

  4. Edit winlogbeat.yml, adding the URLs of the Kibana and Elasticsearch instances

  5. Run .\winlogbeat.exe -c winlogbeat.yml

  6. Add the fields to the Kibana dashboard using .\winlogbeat.exe setup --dashboards

  7. After running the PowerShell installation script, make sure to start the service using Start-Service winlogbeat
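A minimal winlogbeat.yml for step 4 might look like the sketch below; the hosts are assumptions for a local setup, and the Sysmon channel is only relevant once Sysmon is installed:

```yaml
# Minimal winlogbeat.yml sketch -- hosts are assumptions for a local setup
winlogbeat.event_logs:
  - name: Application
  - name: Security
  - name: System
  - name: Microsoft-Windows-Sysmon/Operational   # Sysmon events, once Sysmon is installed

setup.kibana:
  host: "localhost:5601"         # Kibana instance

output.elasticsearch:
  hosts: ["localhost:9200"]      # Elasticsearch instance
```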

If Sysmon is not installed on the Windows VM, you must install it yourself. ⬇️ Download

In your Elasticsearch instance you should be able to query the index: */_search


Auditbeat

This module allows us to analyse running processes, subprocesses and activities on the host machine.

If we follow the guide from Elastic, we should have our plugins in C:\Program Files\

In your Elasticsearch instance you should be able to query the index: */_search

Threat Hunting machine


"Elasticsearch is the distributed search and analytics engine at the heart of the Elastic Stack. Logstash and Beats facilitate collecting, aggregating, and enriching your data and storing it in Elasticsearch. Kibana enables you to interactively explore, visualise, and share insights into your data and manage and monitor the stack. Elasticsearch is where the indexing, search, and analysis magic happen.

Elasticsearch provides real-time search and analytics for all types of data. Whether you have structured or unstructured text, numerical data, or geospatial data, Elasticsearch can efficiently store and index it in a way that supports fast searches. You can go far beyond simple data retrieval and aggregate information to discover trends and patterns in your data. And as your data and query volume grows, the distributed nature of Elasticsearch enables your deployment to grow seamlessly right along with it." (Elastic)

A complete guide can be found here


"Kibana is an open source analytics and visualization platform designed to work with Elasticsearch. You use Kibana to search, view, and interact with data stored in Elasticsearch indices. You can easily perform advanced data analysis and visualize your data in a variety of charts, tables, and maps.

Kibana makes it easy to understand large volumes of data. Its simple, browser-based interface enables you to quickly create and share dynamic dashboards that display changes to Elasticsearch queries in real time." (Elastic)

A complete guide can be found here

If we correctly installed Auditbeat and WinLogBeat, we should be able to see the new indices auditbeat-7.4.0-2019, winlogbeat-7.4.0-2019 and elastalert_status in the Kibana dashboard at


as shown in the screenshot below

At the same time we can verify that Elasticsearch can see our target machine, using the SIEM module at



"ElastAlert is a simple framework for alerting on anomalies, spikes, or other patterns of interest from data in Elasticsearch. ElastAlert works with all versions of Elasticsearch."

If ElastAlert has been initialised correctly, we should see the Alert sign on the left menu

As we can see, two rules are already loaded from the folder ./elastalert/rules/

We are going to create a simple rule that matches any Python instance started on the target machine. Our rule will match any process named Python. Simple, right?

Let's go!

The Kibana UI gives us the possibility to create rules directly in the browser. For now we will use this feature, but in the next lab we will create the files ourselves and load them into the rules/ folder of ElastAlert.

So, let's create a simple rule like this:

alert:
- "debug"
description: "Detect Python"
filter:
- query:
    wildcard:
      process.name: '*Python*'   # field name assumed; adjust to your mapping
index: auditbeat-*
name: Test0
priority: 3
realert:
  seconds: 1
type: any

alert:debug : the alert is logged in the ElastAlert logs (not very convenient, so later we will set up a Slack alerting system)

description: Simple description of the alert

filter: This is the main part of our matching rule. In this case we want to catch any process that is created or destroyed, named Python. We use the wildcard query so we can match more than just the exact word.

index : the index from which we want to pull information. In this case the info comes from Auditbeat, so we will use the Auditbeat index

name : this is the name of the alert, which will be shown wherever the alert is published. Make it as explicit as possible.

realert : the minimum time before the same rule alerts again; the unit can be seconds, minutes, hours, days, etc.

type:any : the rule type. With any, every document matching the filter triggers an alert

To test the rule, create a Python file with the following code on the target machine

	print("hey")

and run it with python.exe

If everything is ok, we should see all the Python processes started in the Kibana dashboard

Sigma rules

The set of rules that we are going to use in these labs are converted Sigma rules plus a few custom ones.

What are the Sigma rules?

Written by Florian Roth and Thomas Patzke, Sigma is a high-level, generic signature language for analytics and one of the best methods so far for solving the logging-signature problem. Sigma decouples the rule logic from SIEM vendors and field names, and provides a standard way to define detection rules. Sigma rules are written in YAML syntax.
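As an illustration, a hypothetical Sigma rule matching the Python-detection example from the previous section could look like this (the rule title and values are ours, not from the official rule set):

```yaml
# Illustrative Sigma rule (hypothetical, not from the official repository)
title: Python Process Start
status: experimental
description: Detects the start of a Python interpreter on a Windows host
logsource:
  product: windows
  category: process_creation
detection:
  selection:
    Image|endswith: '\python.exe'
  condition: selection
level: low
```

Note how the rule only states the log source and the matching logic; the mapping to concrete SIEM fields happens at conversion time.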

A list of available rules can be found here:

In order to convert Sigma rules into vendor-specific/open-source SIEM formats, conversion tools are available online. For this lab we are going to use a tool provided by the Sigma creators called sigmac

From Sigma to ElastAlert

We are going to use the sigmac tool to translate the Sigma rules into ElastAlert rules.

sigmac takes as input:

  • --target the alerting tool (or SIEM) we are going to use. In this case elastalert

  • -c the configuration to use to convert the Sigma file. To obtain the full list of possible configurations, the tool can be run with the --list parameter

./sigmac --target=elastalert /Users/oc12ys/toolz/sigma/rules/windows/process_creation/win_office_shell.yml -o a.yml -c windows-audit
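The output is an ElastAlert rule in the same YAML format we used earlier. A rough sketch of the shape of a generated rule is shown below; the description, field names and query are illustrative placeholders, not the exact sigmac output:

```yaml
# Rough sketch of a sigmac-generated ElastAlert rule (illustrative, not exact output)
alert:
- debug
description: Microsoft Office product spawning a Windows shell
filter:
- query:
    query_string:
      query: (ParentImage.keyword:*\\WINWORD.EXE AND Image.keyword:*\\cmd.exe)
index: winlogbeat-*
name: win_office_shell
priority: 2
type: any
```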

A complete set of rules, already translated, can be found here

Test your ElastAlert rule

ElastAlert provides a way to test the defined rules, both through the UI and through the CLI tool elastalert-test-rule. We strongly suggest the second option, because it lets us easily inspect the debug logs produced. Because we are using Docker, to run the command-line tool we will need to:

  • open a shell in the elastalert Docker container: docker exec -it elastalert /bin/sh

  • run the tool manually, loading the rule we want to test and the config file

cd /opt/elastalert/rules; elastalert-test-rule RULE_NAME.yaml --config=../config.yaml

After the tool runs the rule over the data, it returns a few stats, such as the number of hits.

Slack setup

  • Create a new slack channel (#threat-hunting)

  • Add a new webhook (Add a new app)

  • Select the #threat-hunting channel

  • Save your webhook URL[.................]iVSMBBFV5yn

  • We can modify username and icon of our bot (optional)

Putting it all together, the final payload will look like:

"payload ={\"channel\": \"#threat-hunting\", \"username\": \"webhookbot\", \"text\": \"This is posted to #threat-hunting and comes from a bot named webhookbot.\", \"icon_emoji\": \":alert:\"}"[..............]VSMBBFV5yn

Click Save settings to finish the setup

Test our Slack bot

To integrate the Slack alerts into our ElastAlert rules, we need to modify the rule as follows:

alert:
- "slack"
slack_webhook_url: "[.............................]BBFV5yn"
description: "Detect Python"
filter:
- query:
    wildcard:
      process.name: '*Python*'   # field name assumed; adjust to your mapping
index: auditbeat-*
name: Test0
priority: 3
realert:
  seconds: 1
type: any

If everything is correctly set up, we can run the Python code again. At this point we should receive our alerts in Slack.

Great! You successfully completed the first lab: Setup!

Interesting? For more info about the full course
