LAB 001: Setup
In this lab we will learn how to set up the entire stack and how to retrieve the alerts from our threat hunting platform using Slack.
During the course, each step and tool is explained in detail by the trainers. The VMs are also provided to the attendees with the complete setup:
Elasticsearch, Kibana and ElastAlert up and running (we will call this machine "Threat Hunting machine")
Windows VM up and running (Target Machine).
Slack account
BELK stands for Beats - Elasticsearch - Logstash - Kibana, a stack of open-source projects that together build one of the most powerful data ingestion, processing and visualisation engines, provided by Elastic. In these labs we will set up the stack to use it as a SIEM for intrusion detection and threat hunting. We will explore how to grow from a simple stack to a more complex and complete one, covering most of the MITRE ATT&CK scenarios for post-infection analysis. Let's start with the Beats.
Beats is the platform for single-purpose data shippers. They send data from hundreds or thousands of machines and systems to Logstash or Elasticsearch. Beats ship with different modules to collect information from a host in the network. Beats must be installed on the target machine (Windows VM).
This module (Packetbeat) will help us collect network traffic.
Steps:
Download the plugin
Copy the folder in C:\Program Files
Edit the file packetbeat.yml
to point to your running Elasticsearch instance(s)
Run it!
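The relevant part of packetbeat.yml is the output section. A minimal sketch is shown below; the host addresses are assumptions, so point them at your own Threat Hunting machine:

```yaml
# packetbeat.yml (fragment) - where to ship the captured traffic
# The IP below is an example; use the address of your Threat Hunting machine.
output.elasticsearch:
  hosts: ["http://192.168.1.10:9200"]

# Optional: lets the setup/dashboards commands reach Kibana
setup.kibana:
  host: "http://192.168.1.10:5601"
```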
If the service does not start, we can debug the error using the following command from a cmd.exe terminal in Administrator mode:
.\packetbeat.exe -c packetbeat.yml -e -v -d "*"
If everything is ok, we should see the logs printed out in the console. To check that our plugin works fine, we can check the running services and see if the status is Running.
In your ElasticSearch instance you should be able to query the index:
http://0.0.0.0:9200/packetbeat-*/_search
Winlogbeat fetches logs from the Windows Event Logs and sends them to Elasticsearch.
Download the plugin
Extract it and move it to C:\Program Files
Rename it to WinLogBeat
Edit the winlogbeat.yml
adding the URLs of your Kibana and Elasticsearch instances
Run .\winlogbeat.exe -c winlogbeat.yml
Add the fields to the Kibana dashboard using .\winlogbeat.exe setup --dashboards
After running the PowerShell installation script, make sure to start the service using Start-Service winlogbeat
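The install-and-start sequence above, in a PowerShell terminal run as Administrator (the install script ships with the Winlogbeat package):

```powershell
# From the Winlogbeat folder in C:\Program Files
PS> .\install-service-winlogbeat.ps1   # registers the Windows service
PS> Start-Service winlogbeat           # starts shipping event logs
PS> Get-Service winlogbeat             # status should read "Running"
```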
In your ElasticSearch instance you should be able to query the index:
http://0.0.0.0:9200/winlogbeat-*/_search
Auditbeat allows us to analyse running processes, subprocesses and activities on the host machine.
If we follow the guide from Elastic, we should have our plugins in C:\Program Files\
In your ElasticSearch instance you should be able to query the index:
http://0.0.0.0:9200/auditbeat-*/_search
"Elasticsearch is the distributed search and analytics engine at the heart of the Elastic Stack. Logstash and Beats facilitate collecting, aggregating, and enriching your data and storing it in Elasticsearch. Kibana enables you to interactively explore, visualise, and share insights into your data and manage and monitor the stack. Elasticsearch is where the indexing, search, and analysis magic happen.
Elasticsearch provides real-time search and analytics for all types of data. Whether you have structured or unstructured text, numerical data, or geospatial data, Elasticsearch can efficiently store and index it in a way that supports fast searches. You can go far beyond simple data retrieval and aggregate information to discover trends and patterns in your data. And as your data and query volume grows, the distributed nature of Elasticsearch enables your deployment to grow seamlessly right along with it." (Elastic)
A complete guide can be found here
"Kibana is an open source analytics and visualization platform designed to work with Elasticsearch. You use Kibana to search, view, and interact with data stored in Elasticsearch indices. You can easily perform advanced data analysis and visualize your data in a variety of charts, tables, and maps.
Kibana makes it easy to understand large volumes of data. Its simple, browser-based interface enables you to quickly create and share dynamic dashboards that display changes to Elasticsearch queries in real time." (Elastic)
A complete guide can be found here
If we correctly installed Auditbeat and WinLogBeat, we should be able to see the new indices auditbeat-7.4.0-2019, winlogbeat-7.4.0-2019 and elastalert_status in the Kibana dashboard at
http://localhost:5601/app/kibana#/management/kibana/index_pattern?_g=()
as shown in the screenshot below.
At the same time we can verify that Elasticsearch can see our target machine, using the SIEM module at
http://localhost:5601/app/siem#/hosts/allHosts
"ElastAlert is a simple framework for alerting on anomalies, spikes, or other patterns of interest from data in Elasticsearch. ElastAlert works with all versions of Elasticsearch."
If ElastAlert has been initialised correctly, we should see the Alert sign on the left menu
As we can see, two rules are already loaded from the folder ./elastalert/rules/
We are going to create a simple rule to match any Python process that is started on the target machine. Our rule will match any process named python. Simple, right?
Let's go!
The Kibana UI gives us the possibility to create rules directly in the browser. For now we will use this feature, but in the next lab we will create the files on our own and load them into the rules/ folder of ElastAlert.
So, let's create a simple rule like this:
alert: debug : the alert is logged in the ElastAlert logs (not really convenient, so later we will set up a Slack alerting system)
description : a short description of the alert
filter : the main part of our matching rule. In this case we want to catch any process that is created or destroyed named python. We use the wildcard query so we can match more than just the exact word.
index : the index from which we want to pull information. In this case the data comes from Auditbeat, so we use the Auditbeat index
name : the name of the alert, which is shown wherever the alert is published; make it as explicit as possible
realert : the minimum time before the same rule alerts again; can be seconds, minutes, hours, days etc.
type: any : every document matched by the filter triggers an alert
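Put together, the rule could look like the sketch below. The index pattern and the process.name field are assumptions based on the default Auditbeat mapping; check them against your own data:

```yaml
# Sketch of the ElastAlert rule described above (field names assumed)
name: "Python process started on target machine"
type: any
index: auditbeat-*
realert:
  minutes: 5
description: "Fires when a process whose name contains 'python' starts or stops"
filter:
- wildcard:
    process.name: "*python*"
alert:
- debug
```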
To test the rule, create a Python file test.py with the following code on the target machine, and run it with python.exe test.py
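The lab's original snippet is not reproduced here; any short script works, since the rule fires on process creation rather than on what the script does. A hypothetical stand-in:

```python
# test.py - hypothetical stand-in for the lab's snippet; the rule matches the
# python.exe process itself, not the script's contents.
import time

def beacon(n):
    """Build a few messages; printing them makes the run easy to observe."""
    return [f"beacon {i}" for i in range(n)]

if __name__ == "__main__":
    for line in beacon(3):
        print(line)
    time.sleep(2)  # keep python.exe alive briefly so Auditbeat records it
```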
If everything is ok, we should see all the Python processes started in the Kibana dashboard
The set of rules that we are going to use in these labs is the conversion of the Sigma rules, plus a few custom ones.
What are the Sigma rules?
Written by Florian Roth and Thomas Patzke, Sigma is a high-level generic language for analytics and one of the best methods so far for solving the logging signature problem. Sigma decouples rule logic from SIEM vendors and field names and provides a standard way to define rules. Sigma rules are written using the YAML syntax.
A list of available rules can be found here:
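As an illustration, a minimal Sigma rule for our python-process example might look like this. It is a sketch, not one of the official rules; field names follow the Sigma process_creation convention for Windows:

```yaml
# Hypothetical Sigma rule matching the lab's python example
title: Python Interpreter Started
status: experimental
description: Detects the start of a Python interpreter on a Windows host
logsource:
  product: windows
  category: process_creation
detection:
  selection:
    Image|endswith: '\python.exe'
  condition: selection
level: low
```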
In order to convert Sigma rules to vendor/open-source SIEM, conversion tools are available online. A fancy one is Uncoder.io. For this lab we are going to use a tool provided by the Sigma creators called sigmac
We are going to use the sigmac tool to translate the Sigma rules into ElastAlert rules.
sigmac takes as input:
--target : the alerting tool (or SIEM) we are going to use; in this case elastalert
-c : the configuration to use to convert the Sigma file. To obtain the full list of available configurations, run the tool with the --list parameter
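A typical invocation could look like the following sketch; the rule paths are examples, and we assume sigmac is on the PATH of the machine doing the conversion:

```shell
# List the available conversion configurations
sigmac --list

# Convert a Sigma rule (example path) into an ElastAlert rule
sigmac --target elastalert -c winlogbeat \
    rules/windows/process_creation/win_python_start.yml \
    > elastalert/rules/win_python_start.yaml
```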
A complete set of rules, already translated can be found here
Test your ElastAlert rule
ElastAlert provides a way to test the defined rules, both through the UI and through the CLI tool elastalert-test-rule. We strongly suggest the second option, because we can easily inspect the debug logs it produces. Because we are using Docker, to run the command line tool we will need to:
Open a shell in the elastalert Docker container: docker exec -it elastalert /bin/sh
run the tool manually loading the rule we want to test and the config file
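Inside the container, the test run looks roughly like this; the config and rule paths are examples that depend on how your image is laid out:

```shell
# Run the rule once against recent data and print match statistics
elastalert-test-rule --config /opt/elastalert/config.yaml \
    /opt/elastalert/rules/python_process.yaml
```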
After the tool runs the rule over the data, it returns a few stats, like the number of hits.
Create a new slack channel (#threat-hunting)
Add a new webhook (Add a new app)
Select the #threat-hunting channel
Save your webhook URL: https://hooks.slack.com/services/TPD2N[.................]iVSMBBFV5yn
We can modify username and icon of our bot (optional)
Putting it all together, the final payload will look like:
Click Save settings to finish the setup.
To integrate the Slack alerts in our ElastAlert rules, we need to modify the rule as follows:
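The Slack part of the rule could look like this sketch; the option names are ElastAlert's standard Slack alerter options, and the webhook URL below is a placeholder to replace with your own:

```yaml
# Replace the debug alerter with Slack (webhook URL is a placeholder)
alert:
- slack
slack_webhook_url: "https://hooks.slack.com/services/XXXXX/XXXXX/XXXXXXXX"
slack_username_override: "threat-hunting-bot"   # optional bot name
slack_channel_override: "#threat-hunting"       # optional target channel
```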
If everything is correctly set up, we can launch the Python code again. At this point we should receive our alerts in Slack.
Great! You successfully completed the first lab: Setup!
Interested? For more info about the full course, contact info@dcodx.com
If Sysmon is not installed on the Windows VM, you must install it yourself. Download