Doc: Add tutorial for filter-elastic_integration #15932

Open · wants to merge 10 commits into main
185 changes: 185 additions & 0 deletions docs/static/ea-integration-tutorial.asciidoc
@@ -0,0 +1,185 @@
[[ea-integrations-tutorial]]
== Using {ls} with Elastic {integrations} (Beta) tutorial




=== Process overview
* Configure Fleet to send data from Elastic Agent to Logstash
* Create an Elastic Agent policy with the necessary integrations
* Configure Logstash to use the elastic_integration filter plugin



=== Overview
The purpose of this guide is to walk through the steps necessary to configure Logstash to transform events
collected by the Elastic Agent, using the pre-built ingest pipelines that normalize data to the Elastic
Common Schema (ECS). This is possible with a beta feature in Logstash known as the elastic_integration
filter plugin.
Using this plugin, Logstash reads certain field values generated by the Elastic Agent that tell Logstash which
pipeline definitions to fetch from an Elasticsearch cluster. Logstash then uses those pipelines to process events
before sending them to their configured destinations.

=== Prerequisites

To follow this guide, you need:

* A working Elasticsearch cluster
* A Fleet Server
* An Elastic Agent configured to send its output to Logstash
* An Enterprise license
* A user configured with the minimum required privileges

This feature can also be used with a self-managed agent, but the setup and configuration details for a
self-managed agent are not covered in this guide.

=== Configure Fleet to send data from Elastic Agent to Logstash

. For a Fleet-managed agent, go to Kibana and navigate to Fleet → Settings.

Figure 1: fleet-output

. Create a new output and specify Logstash as the output type.

Figure 2: logstash-output

. Add the Logstash hosts (domain names or IP addresses) that the Elastic Agent will send data to.
. Add the client SSL certificate and the client SSL certificate key to the configuration.
At the bottom of the settings, you can choose to make this output the default for agent integrations. If you
select this option, all Elastic Agent policies will use this Logstash output configuration by default. The
Logstash side of this TLS setup is sketched after these steps.
. Click “Save and apply settings” in the bottom right-hand corner of the page.
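
Because the Fleet output is configured with client SSL certificates, the corresponding Logstash pipeline input
needs SSL enabled so it can present a server certificate and verify the agent's client certificate. The sketch
below is a minimal example of such an input; the port, paths, and certificate file names are placeholders, and
option names can vary slightly between plugin versions.

[source,txt]
-----
input {
  elastic_agent {
    port => 5055
    ssl_enabled => true
    # Server certificate and key presented to connecting agents (placeholder paths)
    ssl_certificate => "/usr/share/logstash/config/certs/logstash.crt"
    ssl_key => "/usr/share/logstash/config/certs/logstash.pkcs8.key"
    # CA used to verify the agent's client certificate
    ssl_certificate_authorities => ["/usr/share/logstash/config/certs/ca-cert.pem"]
    ssl_client_authentication => "required"
  }
}
-----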

=== Create an Elastic Agent policy with the necessary integrations

. In Kibana navigate to Fleet → Agent policies and click on “Create agent policy”.



Figure 3: create-agent-policy
. Give this policy a name, and then click on “Advanced options”.
. Change the “Output for integrations” setting to the Logstash output you created in the last step.



Figure 4: policy-output


. Click “Create agent policy” at the bottom of the flyout.
. The new policy should be listed on the Agent policies page now.
. Click on the policy name so that you can start configuring an integration.
. On the policy page, click “Add integration”. This will take you to the integrations browser, where you
can select an integration that will have data stream definitions (mappings, pipelines, etc.), dashboards,
and data normalization pipelines that convert the source data into Elastic Common Schema.

Figure 5: add-integration-to-policy
In this example we will search for and select the Crowdstrike integration.

Figure 6: crowdstrike-integration

. On the Crowdstrike integration overview page, click “Add Crowdstrike” to configure the integration.



Figure 7: add-crowdstrike
. Configure the integration to collect the needed data.
In step 2 at the bottom of the page (“Where to add this integration?”), make sure the “Existing hosts” option
is selected and that the selected agent policy is the policy you created for the Logstash output. It should be
selected by default if you are following this workflow.
. Click “Save and continue” at the bottom of the page.
A modal will appear asking whether you want to add the Elastic Agent to your hosts. If you have not
already done so, install the Elastic Agent on a host. Documentation for this process can be
found here: https://www.elastic.co/guide/en/fleet/current/elastic-agent-installation.html

Figure 8: add-elastic-agent-to-host

=== Configure Logstash to use the elastic_integration filter plugin


Create a new pipeline configuration in Logstash.

Make sure the elastic_integration plugin is installed, or install it with
`bin/logstash-plugin install logstash-filter-elastic_integration` before running the pipeline, as shown below.

A full list of configuration options can be found here: https://www.elastic.co/guide/en/logstash/current/plugins-filters-elastic_integration.html
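
The install command below is run from the Logstash home directory; the `list` command is an optional check to
confirm that the plugin is present.

[source,sh]
-----
# Install the elastic_integration filter plugin (run from the Logstash home directory)
bin/logstash-plugin install logstash-filter-elastic_integration

# Optionally confirm that the plugin is installed
bin/logstash-plugin list --verbose logstash-filter-elastic_integration
-----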

[source,txt]
-----
input {
  elastic_agent { port => 5055 }
}
filter {
  elastic_integration {
    hosts => "{es-host}:9200"
    ssl_enabled => true
    ssl_verification_mode => "certificate"
    ssl_certificate_authorities => ["/usr/share/logstash/config/certs/ca-cert.pem"]
    auth_basic_username => "elastic"
    auth_basic_password => "changeme"
    remove_field => ["_version"]
  }
}
output {
  stdout {
    codec => rubydebug # to debug data stream inputs
  }
  ## add elasticsearch
  elasticsearch {
    hosts => "{es-host}:9200"
    password => "changeme"
    user => "elastic"
    cacert => "/usr/share/logstash/config/certs/ca-cert.pem"
    ssl_certificate_verification => false
  }
}
-----
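
Once the configuration is saved (for example as `elastic-integration.conf`, a placeholder file name), the
pipeline can be started from the Logstash home directory. The `--config.reload.automatic` flag is optional and
simply reloads the pipeline when the configuration file changes.

[source,sh]
-----
# Start Logstash with the pipeline configuration created above
bin/logstash -f elastic-integration.conf --config.reload.automatic
-----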


If you are using Elastic Cloud, use the following configuration instead.

[source,txt]
-----
input {
  elastic_agent { port => 5055 }
}
filter {
  elastic_integration {
    cloud_id => "your-cloud:id"
    api_key => "api-key"
    remove_field => ["_version"]
  }
}
output {
  stdout {}
  elasticsearch {
    cloud_auth => "elastic:<pwd>"
    cloud_id => "your-cloud-id"
  }
}
}
-----

Every event sent from the Elastic Agent to Logstash contains specific meta-fields. Input events are expected
to have `data_stream.type`, `data_stream.dataset`, and `data_stream.namespace`. These fields tell Logstash which
pipelines to fetch from Elasticsearch to correctly process the event before sending it to its destination output.
Logstash frequently checks whether an integration's associated ingest pipeline has been updated or changed,
so that events are processed with the most recent version of the ingest pipeline.
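
For illustration, a minimal event carrying these meta-fields might look like the sketch below. The values are
placeholders; the Elastic Agent sets them automatically based on the integration and the namespace configured
in the policy (the dataset shown assumes the CrowdStrike example from earlier in this guide).

[source,json]
-----
{
  "message": "original event payload",
  "data_stream": {
    "type": "logs",
    "dataset": "crowdstrike.falcon",
    "namespace": "default"
  }
}
-----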


All processing occurs in Logstash.


The user or credentials specified in the elastic_integration plugin need sufficient privileges to retrieve
the appropriate monitoring information, pipeline definitions, and index templates necessary to transform the
events. The minimum required privileges can be found here: https://www.elastic.co/guide/en/logstash/current/plugins-filters-elastic_integration.html#plugins-filters-elastic_integration-minimum_required_privileges
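
For illustration only, a dedicated role and user for the plugin could be created with the Elasticsearch security
APIs along the following lines. The role name, user name, host, and the privilege set shown are assumptions;
treat the minimum required privileges documentation linked above as the authoritative list.

[source,sh]
-----
# Illustrative sketch: create a role and a user for the elastic_integration filter.
# The exact privileges required are listed in the plugin's documentation.
curl -u elastic:changeme -X POST "https://{es-host}:9200/_security/role/logstash_integration_reader" \
  -H 'Content-Type: application/json' -d '
{
  "cluster": ["monitor", "read_pipeline"]
}'

curl -u elastic:changeme -X POST "https://{es-host}:9200/_security/user/logstash_integration_user" \
  -H 'Content-Type: application/json' -d '
{
  "password": "changeme",
  "roles": ["logstash_integration_reader"]
}'
-----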

3 changes: 3 additions & 0 deletions docs/static/ea-integrations.asciidoc
@@ -80,3 +80,6 @@ output { <3>
<1> Use `filter-elastic_integration` as the first filter in your pipeline
<2> You can use additional filters as long as they follow `filter-elastic_integration`
<3> Sample config to output data to multiple destinations


include::ea-integration-tutorial.asciidoc[]