diff --git a/grafana/alerting-loki-logs/step2.md b/grafana/alerting-loki-logs/step2.md
index 02557aa..49a7280 100644
--- a/grafana/alerting-loki-logs/step2.md
+++ b/grafana/alerting-loki-logs/step2.md
@@ -1,6 +1,6 @@
 # Generate sample logs
 
-1. Download and save a python file that generates logs.
+1. Download and save a Python file that generates logs.
 
    ```bash
    wget https://raw.githubusercontent.com/grafana/tutorial-environment/master/app/loki/web-server-logs-simulator.py
diff --git a/grafana/alerting-loki-logs/step3.md b/grafana/alerting-loki-logs/step3.md
index f47f7f2..311e02e 100644
--- a/grafana/alerting-loki-logs/step3.md
+++ b/grafana/alerting-loki-logs/step3.md
@@ -4,9 +4,7 @@ Besides being an open-source observability tool, Grafana has its own built-in al
 
 In this step, we’ll set up a new [contact point](https://grafana.com/docs/grafana/latest/alerting/configure-notifications/manage-contact-points/integrations/webhook-notifier/). This contact point will use the _webhooks_ integration. In order to make this work, we also need an endpoint for our webhook integration to receive the alert. We will use [Webhook.site](https://webhook.site/) to quickly set up that test endpoint. This way we can make sure that our alert is actually sending a notification somewhere.
 
-1. In your browser, **sign in** to your Grafana Cloud account.
-
-   OSS users: To log in, navigate to [http://localhost:3000]({{TRAFFIC_HOST1_3000}}), where Grafana is running.
+1. Navigate to [http://localhost:3000]({{TRAFFIC_HOST1_3000}}), where Grafana is running.
 
 1. In another tab, go to [Webhook.site](https://webhook.site/).
 
diff --git a/grafana/alerting-loki-logs/step4.md b/grafana/alerting-loki-logs/step4.md
index 6acd076..c9be810 100644
--- a/grafana/alerting-loki-logs/step4.md
+++ b/grafana/alerting-loki-logs/step4.md
@@ -18,9 +18,9 @@ In this section, we define queries, expressions (used to manipulate the data), a
 
 1. Paste the query below.
 
-```
-sum by (message)(count_over_time({filename="/var/log/web_requests.log"} != `status=200` | pattern `<_> <message> duration<_>` [10m]))
-```{{copy}}
+   ```
+   sum by (message)(count_over_time({filename="/var/log/web_requests.log"} != "status=200" | pattern "<_> <message> duration<_>" [10m]))
+   ```{{copy}}
 
 This query will count the number of log lines with a status code that is not 200 (OK), then sum the result set by message type using an **instant query** and the time interval indicated in brackets. It uses the LogQL pattern parser to add a new label called `message`{{copy}} that contains the level, method, url, and status from the log line.
 
@@ -42,7 +42,7 @@ If you’re using your own logs, modify the LogQL query to match your own log me
 
 1. Click **Preview** to run the queries.
 
-   It should return a single sample with the value 1 at the current timestamp. And, since `1`{{copy}} is above `0`{{copy}}, the alert condition has been met, and the alert rule state is `Firing`{{copy}}.
+   It should return alert instances for log lines with a status code other than 200 (OK) that meet the alert condition. The condition for the alert rule to fire is any occurrence that goes over the threshold of `0`{{copy}}. Since the Loki query has returned more than zero alert instances, the alert rule is `Firing`{{copy}}.
 
    ![Preview of a firing alert instances](https://grafana.com/media/docs/alerting/expression-loki-alert.png)
 
@@ -56,7 +56,7 @@ An [evaluation group](https://grafana.com/docs/grafana/latest/alerting/fundament
 
 To set up the evaluation:
 
-1. In **Folder**, click **+ New folder** and enter a name. For example: _loki-alerts_. This folder will contain our alerts.
+1. In **Folder**, click **+ New folder** and enter a name. For example: _web-server-alerts_. This folder will contain our alerts.
 
 1. In the **Evaluation group**, repeat the above step to create a new evaluation group. We will name it _1m-evaluation_.
 
diff --git a/grafana/alerting-loki-logs/step5.md b/grafana/alerting-loki-logs/step5.md
index e901956..850f04f 100644
--- a/grafana/alerting-loki-logs/step5.md
+++ b/grafana/alerting-loki-logs/step5.md
@@ -1,5 +1,5 @@
 # Trigger the alert rule
 
-Since the alert rule that you have created has been configured to always fire, once the evaluation interval has concluded, you should receive an alert notification in the Webhook endpoint.
+Since the Python script will continue to generate log data that matches the alert rule condition, once the evaluation interval has concluded, you should receive an alert notification in the Webhook endpoint.
 
 ![Firing alert notification details](https://grafana.com/media/docs/alerting/alerting-webhook-firing-alert.png)
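
To sanity-check the updated steps end to end, the sketch below shows one way the downloaded simulator might be run so that Loki has log lines for the LogQL query to match. It is a minimal sketch only: it assumes Python 3 is available and that the logs land in `/var/log/web_requests.log`, the path referenced in the alert query; the exact commands used by the tutorial environment may differ.

```bash
# Assumption: the simulator writes web-server-style log lines to stdout.
# Append them to the file that the LogQL query scrapes.
python3 web-server-logs-simulator.py | sudo tee -a /var/log/web_requests.log > /dev/null &

# Watch for the non-200 lines that the alert rule counts.
sudo tail -f /var/log/web_requests.log | grep -v "status=200"
```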