how to limit loglevels/severity of forwarded logs: splunk is flooded with "info" ? #10
Comments
I seem to have the same issue - Splunk is getting 1.5M messages/hour, including info. However, the fluent.conf file seems to be correctly configured. Extract from fluent.conf:
Environment variable:
Would anybody have an idea why @toastbrotch and I are seeing 'info' messages in Splunk?
@gmcbrien this configuration does not control the level of logs that are sent to Splunk. It ONLY sets the logging level for the fluentd forwarder itself; it does not filter the data that it is processing.
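For illustration, roughly what that parameter means in the generated fluent.conf (a minimal sketch only; the actual output plugin and parameters in the chart's configmap may differ):

```
# @log_level only changes how verbose fluentd itself is about this plugin
# (warnings/info written to the forwarder pod's own log). It does NOT drop
# or filter any of the records being forwarded to Splunk.
<match **>
  @type splunk_hec            # output plugin assumed for this sketch
  @log_level warn             # fluentd's own logging verbosity, not record filtering
  hec_host splunk.example.com # placeholder values
  hec_port 8088
  hec_token SPLUNK_HEC_TOKEN
</match>
```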
Oops - thanks @sabre1041, that makes sense. Then please consider my 'issue' closed :)
Hm, then I misunderstood https://docs.fluentd.org/configuration/plugin-common-parameters#log_level. What is the way to filter out "info"? Just add "@log_level warn" here: https://github.com/sabre1041/openshift-logforwarding-splunk/blob/master/charts/openshift-logforwarding-splunk/templates/log-forwarding-splunk-configmap.yaml#L26 ?
We have had the same issue - we had to disable this while we figure out how to filter logs. We have 5 clusters, sending about 45M logs a day with this enabled :(
My quickfix so far to get rid of "info", "unknown" and "notice":
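(The original snippet did not survive this capture; below is a hedged sketch of what such an exclude filter could look like, assuming the forwarded records expose the severity in a `level` field as in the OpenShift logging data model.)

```
# Sketch only: drop records whose level is info, notice or unknown before
# they reach the Splunk output. The field name "level" is an assumption.
<filter **>
  @type grep
  <exclude>
    key level
    pattern /^(info|notice|unknown)$/
  </exclude>
</filter>
```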
Have you considered creating a PR and making this something that could be configured (enabled/disabled/tweaked) through the chart?
Not yet. And I had to change it as well, since my original solution also deleted the whole workload logs because they were UNKNOWN! This is the current solution I'm testing: I added a label "customer: myworkload" to each of the namespaces whose workload logs I want to receive (oc label namespace foo customer=myworkload --overwrite), and I filter on it in this rather hacky way (see the sketch below):
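(Again, the actual snippet was lost in this capture; the following is only a rough sketch of a label-based include filter along those lines. The nested key path is an assumption about how the Kubernetes namespace labels end up in the forwarded record.)

```
# Sketch only: keep records coming from namespaces that carry the
# customer=myworkload label, drop everything else matched by this filter.
# The record path below is an assumption about the metadata layout.
<filter **>
  @type grep
  <regexp>
    key $.kubernetes.namespace_labels.customer
    pattern /^myworkload$/
  </regexp>
</filter>
```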
Seems to work, despite being hacky.
Hi,
I use your sample (https://github.com/sabre1041/openshift-logforwarding-splunk/blob/master/charts/openshift-logforwarding-splunk/values.yaml) on OpenShift 4.6 with "loglevel: warn", but in Splunk I see:
- 86% of messages are level "info"
- 13% "unknown"
- 0.5% "Metadata"
- 0.01% "warning"
- 0.003% "notice"
- 0.0000... "RequestResponse"
- 0.0000... "err"
So this option does not seem to work, or at least it does not limit the forwarded messages as I thought. I forward audit, app and infra logs to Splunk. I see 3 million messages in 2 hours on a fresh setup without any workload, which instantly exploded our Splunk server & license.
How do I debug this?