Just shy of a day after starting, the journaler begins to error out, showing 'restarting' as its status in Docker. The container logs contain repeated errors indicating that the database's max-series-per-database limit has been exceeded.
INFO:pika.adapters.utils.connection_workflow:Pika version 1.1.0 connecting to ('172.18.0.6', 5672)
INFO:pika.adapters.utils.io_services_utils:Socket connected: <socket.socket fd=6, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('172.18.0.5', 38098), raddr=('172.18.0.6', 5672)>
INFO:pika.adapters.utils.connection_workflow:Streaming transport linked up: (<pika.adapters.utils.io_services_utils._AsyncPlaintextTransport object at 0x7f9bb372f730>, _StreamingProtocolShim: <SelectConnection PROTOCOL transport=<pika.adapters.utils.io_services_utils._AsyncPlaintextTransport object at 0x7f9bb372f730> params=>).
INFO:pika.adapters.utils.connection_workflow:AMQPConnector - reporting success: <SelectConnection OPEN transport=<pika.adapters.utils.io_services_utils._AsyncPlaintextTransport object at 0x7f9bb372f730> params=>
INFO:pika.adapters.utils.connection_workflow:AMQPConnectionWorkflow - reporting success: <SelectConnection OPEN transport=<pika.adapters.utils.io_services_utils._AsyncPlaintextTransport object at 0x7f9bb372f730> params=>
INFO:pika.adapters.blocking_connection:Connection workflow succeeded: <SelectConnection OPEN transport=<pika.adapters.utils.io_services_utils._AsyncPlaintextTransport object at 0x7f9bb372f730> params=>
INFO:sos-journaler:Connected to RabbitMQ
INFO:pika.adapters.blocking_connection:Created channel=1
Traceback (most recent call last):
  File "/sos-journaler/main.py", line 40, in <module>
    main()
  File "/sos-journaler/main.py", line 36, in main
    channel.start_consuming()
  File "/usr/local/lib/python3.9/site-packages/pika/adapters/blocking_connection.py", line 1866, in start_consuming
    self._process_data_events(time_limit=None)
  File "/usr/local/lib/python3.9/site-packages/pika/adapters/blocking_connection.py", line 2027, in _process_data_events
    self.connection.process_data_events(time_limit=time_limit)
  File "/usr/local/lib/python3.9/site-packages/pika/adapters/blocking_connection.py", line 834, in process_data_events
    self._dispatch_channel_events()
  File "/usr/local/lib/python3.9/site-packages/pika/adapters/blocking_connection.py", line 566, in _dispatch_channel_events
    impl_channel._get_cookie()._dispatch_events()
  File "/usr/local/lib/python3.9/site-packages/pika/adapters/blocking_connection.py", line 1493, in _dispatch_events
    consumer_info.on_message_callback(self, evt.method,
  File "/sos-journaler/sos_journaler/message_handling.py", line 19, in on_message
    self._db.write_points([point])
  File "/usr/local/lib/python3.9/site-packages/influxdb/client.py", line 594, in write_points
    return self._write_points(points=points,
  File "/usr/local/lib/python3.9/site-packages/influxdb/client.py", line 672, in _write_points
    self.write(
  File "/usr/local/lib/python3.9/site-packages/influxdb/client.py", line 404, in write
    self.request(
  File "/usr/local/lib/python3.9/site-packages/influxdb/client.py", line 369, in request
    raise InfluxDBClientError(err_msg, response.status_code)
influxdb.exceptions.InfluxDBClientError: 400: {"error":"partial write: max-series-per-database limit exceeded: (1000000) dropped=1"}
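The 400 comes from InfluxDB 1.x's per-database series limit (default 1,000,000). As a sketch of a stopgap, assuming a stock `influxdb.conf`, the limit can be raised or disabled in the `[data]` section; note this only postpones the failure if the journaler is writing an unbounded tag:

```toml
[data]
  # Default is 1000000; 0 disables the check entirely.
  # Disabling it does not fix the underlying cardinality growth.
  max-series-per-database = 0
```

After changing the setting, the influxdb container has to be restarted for it to take effect.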
A `du -h` of /var/lib/influxdb/data/fixm shows about 11 GB of storage in use.
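A likely root cause is a high-cardinality tag on the points the journaler writes: in InfluxDB 1.x, every distinct (measurement, tag-set) combination is a new series, so a tag whose value is effectively unique per message creates a new series on every write. A minimal sketch of the arithmetic (the tag names below are hypothetical, not taken from sos-journaler's schema):

```python
# Series cardinality in InfluxDB 1.x is bounded above by the product
# of the number of distinct values per tag key on one measurement.
def series_count(tag_values: dict) -> int:
    """Upper bound on series for one measurement."""
    count = 1
    for values in tag_values.values():
        count *= len(set(values))
    return count

# A bounded tag set stays small no matter how many points are written.
bounded = series_count({"facility": ["ZNY", "ZDC"], "status": ["active", "closed"]})

# A unique-per-point tag (e.g. a message ID) grows with every write
# and reaches the default 1,000,000-series limit quickly.
unbounded = series_count({"facility": ["ZNY", "ZDC"],
                          "msg_id": [f"m{i}" for i in range(500_000)]})

print(bounded)    # 4
print(unbounded)  # 1000000
```

If a tag like this exists in the journaler's points, moving it from a tag to a field would stop the series growth, since fields are not indexed into series.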