
Troubleshooting

After index rollover, data are not arriving

After a rollover, Elasticsearch usually takes a few minutes to display new data.

If a few minutes have passed and the data are still missing, check the lmio-tenant-others data in Kibana's Discover or in LogMan.io UI's Discover. If there are no related data there, check the others.tenant topic in Kafka UI. The error messages should be specific enough to explain why the data cannot be stored in Elasticsearch. The usual cause is a wrong schema; see the Event Lane section or point 8 in the Migration section.

For advanced users: If you raise the index template priority in backup-lmio-tenant-events-eventlane-template (created in step 4 of the Migration section) from 0 to 2 or higher and then perform an index rollover, this old backup index template will be used to create new indices, while the new index template created by Depositor will be disregarded. This buys you more time to investigate the issue in production environments. Do not forget to lower the priority in backup-lmio-tenant-events-eventlane-template back to 0 afterwards.
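The priority change can be made in Kibana's Dev Tools. A minimal sketch follows; note that PUT replaces the whole template, so fetch the current definition first and re-submit it with only the priority changed (the index pattern and template body below are placeholders, not the real definition):

```
GET _index_template/backup-lmio-tenant-events-eventlane-template

PUT _index_template/backup-lmio-tenant-events-eventlane-template
{
  "index_patterns": ["lmio-tenant-events-eventlane-*"],
  "priority": 2,
  "template": { }
}
```

Setting "priority" back to 0 later uses the same PUT request.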

The SSD storage is filling up and Elasticsearch is the cause

Adjust the lifecycle policy in the given event lane declaration (for example, fortigate.yaml) so that data move from the hot to the warm phase sooner. By default, data become warm after 3 days. For more information on how to set a custom lifecycle policy, see the Event Lane section.
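For orientation, the underlying Elasticsearch ILM policy that such a declaration produces looks roughly like the following sketch; the policy name and the warm-phase allocation attribute are illustrative assumptions, and min_age of 1d moves data to warm after one day instead of the default three:

```
PUT _ilm/policy/lmio-tenant-events-eventlane
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "16gb" }
        }
      },
      "warm": {
        "min_age": "1d",
        "actions": {}
      }
    }
  }
}
```

In a LogMan.io installation, prefer changing the event lane declaration rather than editing the ILM policy directly, so the declaration stays the source of truth.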

What is the maximum index size and how can I change it?

Depositor's default lifecycle policy limits each primary shard to 16 GB, so the default maximum index size is 6 shards × 16 GB × 2 (primary plus replica) = 192 GB per index.
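The arithmetic behind the default limit can be sketched as:

```python
# Default maximum index size under Depositor's default lifecycle policy.
primary_shards = 6        # primary shards per index
max_shard_size_gb = 16    # max_primary_shard_size limit per primary shard
copies = 2                # each shard exists twice: primary + replica

max_index_size_gb = primary_shards * max_shard_size_gb * copies
print(max_index_size_gb)  # → 192
```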

To change the maximum index size, specify a custom lifecycle policy within the event lane declaration (for example, fortigate.yaml) and set the max_primary_shard_size attribute there. For more information on how to set a custom lifecycle policy, see the Event Lane section.
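In the resulting Elasticsearch ILM policy, max_primary_shard_size belongs to the rollover action of the hot phase. A minimal sketch (the policy name and the 30gb value are illustrative):

```
PUT _ilm/policy/lmio-tenant-events-eventlane
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "30gb" }
        }
      }
    }
  }
}
```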

There is only an lmio-tenant-events index with no number suffix, and it is not linked to any lifecycle policy

Every index managed by Depositor must end with -00000x, for instance lmio-tenant-events-eventlane-000001.

If this is not the case, check both the Docker logs and the file logs (if file logs are configured). The Docker logs can be accessed with the following command:

docker logs -f -n 1000 <lmio-depositor>

Warning

There is no simple way to move the existing index lmio-tenant-events, which has no lifecycle policy, to lmio-tenant-events-eventlane-000001. Hence, stop Depositor, delete the index lmio-tenant-events, and start Depositor again. Always check the logs after every restart of Depositor.
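After stopping Depositor, the deletion itself can be done in Kibana's Dev Tools (this is destructive and removes the data in that index, so only do it once you are sure nothing else writes to it):

```
DELETE /lmio-tenant-events
```

Once Depositor is started again, it should create a properly suffixed, lifecycle-managed index; verify this in the logs.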

There is an index with "Index lifecycle error" and "no_node_available_exception", what happened?

This issue usually happens when Elasticsearch is restarted during the index's shrink phase. The exact message is: NoNodeAvailableException[could not find any nodes to allocate index [myindex] onto prior to shrink]

To resolve the issue:

1.) Go to Kibana, Stack Management, Indices, select your index and click on its name

2.) In the index detail, click on Edit settings

3.) Inside, set the following values to null:

index.routing.allocation.require._name: null
index.routing.allocation.require._id: null

4.) Click on Save

5.) Go to Dev Tools and run the following command:

POST /myindex/_ilm/retry

Replace myindex with the name of your index.

6.) Go back to Stack Management, Indices and check that the ILM error disappeared for the given index
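Steps 1.) through 5.) can also be performed entirely in Dev Tools, which is convenient when several indices are affected (replace myindex with the name of your index):

```
PUT /myindex/_settings
{
  "index.routing.allocation.require._name": null,
  "index.routing.allocation.require._id": null
}

POST /myindex/_ilm/retry
```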

There are many logs in others and I cannot find the ones with the "interface" attribute

Kafka Console Consumer can be used to consume events from multiple topics, in this case from all topics whose names start with events.

Then you can grep for the field name in double quotes:

/usr/bin/kafka-console-consumer --bootstrap-server localhost:9092 --whitelist "events.*" | grep '"interface"'

This command gives you all incoming logs containing the "interface" attribute from all events topics.
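If a plain grep matches too loosely (for example, "interface" appearing inside a value rather than as an attribute name), the same filtering can be sketched in Python on JSON lines; the sample events below are made up for illustration:

```python
import json

def has_interface(line: str) -> bool:
    """Return True if the line is a JSON object with an "interface" attribute."""
    try:
        event = json.loads(line)
    except json.JSONDecodeError:
        return False
    return isinstance(event, dict) and "interface" in event

lines = [
    '{"interface": "port1", "action": "accept"}',  # matches
    '{"srcip": "10.0.0.1"}',                       # no "interface" key
    'not json at all',                             # not parseable
]
matching = [line for line in lines if has_interface(line)]
print(matching)  # only the first line matches
```

Such a script can be fed from the kafka-console-consumer command above via a pipe, reading sys.stdin instead of the hardcoded list.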