LogMan.io Baseliner configuration¶
LogMan.io Baseliner requires the following dependencies:
- Apache ZooKeeper
- NGINX (for production deployments)
- Apache Kafka
- MongoDB with the /data/db folder mapped to SSD (/data/ssd/mongo/data)
- Elasticsearch
- SeaCat Auth
- LogMan.io Library with a /Baselines folder and a schema in the /Schemas folder
MongoDB data folder location
When using Baseliner, MongoDB MUST have its data folder located on an SSD or another fast drive, not on an HDD. Keeping the MongoDB data folder on an HDD slows down every service that uses MongoDB.
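For illustration only, assuming a Docker Compose deployment (the service name and image tag below are hypothetical), the mapping can be expressed as a volume bind using the paths from this section:
services:
  mongodb:
    image: mongo:6  # hypothetical image tag
    volumes:
      # Map the SSD-backed host path to MongoDB's data directory
      - /data/ssd/mongo/data:/data/db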
Example¶
This is the most basic configuration required for each instance of LogMan.io Baseliner:
[declarations]
# /Baselines is the default path
groups=/Baselines
[tenants]
ids=default
[pipeline:BaselinerPipeline:KafkaSource]
topic=^events.tenant.*
[pipeline:OutputPipeline:KafkaSink]
topic=complex.tenant
[zookeeper]
servers=zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
[library]
providers=zk:///library
[kafka]
bootstrap_servers=kafka-1:9092,kafka-2:9092,kafka-3:9092
[elasticsearch]
url=http://es01:9200/
[mongodb.storage]
mongodb_uri=mongodb://mongodb1,mongodb2,mongodb3/?replicaSet=rs0
mongodb_database=baseliners
[auth]
multitenancy=yes
public_keys_url=http://localhost:8081/openidconnect/public_keys
ZooKeeper¶
Specify the locations of the ZooKeeper servers in the cluster:
[zookeeper]
servers=zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
Hint
For non-production deployments, a single ZooKeeper server can be used.
Library¶
Specify the path(s) to the Library to load declarations from:
[library]
providers=zk:///library
Hint
Since the ECS.yaml schema in /Schemas is utilized by default, consider using the LogMan.io Common Library.
Kafka¶
Define the Kafka cluster's bootstrap servers:
[kafka]
bootstrap_servers=kafka-1:9092,kafka-2:9092,kafka-3:9092
Hint
For non-production deployments, a single Kafka server can be used.
Elasticsearch¶
Specify the URLs of the Elasticsearch master nodes.
Elasticsearch is necessary for using lookups, e.g. as a !LOOKUP expression or a lookup trigger.
[elasticsearch]
url=http://es01:9200
username=MYUSERNAME
password=MYPASSWORD
MongoDB¶
Specify the URL of the MongoDB cluster with a replica set.
MongoDB stores the baselines and counters of incoming events.
[mongodb.storage]
mongodb_uri=mongodb://mongodb1,mongodb2,mongodb3/?replicaSet=rs0
mongodb_database=baseliners
Auth¶
The Auth section enables multitenancy, restricting baseline access to only users with access to the specified tenant:
[auth]
multitenancy=yes
public_keys_url=http://localhost:8081/openidconnect/public_keys
Input¶
The events for the baselines are read from the Kafka topics matching the configured pattern:
[pipeline:BaselinerPipeline:KafkaSource]
topic=^events.tenant.*
Declarations (optional)¶
Define the path for baseline declarations. The default path is /Baselines, and the default fallback schema is /Schemas/ECS.yaml.
If you are using a schema other than ECS (the Elastic Common Schema), you can customize the schema path.
[declarations]
groups=/Baselines
schema=/Schemas/ECS.yaml
Tenants¶
Specify the tenant for which to create the baseline. You can list multiple tenants, separating IDs with a comma, but it is recommended to have just one tenant per baseline.
[tenants]
ids=tenant1
tenant_url=http://localhost:8080/tenant
It is recommended to run at least one instance of Baseliner per tenant. In most cases, a single instance per tenant is appropriate.
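For illustration only (the tenant IDs are placeholders), an instance may list several tenants, but the recommended layout gives each tenant its own instance:
# Possible, but not recommended: one instance serving several tenants
[tenants]
ids=tenant1,tenant2
# Recommended: each Baseliner instance is configured with a single tenant
[tenants]
ids=tenant1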
Output¶
If triggers are utilized, you can change the default topic for the output pipeline:
[pipeline:OutputPipeline:KafkaSink]
topic=complex.tenant
Web APIs¶
The Baseliner provides one web API, designed for communication with the UI.
[web]
listen=0.0.0.0 8999
The default port of the public web API is tcp/8999.
This port is designed to serve as the NGINX upstream for connections from the LogMan.io UI.
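As a non-authoritative sketch of that NGINX setup, assuming the Baseliner container is reachable as lmio-baseliner and the API is exposed under /api/lmio-baseliner/ (both names are assumptions, not taken from this documentation):
# Inside the existing http {} context:
upstream lmio-baseliner-api {
    server lmio-baseliner:8999;  # hypothetical host name of the Baseliner instance
}
# Inside the existing server {} block:
location /api/lmio-baseliner/ {
    proxy_pass http://lmio-baseliner-api/;
}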