# LogMan.io Parser Configuration
First, specify which library to load the declarations from; the library can be stored either in ZooKeeper or in files. Every running parser instance must also know which groups to load from the library, see below:
```ini
# Declarations
[declarations]
library=zk://zookeeper:12181/lmio/library.lib ./data/declarations
groups=cisco-asa@syslog
include_search_path=filters/parser;filters/parser/syslog
raw_event=event.original
count=count
tenant=tenant
timestamp=end
```
`groups`

- names of the groups to be used from the library, separated by spaces; if a group is located within a folder's subfolder, use a slash as separator, e.g. `parsers/cisco-asa@syslog`. If the library is empty or no groups are specified, all events, including their context items, are dumped into the `lmio-others` Kafka topic and processed by LogMan.io Dispatcher as though they were not parsed.

`include_search_path`

- folders to search for YAML files to be used later in `!INCLUDE` expressions (such as `!INCLUDE myFilterYAMLfromFiltersCommonSubfolder`) in declarations, separated by `;`. Appending an asterisk `*` after a slash in the path recursively includes all subdirectories. `!INCLUDE` expects a file name without path and without extension as input. The behavior is similar to the `-I` include option when building C/C++ code; see the sketch after this list.

`raw_event`

- field name of the input event log message (aka raw)

`tenant`

- field name the tenant/client is stored to

`count`

- field name the count of events is stored to; the count defaults to 1

`timestamp`

- field name of the timestamp attribute
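For illustration, the following sketch shows how `include_search_path` and `!INCLUDE` work together. The file names and the surrounding declaration structure are hypothetical; only the `!INCLUDE` expression itself and the search path come from the configuration above:

```yaml
# Library layout (hypothetical file names):
#
#   filters/parser/common-filter.yaml
#   filters/parser/syslog/syslog-filter.yaml
#
# With include_search_path=filters/parser;filters/parser/syslog,
# a declaration can include either file by bare name, without
# path and without the .yaml extension:

- !INCLUDE common-filter
- !INCLUDE syslog-filter
```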
Next, specify which Kafka topics to use for input and for output, depending on whether the parsing was successful or unsuccessful. The Kafka connection also needs to be configured so the parser knows which Kafka servers to connect to.
```ini
# Kafka connection
[connection:KafkaConnection]
bootstrap_servers=lm1:19092;lm2:29092;lm3:39092

[pipeline:ParsersPipeline:KafkaSource]
topic=collected
# group_id=lmioparser

# Kafka sinks
[pipeline:EnrichersPipeline:KafkaSink]
topic=parsed

[pipeline:ParsersPipeline:KafkaSink]
topic=unparsed
```
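The commented-out `group_id` option above sets the Kafka consumer group of the source. A minimal sketch of how it can be used to run several parser instances against one topic (the group name below is illustrative, not part of the product documentation):

```ini
[pipeline:ParsersPipeline:KafkaSource]
topic=collected
# All parser instances configured with the same group_id form one
# Kafka consumer group, so the partitions of the "collected" topic
# are split among them (standard Kafka consumer-group behavior).
# The value below is illustrative.
group_id=lmio-parser-cluster
```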
The last mandatory section specifies which Kafka topic to use for information about changes in lookups (i.e. reference lists) and which Elasticsearch instance to load them from.
```ini
# Lookup persistent storage
[asab:storage]  # this section is used by lookups
type=elasticsearch

[elasticsearch]
url=http://elasticsearch:9200

# Update lookups pipelines
[pipeline:LookupChangeStreamPipeline:KafkaSource]
topic=lookups

[pipeline:LookupModificationPipeline:KafkaSink]
topic=lookups
```
## Installation

### Docker Compose
```yaml
lmio-parser:
  image: docker.teskalabs.com/lmio/lmio-parser
  volumes:
    - ./lmio-parser:/data
```
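A slightly fuller sketch of the same service entry, assuming the configuration shown above is saved as `./lmio-parser/lmio-parser.conf`. The file name, the `-c` option (common in ASAB-based services), and the `kafka`/`zookeeper` service names are assumptions, so verify them against your deployment:

```yaml
lmio-parser:
  image: docker.teskalabs.com/lmio/lmio-parser
  volumes:
    # The host folder ./lmio-parser is mounted as /data inside the
    # container; place lmio-parser.conf there (hypothetical name).
    - ./lmio-parser:/data
  # -c points the service at its configuration file (assumption).
  command: -c /data/lmio-parser.conf
  depends_on:
    # Service names are illustrative.
    - kafka
    - zookeeper
```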