LogMan.io Parser Configuration
First, specify the library to load the declarations from; this can be either ZooKeeper or the file system. Every running instance of the parser must also know which groups to load from the library, see below:
```
# Declarations
[declarations]
library=zk://zookeeper:12181/lmio/library.lib ./data/declarations
groups=cisco-asa@syslog
raw_event=event.original
tenant=tenant
timestamp=end
```
groups - names of the groups to be used from the library, separated by spaces; if a group is located in a subfolder, use a slash as the separator
raw_event - field name of the raw input event (the original log message)
tenant - field name of the tenant/client the log is received from
timestamp - field name of the timestamp attribute
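As a minimal sketch (not the actual parser implementation), the `[declarations]` section above can be read with the Python standard library; note that both `library` and `groups` hold space-separated lists:

```python
import configparser

# The [declarations] section from the configuration above, inlined for the sketch
CONF = """
[declarations]
library=zk://zookeeper:12181/lmio/library.lib ./data/declarations
groups=cisco-asa@syslog
raw_event=event.original
tenant=tenant
timestamp=end
"""

config = configparser.ConfigParser()
config.read_string(CONF)
decl = config["declarations"]

# Both `library` and `groups` are space-separated lists of values
libraries = decl["library"].split()
groups = decl["groups"].split()

print(libraries)  # ['zk://zookeeper:12181/lmio/library.lib', './data/declarations']
print(groups)     # ['cisco-asa@syslog']
```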
Next, specify which Kafka topic to read input events from and which topics to write to, depending on whether parsing was successful or not. The Kafka connection must also be configured so the parser knows which Kafka servers to connect to.
```
# Kafka connection
[connection:KafkaConnection]
bootstrap_servers=kafka:19092

[pipeline:ParsersPipeline:KafkaSource]
topic=collected
#group_id=lmioparser

# Kafka sinks
[pipeline:EnrichersPipeline:KafkaSink]
topic=parsed

[pipeline:ParsersPipeline:KafkaSink]
topic=unparsed
```
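The topic names above imply the following flow: events are consumed from `collected`, successfully parsed events continue through the enrichers to `parsed`, and events that fail parsing go to `unparsed`. A hypothetical sketch of that routing decision (the function name is illustrative, not part of the product):

```python
def choose_output_topic(parsed_ok: bool) -> str:
    """Pick the Kafka sink topic for an event based on the parsing result.

    The topic names match the [pipeline:...:KafkaSink] sections above.
    """
    return "parsed" if parsed_ok else "unparsed"

print(choose_output_topic(True))   # parsed
print(choose_output_topic(False))  # unparsed
```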
The last mandatory section specifies which Kafka topic carries information about changes in lookups (i.e. reference lists) and which ElasticSearch instance to load the lookups from.
```
# Lookup persistent storage
[asab:storage]
type=elasticsearch
elasticsearch_url=http://elasticsearch:9200

# Update lookups pipelines
[pipeline:LookupChangeStreamPipeline:KafkaSource]
topic=lookups

[pipeline:LookupModificationPipeline:KafkaSink]
topic=lookups
```
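Conceptually, when a lookup is modified, a notification is published to the `lookups` topic, and each parser instance then reloads the affected lookup from the configured ElasticSearch storage. A hypothetical notification payload is sketched below; the field names are illustrative only, the actual message format is defined by LogMan.io:

```python
import json

# Hypothetical lookup-change notification; field names are illustrative only
notification = {
    "lookup": "blacklisted_ips",   # name of the modified lookup
    "action": "updated",           # what happened to it
}

# The message would be serialized and produced to the `lookups` Kafka topic;
# consumers react by re-reading the lookup from ElasticSearch
payload = json.dumps(notification).encode("utf-8")
print(payload)
```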
An example Docker Compose service definition for the parser:

```
lmio-parser:
  image: docker.teskalabs.com/lmio/lmio-parser
  container_name: lmio-parser
  volumes:
    - ./lmio-parser:/data
  networks:
    - es-overlay
```