# Connect a new log source and parse data
## 1. Install LogMan.io Collector
LogMan.io Collector is a component of LogMan.io that runs outside of the LogMan.io cluster. Configure LogMan.io Collector with the CommLink protocol to communicate with LogMan.io Receiver, the component within the LogMan.io cluster that is responsible for ingesting messages and storing them in the Archive.
Configure the collector to use CommLink:

```yaml
connection:CommLink:commlink:
  url: https://<your-domain>/lmio-receiver
```
The default configuration of LogMan.io Collector keeps multiple TCP and UDP ports open for common log sources.
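If your log source uses a port that is not open by default, you can add an input section alongside the CommLink connection. The following is a minimal sketch, assuming a UDP syslog source; the section name and port 10010 are illustrative, not prescribed:

```yaml
# Hypothetical UDP input listening on port 10010,
# forwarding received datagrams to the CommLink connection above
input:Datagram:udp-10010:
  address: 10010
  output: commlink
```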
## 2. Provision LogMan.io Collector
In the LogMan.io WebUI, go to the Collectors screen and click **Provision**. Fill in the identity of your LogMan.io Collector.
**Warning**

If you have access to multiple tenants, be sure to provision the LogMan.io Collector into the right one: the Collector is provisioned into the currently active tenant.
Once provisioned, the Collector sends logs to LogMan.io. You can find the new stream in the Archive with data flowing in.
## 3. Create event lane
To parse the data from the Archive, an event lane needs to be created. An event lane is a declaration that specifies how the data flows from the Archive through the crucial components and which parsing rule is applied to the stream.

In the Library, create a new file in the `/EventLanes` folder. For a single-node installation, use this template:
```yaml
define:
  type: lmio/event-lane

parsec:
  name: /Parsers/<path to parser>

kafka:
  received:
    topic: received.<tenant>.<stream>
  events:
    topic: events.<tenant>.<stream>
  others:
    topic: others.<tenant>

elasticsearch:
  events:
    index: lmio-<tenant>-events-<stream>
    settings:
      number_of_replicas: 0
  others:
    index: lmio-<tenant>-others
    settings:
      number_of_replicas: 0
```
**Example**

In this example, let's assume there is a new stream in the Archive called `linux-rsyslog-10010` in the tenant `example`. You can use the `Linux/Common` parser from the LMIO Common Library.

Create the file `/EventLanes/example/linux-rsyslog-10010.yaml`:
```yaml
define:
  type: lmio/event-lane

parsec:
  name: /Parsers/Linux/Common

kafka:
  events:
    topic: events.example.linux-rsyslog-10010
  others:
    topic: others.example
  received:
    topic: received.example.linux-rsyslog-10010

elasticsearch:
  events:
    index: lmio-example-events-linux-rsyslog-10010
    settings:
      number_of_replicas: 0
  others:
    index: lmio-example-others
    settings:
      number_of_replicas: 0
```
**Number of replicas in Elasticsearch**

This example is for a single-node installation. A single node cannot carry replicas, so `number_of_replicas` is set to zero. The default setup is a 3-node installation, where the Elasticsearch index default of `number_of_replicas: 1` applies and does not need to be specified in the event lane declaration.
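For illustration, on a 3-node cluster the same section could rely on that default simply by omitting `settings` (a sketch derived from the template above):

```yaml
# 3-node cluster: Elasticsearch default number_of_replicas: 1 applies implicitly
elasticsearch:
  events:
    index: lmio-<tenant>-events-<stream>
  others:
    index: lmio-<tenant>-others
```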
## 4. Add LogMan.io Parsec to model

Each event lane requires its own LogMan.io Parsec instance. Adjust the model to add a LogMan.io Parsec instance, using this template:
```yaml
services:
  ...
  lmio-parsec:
    instances:
      <tenant>-<stream>-<instance_no>:
        asab:
          config:
            eventlane:
              name: /EventLanes/<eventlane>.yaml
            tenant:
              name: <tenant>
        node: <node_id>
```
**Example**

In this example, the stream is named `linux-rsyslog-10010` inside the `example` tenant. The node ID is specific to your installation; let's assume it is `example_node`. The instance number (`instance_no`) must be unique for each LogMan.io Parsec instance of this tenant and stream.
```yaml
services:
  ...
  lmio-parsec:
    instances:
      example-linux-rsyslog-10010-1:
        asab:
          config:
            eventlane:
              name: /EventLanes/example/linux-rsyslog-10010.yaml
            tenant:
              name: example
        node: example_node
```
**Warning**

Make sure you use absolute paths when referencing a file or directory in the Library. For example: `/Parsers/Linux/Common`
## 5. Apply changes

Apply the changes from the Library to the installation. In the terminal, inside the `/opt/site` directory, run:

```sh
./gov.sh up <node_id>
```
An instance of LogMan.io Parsec is created and starts parsing data from the selected stream.
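To check that the instance is actually running, you can inspect the running containers, assuming the typical Docker-based deployment; the name filter below is an assumption matching the service name from the example above:

```sh
# List running containers of the LogMan.io Parsec service
docker ps --filter "name=lmio-parsec"
```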
Eventually, the parsed data appears in the Discovery screen.