
TeskaLabs LogMan.io installation

Prerequisites

  • Proper setup of disks and logical volumes as prescribed in previous phases.
  • Configuration in /etc/sysctl.d/01-logman-io.conf is applied.
  • Docker is running and tladmin user is in the docker group.
  • Make sure /opt/site directory is empty.

Info

Data will be stored in /data/hdd and /data/ssd directories. Make sure these folders do not contain any content that might interfere with the installation.
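The items above can also be checked by hand; a minimal sketch, assuming the tladmin user and the paths from this guide:

```shell
# Manual prerequisite check; a sketch, complementing the prerequisites script.
# The tladmin user and the paths come from this guide.
id -nG tladmin | grep -qw docker && echo "tladmin is in the docker group"
docker info > /dev/null 2>&1 && echo "Docker daemon is running"
[ -z "$(ls -A /opt/site 2>/dev/null)" ] && echo "/opt/site is empty"
[ -z "$(ls -A /data/hdd 2>/dev/null)" ] && echo "/data/hdd is empty"
```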

Note

You can use the prerequisites script for a quick check of your system.

# Download the script
curl -s https://libsreg.z6.web.core.windows.net/prerequisites/prerequisites.sh -o /tmp/prerequisites.sh

# Download the corresponding SHA-256 checksum
curl -s https://libsreg.z6.web.core.windows.net/prerequisites/prerequisites.sh.sha256 -o /tmp/prerequisites.sh.sha256

# Verify the script’s integrity
(cd /tmp && sha256sum -c prerequisites.sh.sha256)

# Make the script executable and run it
chmod +x /tmp/prerequisites.sh

(cd /tmp && ./prerequisites.sh)

rm /tmp/prerequisites.sh /tmp/prerequisites.sh.sha256

First node or single node

1) Download installation script

curl -s https://lmio.blob.core.windows.net/library/lmio/install-ubuntu2204.sh -o /tmp/install-lmio.sh

2) Launch the installation

sudo bash /tmp/install-lmio.sh

Select "First core node"

Press < Continue >

Enter TeskaLabs Docker registry credentials

Press < Login > to proceed.

Note

The credentials are provided by TeskaLabs support. Please contact us if you don't have yours.

Fill in the node id and IP address

The node id is a hostname and it MUST be resolvable.

The IP address must be reachable from the other nodes of the cluster over the internal network. For a single-node installation, use the IP address of the machine on the fronting network.
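You can verify resolvability and reachability beforehand; a hedged sketch, where 10.0.0.1 stands in for the internal IP address of another cluster node:

```shell
# Check that this node's id (short hostname) resolves; fix /etc/hosts or DNS if not
getent hosts "$(hostname -s)"

# Check that another cluster node is reachable over the internal network
# (10.0.0.1 is a placeholder - use the real address of the first node)
ping -c 1 -W 2 10.0.0.1
```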

After you enter all the necessary information and confirm by pressing the button, the installation proceeds. This might take from a couple of minutes to half an hour. Be patient and do not interrupt the process.

Monitoring the installation

To monitor the Docker containers being enrolled, open a second terminal and run watch docker ps -a.

3) Open the Web User Interface

The TeskaLabs LogMan.io web application will be accessible on port 443, using the hostname as the domain name. In this example, https://lmio-test/.
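Before opening a browser, you can probe the endpoint from the shell; a sketch, assuming the example hostname lmio-test from this guide:

```shell
# Print the HTTP status code of the web UI; -k skips TLS verification in case
# the certificate is not (yet) trusted on this machine. lmio-test is the
# example hostname from this guide.
curl -k -s -o /dev/null -w '%{http_code}\n' https://lmio-test/
```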

4) The first node is installed

LogMan.io can run either as a single-node installation or in a cluster. If you run LogMan.io on a single machine only, your installation is finished. Continue to set up your TeskaLabs LogMan.io installation to collect logs.

Second and third node

Make sure that the second (or third) core node of the cluster conforms to the prerequisites prescribed at the top of this page. Also ensure that you can reach the first node of the cluster over the network.

If ready, use this command to start the installation. Make sure you specify the ASAB Maestro version. Use the same version as on the first cluster node.

docker run -it --rm --pull always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /opt/site:/opt/site \
  -v /opt/site/docker/conf:/root/.docker/ \
  -v /data:/data \
  -e NODE_ID=`hostname -s` \
  -e ASAB_MAESTRO_VERSION=<specify version> \  #(1)
  --network=host \
  pcr.teskalabs.com/asab/asab-governator:stable \
  python3 -m governator.bootstrap
  1. Make sure you specify the ASAB Maestro version. Use the same version as on the first cluster node.

When the GUI opens, select to install the second/third core node.

In the next screen, enter the IP address of the first cluster node to connect to.

The next screen shows the current state of the ZooKeeper cluster and lets you revise the hostname and IP address. Check or correct them and press < Build new cluster node >.

Wait for the process to finish.

The rest can be set up from the LogMan.io web application, so log in.

New instances are now running on the new cluster node (lmio-test2). Check them in the Services screen.

Second node is connected

Installing arbiter/quorum node

Proceed with these settings only if you do not plan to process data on this cluster node.

Adjust the record of the node in ZooKeeper by giving it the "arbiter" role manually. Through the Tools menu, open ZooNavigator and navigate to the /asab/nodes directory. Find the node you want to label as arbiter and add a role to it:

ip:
- XX.XX.XX.XX
roles:
- arbiter

Save the file.

Set arbiter role

Setting up the cluster

Ensure the cluster technologies are installed. Zookeeper is already installed.

Add an instance of each of the following services to the new node:

  • mongo
  • elasticsearch-master
  • telegraf
  • lmio-collector-system

Go to the Library and open model.yaml, located in the Site folder. Search for the services listed above in the model and add an instance of each on the newly installed node.

Add an instance of the mongo service

In this screenshot, you can see the model.yaml file inside the Library being modified on line 9. Add the node id of the newly installed node (lmio-test2) to the instances list of the mongo service. Continue similarly for all the services listed above. Specify the elasticsearch master instance explicitly, similarly to master-1. When ready, hit Save and apply the changes to the affected node.
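The edit looks roughly like this; a sketch only, since the exact model.yaml schema in your Library may differ (lmio-test and lmio-test2 are the example node ids from this guide):

```yaml
mongo:
  instances:
    - lmio-test
    - lmio-test2   # newly installed node added to the instances list
```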

Select the new node (lmio-test2) and hit the Apply button.

When the installation is done, select the remaining nodes one by one and hit Apply. The current changes must be applied to all cluster nodes.

Install data node

If this node is meant for log collection and data processing, install instances of the following services in a similar way:

  • nginx
  • kafka
  • lmio-receiver
  • lmio-depositor
  • elasticsearch-hot
  • elasticsearch-warm
  • elasticsearch-cold
  • lmio-lookupbuilder
  • lmio-ipaddrproc

Round-Robin DNS balancing

Load balancing of log collection is done through round-robin DNS. Make sure that your DNS server resolves the log collection hostname to the IP addresses of all nodes where you expect log collection.
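You can inspect the round-robin setup with dig; a sketch, where collector.example.com is a hypothetical log collection hostname:

```shell
# A round-robin record should return one A record per collecting node.
# collector.example.com is a hypothetical name - substitute your own.
dig +short A collector.example.com

# Count the returned records to confirm all expected nodes are listed
dig +short A collector.example.com | wc -l
```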

Installation without ASAB Maestro

Installation without the Maestro orchestration requires the following steps:

1) Install git

sudo apt install git

2) Create a folder structure

sudo mkdir -p \
/data/ssd/zookeeper/data \
/data/ssd/zookeeper/log \
/data/ssd/kafka/kafka-1/data \
/data/ssd/elasticsearch/es-master/data \
/data/ssd/elasticsearch/es-hot01/data \
/data/ssd/elasticsearch/es-warm01/data \
/data/hdd/elasticsearch/es-cold01/data \
/data/ssd/influxdb/data \
/data/hdd/nginx/log

Change the ownership of the Elasticsearch data folders:

sudo chown -R 1000:0 /data/ssd/elasticsearch
sudo chown -R 1000:0 /data/hdd/elasticsearch
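You can verify the ownership afterwards with a small check:

```shell
# Elasticsearch inside the container runs as UID 1000; both directories
# should report 1000:0 after the chown commands above.
stat -c '%u:%g' /data/ssd/elasticsearch /data/hdd/elasticsearch
```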

3) Clone the site configuration files into the /opt folder:

cd /opt
git clone https://gitlab.com/TeskaLabs/<PARTNER_GROUP>/<MY_CONFIG_REPO_PATH>

4) Log in to docker.teskalabs.com.

cd <MY_CONFIG_REPO_PATH>
docker login docker.teskalabs.com

5) Enter the repository and deploy the server-specific Docker Compose file

docker compose -f docker-compose-<SERVER_ID>.yml pull
docker compose -f docker-compose-<SERVER_ID>.yml build
docker compose -f docker-compose-<SERVER_ID>.yml up -d

6) Check that all containers are running

docker ps