Cluster

TeskaLabs LogMan.io can be deployed on a single server (also called a "node") or in a cluster setup. TeskaLabs LogMan.io also supports geo-clustering.

Geo-clustering

Geo-clustering is a technique that provides redundancy against failures by replicating data and services across multiple geographic locations. The goal is to minimize the impact of unforeseen failures, disasters, or disruptions in one location by ensuring that the system can continue to operate without interruption from another location.

Geo-clustering involves deploying multiple instances of LogMan.io across different geographic regions or data centers and configuring them to work together as a single logical entity. These instances are linked by a dedicated network connection, which enables them to communicate and coordinate their actions in real time.

One of the main benefits of geo-clustering is that it provides a high level of redundancy against failures. In the event of a failure in one location, the remaining instances of the system take over and continue to operate without disruption. This not only helps to ensure high availability (HA) and uptime, but also reduces the risk of data loss and downtime.

Another advantage of geo-clustering is that it can provide better performance and scalability by enabling load balancing and resource sharing across multiple locations. This means that resources can be dynamically allocated and adjusted to meet changing demands, ensuring that the system is always optimized for performance and efficiency.

Overall, geo-clustering is a powerful technique that helps organizations ensure high availability, resilience, and scalability for their critical applications and services. By replicating resources across multiple geographic locations, organizations can minimize the impact of failures and disruptions, while also improving performance and efficiency.

Locations

Location "A"

Location "A" is the first location to be build. In the single node setup, it is also the only location.

Node lma1 is the first server of the cluster to be built.

Nodes in this location are named "Node lmaX", where X is the sequence number of the server (e.g. 1, 2, 3, 4, and so on). If you run out of numbers, continue with lowercase letters (e.g. a, b, c, and so on).

Please refer to the recommended hardware specification for details about nodes.

Locations B, C, D, and so on

Locations B, C, D, and so on are the next locations of the cluster.

Nodes in these locations are named "Node lmLX", where L is a lowercase letter that represents the location in alphabetical order (e.g. a, b, c) and X is the sequence number of the server (e.g. 1, 2, 3, 4, and so on). If you run out of numbers, continue with lowercase letters (e.g. a, b, c, and so on).
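
The following minimal sketch illustrates this naming scheme. It is not part of LogMan.io; the node_name helper is hypothetical, and the mapping of sequence numbers above 9 to lowercase letters (10 → "a", 11 → "b", ...) is our reading of the "continue with lowercase letters" rule.

```python
import string

def node_name(location: str, sequence: int) -> str:
    """Compose a node name such as "lma1" or "lmb2".

    location: a single lowercase letter identifying the location
              in alphabetical order (a, b, c, ...).
    sequence: sequence number of the server within the location,
              starting at 1; after 9 the scheme continues with
              lowercase letters (a, b, c, ...).
    """
    if sequence <= 9:
        suffix = str(sequence)
    else:
        # Assumed continuation: 10 -> "a", 11 -> "b", and so on
        suffix = string.ascii_lowercase[sequence - 10]
    return f"lm{location}{suffix}"

# node_name("a", 1)  -> "lma1"   (Location A, first server)
# node_name("b", 2)  -> "lmb2"   (Location B, second server)
# node_name("a", 10) -> "lmaa"   (Location A, tenth server)
```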

Please refer to the recommended hardware specification for details about nodes.

Coordinating location "X"

The cluster MUST have an odd number of locations to avoid the split-brain problem. For that reason, we recommend building a small, coordinating location with one node (Node lmx1). We recommend running Node lmx1 on a virtualisation platform rather than on physical hardware.

No data (logs, events) are stored at this location.
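
To see why an odd number of locations matters, consider the quorum arithmetic of a majority-based consensus: the cluster keeps operating only while a majority of locations, floor(n/2) + 1, can reach each other. The sketch below is illustrative only, assuming a simple majority rule; the quorum function is not a LogMan.io API.

```python
def quorum(locations: int) -> int:
    """Smallest majority of locations: floor(n / 2) + 1."""
    return locations // 2 + 1

for n in (2, 3, 4, 5):
    print(f"{n} locations: quorum = {quorum(n)}, "
          f"tolerated location failures = {n - quorum(n)}")

# 2 locations: quorum is 2, so losing (or being cut off from) either
#              location stops the whole cluster.
# 3 locations: quorum is 2, so one location can fail and the remaining
#              two still form a majority -- hence the small coordinating
#              location "X", which brings the count to an odd number.
# 4 locations: a symmetric network split (2 vs 2) leaves no side with
#              the quorum of 3, so an even count adds no extra tolerance.
```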

Types of nodes

Core node

The first three nodes in the cluster are called core nodes. Core nodes form the consensus within the cluster, ensuring consistency and coordinating activities across the cluster.

Peripheral nodes

Peripheral nodes are nodes that do not participate in the cluster consensus.

Cluster layouts

Example of the cluster layout

Schema: Example of the cluster layout.

Single node "cluster"

Node: lma1 (Location A, Server 1).

Two big and one small node

Nodes: lma1, lmb1 and lmx1.

Three nodes, three locations

Nodes: lma1, lmb1 and lmc1.

Four big and one small node

Nodes: lma1, lma2, lmb1, lmb2 and lmx1.

Six nodes, three locations

Nodes: lma1, lma2, lmb1, lmb2, lmc1 and lmc2.

Bigger clusters

Bigger clusters typically introduce a specialization of nodes.