NGINX configuration

We recommend using a dedicated virtual server in NGINX for the LogMan.io Receiver, i.e. for the communication links from LogMan.io Collector to LogMan.io Receiver.

This server shares the NGINX server process and the IP address, but it is operated on a dedicated DNS domain, different from the LogMan.io Web UI. For example, the LogMan.io Web UI runs at https://logman.example.com/ and the receiver is available at https://recv.logman.example.com/. In this example, logman.example.com and recv.logman.example.com can resolve to the same IP address(es).

Multiple NGINX servers can be configured on different cluster nodes to handle incoming connections from collectors, sharing the same DNS name. We recommend implementing this option for high-availability clusters.
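
You can quickly verify this DNS setup from a shell, for example with dig; the hostnames below follow the example above and the returned addresses depend on your environment:

# Both names should resolve to the same IP address(es) in this setup
dig +short logman.example.com
dig +short recv.logman.example.com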

upstream lmio-receiver-upstream {
    server 127.0.0.1:3080; # (1)

    server node-2:3080 backup; # (2)
    server node-3:3080 backup;
}

server {
    listen 443 ssl; # (3)
    server_name recv.logman.example.com;

    ssl_certificate recv-cert.pem;  # (4)
    ssl_certificate_key recv-key.pem;

    ssl_client_certificate conf.d/receiver/client-ca-cert.pem;  # (5)
    ssl_verify_client optional;

    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'EECDH+AESGCM:EECDH+AES:AES256+EECDH:AES256+EDH';
    ssl_prefer_server_ciphers on;

    ssl_stapling on;
    ssl_stapling_verify on;

    server_tokens off;

    add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;

    location / {  # (8)
        proxy_pass http://lmio-receiver-upstream;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;

        proxy_set_header X-SSL-Verify $ssl_client_verify;  # (6)
        proxy_set_header X-SSL-Cert $ssl_client_escaped_cert;

        client_max_body_size 500M;  # (7)

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

}
  1. Points to a locally running lmio-receiver, on its public Web API port. This is the primary destination since it saves network traffic.

  2. Backup links to lmio-receiver instances running on other cluster nodes, node-2 and node-3 in this example. Backups are used when the locally running instance is not available. In a single-node installation, skip these entries completely.

  3. This is a dedicated HTTPS server running on https://recv.logman.example.com.

  4. You need to provide the SSL server key and certificate. You can use a self-signed certificate or a certificate provided by a Certificate Authority.

  5. The certificate client-ca-cert.pem is automatically created by the lmio-receiver. See the "Client CA certificate" section.

  6. This verifies the SSL certificate of the client (lmio-collector) and passes that information to lmio-receiver.

  7. lmio-collector may upload chunks of buffered logs, so larger request bodies must be allowed.

  8. The URL location path where the lmio-receiver API is exposed to collectors.
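
After editing the configuration, it is good practice to validate the syntax and reload NGINX. The commands below assume NGINX runs directly on the host; in a Docker-based deployment, run them inside the NGINX container (e.g. via docker exec) instead:

nginx -t          # validate the configuration syntax
nginx -s reload   # reload NGINX without dropping existing connections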

Verify the SSL web server

After the NGINX configuration is completed, always verify the quality of the SSL configuration, e.g. using the Qualys SSL Server Test. You should get an "A+" overall rating.
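
For a quick check from a shell, you can also inspect the TLS handshake directly; this only confirms that the server presents the expected certificate, it does not replace the full test:

# Print the subject and validity of the certificate presented by the server
openssl s_client -connect recv.logman.example.com:443 -servername recv.logman.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates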

OpenSSL command for generating self-signed server certificate
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:secp384r1 \
  -keyout recv-key.pem -out recv-cert.pem -sha256 -days 380 -nodes \
  -subj "/CN=recv.logman.example.com" \
  -addext "subjectAltName=DNS:recv.logman.example.com"

This command generates a self-signed certificate using elliptic curve cryptography with the secp384r1 curve. The certificate is valid for 380 days and includes a SAN extension to specify the hostname recv.logman.example.com. The private key and the certificate are saved to recv-key.pem and recv-cert.pem, respectively.
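
You can inspect the generated certificate before installing it, for example to confirm the subject, validity period, and SAN extension (the -ext option requires OpenSSL 1.1.1 or newer):

# Print the subject, validity dates and the subjectAltName extension
openssl x509 -in recv-cert.pem -noout -subject -dates -ext subjectAltName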

Client CA certificate

NGINX needs a client-ca-cert.pem file for the ssl_client_certificate option. This file is generated by lmio-receiver during its first launch; it is an export of the client CA certificate stored in ZooKeeper at lmio/receiver/ca/cert.der. For this reason, lmio-receiver needs to be started before this NGINX virtual server configuration is created.

The lmio-receiver generates this file at ./var/ca/client-ca-cert.pem.

docker-compose.yaml

lmio-receiver:
    image: docker.teskalabs.com/lmio/lmio-receiver
    volumes:
    - ./nginx/conf.d/receiver:/app/lmio-receiver/var/ca
    ...

nginx:
    volumes:
    - ./nginx/conf.d:/etc/nginx/conf.d
    ...
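
Once lmio-receiver has started with the volumes mounted as above, you can check that the exported client CA certificate is in place on the host; the path follows the docker-compose example:

# The file must exist before the NGINX virtual server configuration is loaded
openssl x509 -in ./nginx/conf.d/receiver/client-ca-cert.pem -noout -subject -issuer -dates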

Single DNS domain

The lmio-receiver can alternatively be collocated on the same domain and port as the LogMan.io Web UI. In this case, the lmio-receiver API is exposed on a subpath: https://logman.example.com/lmio-receiver

Snippet from the NGINX configuration for the "logman.example.com" HTTPS server:

upstream lmio-receiver-upstream {
    server 127.0.0.1:3080;

    server node-2:3080 backup;
    server node-3:3080 backup;
}

...

server {
    listen 443 ssl;
    server_name logman.example.com;

    ...

    ssl_client_certificate conf.d/receiver/client-ca-cert.pem;
    ssl_verify_client optional;

    ...

    location /lmio-receiver {
        rewrite ^/lmio-receiver/(.*) /$1 break;

        proxy_pass http://lmio-receiver-upstream;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;

        proxy_set_header X-SSL-Verify $ssl_client_verify;
        proxy_set_header X-SSL-Cert $ssl_client_escaped_cert;

        client_max_body_size 500M;

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

In this case, the lmio-collector CommLink setup must be:

connection:CommLink:commlink:
  url: https://logman.example.com/lmio-receiver/

...
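
A quick reachability check of the subpath can be done with curl; the exact HTTP response depends on the lmio-receiver API, but the request should be terminated by the logman.example.com server and proxied to the upstream rather than rejected:

# Verify TLS and routing of the /lmio-receiver subpath (the response body is not important here)
curl -sv https://logman.example.com/lmio-receiver/ -o /dev/null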

Load balancing and high availability

Load balancing is configured by the upstream section of the NGINX configuration.

upstream lmio-receiver-upstream {
    server node-1:3080;
    server node-2:3080 backup;
    server node-3:3080 backup;
}

The collector connects to a receiver via NGINX using a long-lasting WebSocket connection (CommLink). NGINX first tries to forward the incoming connection to "node-1". If that fails, it forwards the connection to one of the backups, "node-2" or "node-3". Preferably, "node-1" is "localhost" so that network traffic is minimized, but this can be reconfigured.

Because the WebSocket connection is persistent, it stays connected to the "backup" server even after the primary server comes back online. The collector reconnects "on housekeeping" (daily, during the night) to restore proper balancing.

This mechanism also provides the high-availability feature of the installation. When an NGINX or receiver instance is down, collectors connect to another NGINX instance, which forwards their connections to the available receivers.

DNS round-robin balancing is recommended for distributing the incoming web traffic across available NGINX instances. Ensure that the DNS TTL value of related entries (A, AAAA, CNAME) is set to a low value, such as 1 minute.
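
You can verify the TTL that clients actually see with dig; the second field of each answer line is the remaining TTL in seconds and should not exceed the configured value (about 60 for a 1-minute TTL):

# Show the A records and their TTLs for the receiver hostname
dig +noall +answer recv.logman.example.com A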