SeaCat Gateway is highly scalable and offers:
- Various Disaster Recovery scenarios;
- High Availability scenarios including the support of multiple geographical locations;
- Load Balancing;
- Virtualization support.
SeaCat Gateway operates in the following deployments:
- Shared on a cloud
- Dedicated on a cloud
- Dedicated on-premise
SeaCat Gateways are typically installed as multiple instances, called SeaCat Gateway Arrays.
High Availability, Load Balancing, and Discover Service
Discover Service is the central authority for pairing Applications with SeaCat Gateways and allows for the coordination of Client Connections in real time.
A higher number of SeaCat Gateways offers higher load resistance and better High Availability. When more than one SeaCat Gateway is available, various maintenance and administration operations and configuration changes can be performed without any impact on the actual traffic or service availability. It is possible to:
- Adjust the traffic load profile via the SeaCat Gateway's priority and weight settings;
- Separate anonymous Clients from authorized Clients via the SeaCat Gateway's purpose specification. More information is available in the Cryptographic Detail chapter;
- Transform the architecture without service interruption thanks to the SeaCat Gateways' reconfigurable IP addresses.
Client-side Load Balancing
SeaCat Gateway's High Availability is ensured by Client-side Load Balancing when at least two SeaCat Gateways are used. Every Application equipped with the SeaCat SDK has access to all associated SeaCat Gateways, based on information obtained from the Discover Service. The SeaCat SDK chooses a SeaCat Gateway based on the priority and weight values configured via the Discover Service.
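As an illustration of this selection rule, the sketch below first narrows the list to the most preferred priority group and then picks within it by weight. The field names (`host`, `priority`, `weight`) and the assumption that a lower `priority` value is preferred are ours for illustration, not the SeaCat SDK API:

```python
import random

def choose_gateway(gateways):
    """Pick a gateway: most preferred priority group first,
    then a weighted random choice within that group."""
    best = min(g["priority"] for g in gateways)
    candidates = [g for g in gateways if g["priority"] == best]
    total = sum(g["weight"] for g in candidates)
    pick = random.uniform(0, total)
    for g in candidates:
        pick -= g["weight"]
        if pick <= 0:
            return g
    return candidates[-1]

# Hypothetical configuration obtained from the Discover Service.
gateways = [
    {"host": "gw1.example.com", "priority": 1, "weight": 3},
    {"host": "gw2.example.com", "priority": 1, "weight": 1},
    {"host": "gw3.example.com", "priority": 2, "weight": 1},  # fallback only
]
print(choose_gateway(gateways)["host"])
```

With these values, roughly three out of four connections land on `gw1`, the rest on `gw2`, and `gw3` is used only if the priority-1 gateways are removed from the list.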
Host-side Load Balancing
High Availability of the Application Backend is ensured by Host-side Load Balancing when at least two Hosts are used. The SeaCat Gateway forwards requests from the Application to every available Host on a round-robin schedule.
The SeaCat Gateway ensures that each Client stays connected to one Application Backend for the whole session. The pinning remains active until the Client Connection times out.
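A minimal sketch of round-robin forwarding with session pinning, under the assumption that a pin is simply dropped when the Client Connection times out; the class and method names are illustrative, not the SeaCat Gateway implementation:

```python
import itertools

class HostBalancer:
    """Round-robin across Hosts, pinning each Client to one Host."""

    def __init__(self, hosts):
        self._cycle = itertools.cycle(hosts)
        self._pins = {}  # client_id -> pinned host

    def host_for(self, client_id):
        # New Clients get the next Host in the rotation;
        # known Clients keep their pinned Host for the whole session.
        if client_id not in self._pins:
            self._pins[client_id] = next(self._cycle)
        return self._pins[client_id]

    def connection_timeout(self, client_id):
        # When the Client Connection times out, the pin is released.
        self._pins.pop(client_id, None)

lb = HostBalancer(["backend-a", "backend-b"])
print(lb.host_for("client-1"))  # backend-a
print(lb.host_for("client-2"))  # backend-b
print(lb.host_for("client-1"))  # backend-a again: still pinned
```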
SeaCat produces data overhead: it identifies Clients, signs the data payload, and secures the data transfer in general. The overhead is related to the payload size. We tested payload sizes ranging from 128 bytes to 64 kilobytes. To simulate typical application behavior, each test round sends 5 HTTP POST requests with a payload and 5 HTTP GET requests for a payload file, then waits 10 seconds; the round is repeated ten times.
In practice, the payload is mostly between 512 bytes and 1 kilobyte in size.
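The test pattern above can be sketched as follows; the injected `send` callable stands in for the real HTTP transport so the pattern can be exercised without a live server, and the path layout is an assumption for illustration:

```python
import time

def run_benchmark(send, payload_sizes, rounds=10, wait=10.0):
    """Per round and payload size: 5 POSTs with a payload,
    5 GETs for the payload file, then a wait before the next round.
    `send(method, path, body)` is the injected transport."""
    log = []
    for _ in range(rounds):
        for size in payload_sizes:
            body = b"x" * size
            for _ in range(5):
                send("POST", "/upload", body)
                log.append(("POST", size))
            for _ in range(5):
                send("GET", f"/payload-{size}", None)
                log.append(("GET", size))
        time.sleep(wait)
    return log

# Dry run with a no-op transport and no wait, just to show the shape.
log = run_benchmark(lambda m, p, b: None, [128, 65536], rounds=1, wait=0)
print(len(log))  # 20: (5 POSTs + 5 GETs) for each of the two sizes
```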
The following graph compares the overhead of the SeaCat, HTTPS, and HTTP protocols with identical headers:
Protocol overhead has a direct impact on data transfer speed. Based on the HTTP overhead calculation, the SeaCat protocol consumes less data bandwidth than HTTPS and comes close to HTTP in speed.