SecureLink: Protocol-Hardened Data Gateway for Multi-Site Operations
Sustained multi-site data routing on constrained WAN links with sub-100ms DNS-based failover, custom TCP congestion control, and zero dependency on external cloud infrastructure.

Operational systems spanning multiple physical sites need reliable, low-latency data routing between locations without relying on external cloud infrastructure. Standard application-layer networking solutions introduce dependencies on TLS libraries, external DNS resolvers, and cloud-hosted routing services that are incompatible with the security and availability requirements of systems that must operate when commercial internet access is unavailable or untrusted.
The solution is a protocol-level data gateway built at the transport layer. The TCP implementation includes Reno congestion control tuned for constrained WAN links with high latency variability. The HTTP layer implements connection pooling and request pipelining to keep link utilisation high without head-of-line blocking on slow links. DNS-based adaptive routing monitors link health continuously and switches to a configured backup site connection within 100 milliseconds of primary link failure detection, with no application-visible data loss during the transition.
Custom TCP Transport with Congestion Control
Reno congestion control tuned for variable-latency WAN links
The TCP implementation handles the full connection lifecycle: three-way handshake (SYN, SYN-ACK, ACK), windowed data transfer, retransmission on loss, and graceful shutdown. Congestion control follows the Reno algorithm with parameters tuned for the higher RTTs and bursty loss characteristics typical of constrained WAN links between operational sites. Written in C with no dependency on OS networking stack components beyond raw socket access.
- Full TCP connection lifecycle: connect, data transfer, FIN/RST teardown
- Sliding window flow control with receiver-advertised window
- Reno congestion control: slow start, congestion avoidance, fast retransmit, fast recovery
- Retransmission timeout with exponential backoff
- Nagle algorithm configurable: disabled for latency-sensitive payloads
- Zero external library dependency: all TCP logic is first-party C code
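The window-update rules listed above can be sketched as a small state machine. This is a minimal illustration of Reno's transitions (RFC 5681), not the project's actual code: the `reno_state` struct, function names, and tracking `cwnd` in whole MSS units are all simplifying assumptions.

```c
#include <stdint.h>

/* Illustrative Reno state; cwnd and ssthresh in MSS units (assumption). */
struct reno_state {
    uint32_t cwnd;      /* congestion window */
    uint32_t ssthresh;  /* slow-start threshold */
    uint32_t dup_acks;  /* consecutive duplicate-ACK count */
    uint32_t acked;     /* ACKs counted toward the next CA increment */
};

static void reno_init(struct reno_state *s, uint32_t initial_ssthresh) {
    s->cwnd = 1;
    s->ssthresh = initial_ssthresh;
    s->dup_acks = 0;
    s->acked = 0;
}

/* A new (non-duplicate) ACK arrived. */
static void reno_on_new_ack(struct reno_state *s) {
    if (s->dup_acks >= 3)
        s->cwnd = s->ssthresh;   /* exit fast recovery: deflate the window */
    s->dup_acks = 0;
    if (s->cwnd < s->ssthresh) {
        s->cwnd++;               /* slow start: +1 MSS per ACK (doubles per RTT) */
    } else if (++s->acked >= s->cwnd) {
        s->cwnd++;               /* congestion avoidance: +1 MSS per RTT */
        s->acked = 0;
    }
}

/* A duplicate ACK arrived; returns 1 when the caller should fast-retransmit. */
static int reno_on_dup_ack(struct reno_state *s) {
    if (++s->dup_acks == 3) {
        s->ssthresh = s->cwnd / 2 > 2 ? s->cwnd / 2 : 2;
        s->cwnd = s->ssthresh + 3;   /* enter fast recovery */
        return 1;
    }
    if (s->dup_acks > 3)
        s->cwnd++;                   /* inflate window per extra dup ACK */
    return 0;
}

/* Retransmission timeout: collapse back to slow start. */
static void reno_on_rto(struct reno_state *s) {
    s->ssthresh = s->cwnd / 2 > 2 ? s->cwnd / 2 : 2;
    s->cwnd = 1;
    s->dup_acks = 0;
    s->acked = 0;
}
```

The third duplicate ACK halves `ssthresh` and triggers fast retransmit; a subsequent full ACK deflates the window back to `ssthresh` and resumes congestion avoidance, while an RTO always collapses to a one-segment window.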
HTTP/1.1 Request Pipeline with Connection Pooling
Sustained throughput on slow links without head-of-line blocking
The HTTP layer implements persistent connections with a configurable pool size per destination host. Request pipelining allows multiple in-flight requests without waiting for each response, improving link utilisation on high-latency paths. A request scheduler detects and avoids head-of-line blocking by reordering independent requests when a slow response is holding up the pipeline.
- HTTP/1.1 keep-alive with configurable connection pool depth per host
- Request pipelining: multiple in-flight requests per connection
- Head-of-line blocking avoidance via independent request reordering
- Connection idle timeout and graceful pool drain
- Chunked transfer encoding for streaming payloads
- HTTP response parser handles malformed partial responses without crashing
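A per-host pool of the kind described might look like the following sketch. The fixed-size slot array, the `pool_*` names, and the timeout constant are illustrative assumptions, not the project's real API; socket dialing and closing are left to the caller.

```c
#include <time.h>

/* Hypothetical per-host connection pool (names and sizes are assumptions). */
#define POOL_DEPTH 4
#define IDLE_TIMEOUT_SEC 30

enum slot_state { SLOT_EMPTY, SLOT_IDLE, SLOT_BUSY };

struct pool_slot {
    enum slot_state state;
    int fd;               /* underlying socket (placeholder) */
    time_t idle_since;    /* when the slot last went idle */
};

struct host_pool {
    struct pool_slot slots[POOL_DEPTH];
};

/* Check out a connection: reuse an idle one before opening a new slot. */
static int pool_checkout(struct host_pool *p) {
    for (int i = 0; i < POOL_DEPTH; i++)
        if (p->slots[i].state == SLOT_IDLE) {
            p->slots[i].state = SLOT_BUSY;
            return i;                        /* keep-alive reuse */
        }
    for (int i = 0; i < POOL_DEPTH; i++)
        if (p->slots[i].state == SLOT_EMPTY) {
            p->slots[i].state = SLOT_BUSY;   /* caller dials the socket */
            return i;
        }
    return -1;                               /* pool exhausted: caller queues */
}

static void pool_checkin(struct host_pool *p, int i, time_t now) {
    p->slots[i].state = SLOT_IDLE;
    p->slots[i].idle_since = now;
}

/* Reap connections idle past the timeout (keep-alive expiry). */
static int pool_reap(struct host_pool *p, time_t now) {
    int reaped = 0;
    for (int i = 0; i < POOL_DEPTH; i++)
        if (p->slots[i].state == SLOT_IDLE &&
            now - p->slots[i].idle_since >= IDLE_TIMEOUT_SEC) {
            p->slots[i].state = SLOT_EMPTY;  /* caller closes the fd */
            reaped++;
        }
    return reaped;
}
```

Preferring an idle slot over a fresh dial keeps connection setup off the critical path on high-latency links; the reaper implements the idle timeout and graceful pool drain noted above.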
DNS-Based Adaptive Failover
Sub-100ms site switchover with zero manual reconfiguration
The gateway monitors the health of the primary site connection using lightweight periodic probes. When probe failure thresholds are exceeded, the DNS resolver for the gateway's configured hostname is updated to point to the backup site, and active connections are gracefully migrated. The entire switchover completes before the TCP retransmission timeout expires on active sessions, preserving application-layer continuity.
- Continuous link health probing on configurable interval
- Probe failure threshold: 1 to 5 consecutive failures before failover
- DNS record TTL set to minimal value for fast propagation
- Active connection migration: open sessions moved to backup host
- Failback to primary on restored connectivity with hold-down timer
- All failover events logged with timestamp, direction, and trigger probe result
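The probe-threshold, failover, and hold-down behaviour listed above can be sketched as a small state machine. The `failover` struct, function names, and millisecond clock are assumptions for illustration; the DNS update and connection migration are signalled to the caller rather than performed here.

```c
/* Hypothetical failover state machine (names are assumptions). */
enum site { SITE_PRIMARY, SITE_BACKUP };

struct failover {
    enum site active;
    int fail_threshold;      /* consecutive probe failures before failover */
    int consecutive_fails;
    long holddown_until_ms;  /* no failback to primary before this time */
    long holddown_ms;        /* hold-down window started at each failover */
};

static void failover_init(struct failover *f, int threshold, long holddown_ms) {
    f->active = SITE_PRIMARY;
    f->fail_threshold = threshold;
    f->consecutive_fails = 0;
    f->holddown_until_ms = 0;
    f->holddown_ms = holddown_ms;
}

/* Feed one probe result for the primary link. Returns 1 when the active
 * site changed; the caller then updates the DNS record, migrates open
 * sessions, and logs the event with its trigger probe result. */
static int failover_on_probe(struct failover *f, int primary_ok, long now_ms) {
    if (primary_ok) {
        f->consecutive_fails = 0;
        /* Fail back only after the hold-down timer expires. */
        if (f->active == SITE_BACKUP && now_ms >= f->holddown_until_ms) {
            f->active = SITE_PRIMARY;
            return 1;
        }
        return 0;
    }
    if (f->active == SITE_PRIMARY &&
        ++f->consecutive_fails >= f->fail_threshold) {
        f->active = SITE_BACKUP;
        f->consecutive_fails = 0;
        f->holddown_until_ms = now_ms + f->holddown_ms;
        return 1;                    /* failover to backup */
    }
    return 0;
}
```

The hold-down timer is what prevents flapping: a primary that recovers briefly inside the window does not trigger an immediate failback.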
Correct TCP congestion control requires accurate RTT measurement and timeout calculation precisely where the implementation is most stressed: under high loss rates and variable latency, slow start, fast retransmit, and retransmission timeout interact in ways that must be handled carefully to avoid connection collapse under load.
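The timeout half of that interaction rests on the standard smoothed-RTT estimator. A minimal sketch per RFC 6298 (alpha = 1/8, beta = 1/4), with struct names, millisecond units, and the min/max clamps as assumptions:

```c
/* Sketch of the RFC 6298 RTO estimator; times in milliseconds (assumption). */
struct rto_state {
    long srtt;    /* smoothed RTT */
    long rttvar;  /* RTT variance estimate */
    long rto;     /* current retransmission timeout */
    int  samples; /* nonzero once the first measurement is taken */
};

#define RTO_MIN_MS 1000
#define RTO_MAX_MS 60000

/* Fold one RTT measurement into the estimator and recompute RTO. */
static void rto_sample(struct rto_state *s, long r_ms) {
    if (!s->samples) {
        s->srtt = r_ms;             /* first sample seeds both estimates */
        s->rttvar = r_ms / 2;
        s->samples = 1;
    } else {
        long err = r_ms - s->srtt;
        if (err < 0) err = -err;
        s->rttvar = (3 * s->rttvar + err) / 4;   /* beta  = 1/4 */
        s->srtt   = (7 * s->srtt + r_ms) / 8;    /* alpha = 1/8 */
    }
    s->rto = s->srtt + 4 * s->rttvar;
    if (s->rto < RTO_MIN_MS) s->rto = RTO_MIN_MS;
}

/* On timeout: back off exponentially. Karn's algorithm additionally
 * requires ignoring RTT samples from retransmitted segments (not shown). */
static void rto_backoff(struct rto_state *s) {
    s->rto *= 2;
    if (s->rto > RTO_MAX_MS) s->rto = RTO_MAX_MS;
}
```

The `4 * rttvar` term is what makes the timer tolerate the latency variability described above: jittery links inflate the variance estimate, pushing the RTO out before spurious retransmissions can collapse the congestion window.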
HTTP pipelining without head-of-line blocking requires a request-response matching layer that correctly handles partial responses, out-of-order delivery, and server-side pipeline refusal, all of which occur on degraded links.
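The matching layer can be sketched as a FIFO of outstanding requests: the transport may reorder segments, but complete HTTP/1.1 responses on one connection are delivered in request order, so the oldest in-flight request owns the next complete response. The structure and names below are illustrative assumptions, covering normal completion, unsolicited responses, and server-side pipeline refusal via Connection: close.

```c
/* Illustrative FIFO matcher for pipelined HTTP/1.1 (names are assumptions). */
#define MAX_INFLIGHT 8

struct pipeline {
    int ids[MAX_INFLIGHT];  /* outstanding request ids, oldest first */
    int head, count;
};

static void pipe_init(struct pipeline *p) { p->head = 0; p->count = 0; }

/* Record a request written to the connection; -1 when the pipeline is full. */
static int pipe_send(struct pipeline *p, int req_id) {
    if (p->count == MAX_INFLIGHT) return -1;
    p->ids[(p->head + p->count) % MAX_INFLIGHT] = req_id;
    p->count++;
    return 0;
}

/* A complete response was parsed (partial responses are buffered by the
 * parser until then): pop and return the owning request id, or -1 for an
 * unsolicited response, which signals a broken peer. */
static int pipe_on_response(struct pipeline *p) {
    if (p->count == 0) return -1;
    int id = p->ids[p->head];
    p->head = (p->head + 1) % MAX_INFLIGHT;
    p->count--;
    return id;
}

/* Server refused further pipelining ("Connection: close"): every request
 * still outstanding is orphaned and must be resubmitted on a fresh
 * connection. Copies the orphaned ids into out[] and returns the count. */
static int pipe_on_close(struct pipeline *p, int *out) {
    int n = p->count;
    for (int i = 0; i < n; i++)
        out[i] = p->ids[(p->head + i) % MAX_INFLIGHT];
    p->count = 0;
    return n;
}
```

Treating pipeline refusal as a resubmission event rather than an error is what keeps degraded links usable: the requests fall back to one-at-a-time delivery on a new connection instead of being dropped.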
DNS failover timing must be calibrated precisely: too aggressive and healthy links trigger unnecessary failovers, too conservative and the switchover falls outside the application timeout window.