Note: this article refers to Syncplify.me Server! v1.x and v2.x. As of Syncplify.me Server! v3.0, support for high-availability (HA) is built in and much easier to use; you can read more about the v3.0 HA features here.
If you run Syncplify.me Server! in a corporate environment, especially a mission-critical one, you may want to deploy it with high availability (HA) and fault tolerance in mind.
The diagram below, and the explanation that follows, are intended as a first step toward that goal. More complex layouts are certainly possible, but this is a good starting point:
All incoming traffic from the Internet is handled by a firewall that translates (NAT) all SSH traffic, and therefore all SFTP traffic, toward a virtual IP (VIP) configured on a load balancer.
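On a Linux-based firewall, that NAT step could be sketched with iptables roughly as follows; this is an example only, and the interface name and VIP address (203.0.113.10 here) are placeholders, not values from this article:

```shell
# Forward inbound SSH/SFTP (TCP port 22) arriving on the WAN interface
# to the load balancer's virtual IP (VIP). The interface name and all
# addresses below are placeholders -- substitute your own.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 \
         -j DNAT --to-destination 203.0.113.10:22

# Let the forwarded traffic through the FORWARD chain as well
iptables -A FORWARD -i eth0 -p tcp -d 203.0.113.10 --dport 22 -j ACCEPT
```

Commercial firewall appliances expose the same DNAT concept through their own port-forwarding UI.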
The load balancer receives all of this SSH/SFTP traffic and is responsible for translating (NAT) it toward the appropriate back-end server(s).
Depending on the load balancer you run, you can configure a simple active/passive strategy, in which all traffic is forwarded to one server for as long as it is up and redirected to the second server when the first one goes down. More complex strategies are also possible, actually balancing the traffic among N "front" servers and falling back to their equivalent "backup" servers if the front ones stop responding.
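As a concrete illustration of the active/passive case, here is a minimal sketch of what such a strategy might look like in HAProxy (one popular load balancer, used here purely as an example; all server names and addresses are assumptions, not values from this article). The `backup` keyword keeps the second server idle until the first one fails its health check:

```
# haproxy.cfg -- minimal active/passive sketch (example values only)
defaults
    mode    tcp            # SSH/SFTP is plain TCP, not HTTP
    timeout connect 5s
    timeout client  1h     # allow long-lived SFTP sessions
    timeout server  1h

frontend sftp_in
    bind 203.0.113.10:22   # the VIP the firewall NATs to
    default_backend sftp_servers

backend sftp_servers
    server sftp1 10.0.0.11:22 check
    server sftp2 10.0.0.12:22 check backup   # used only if sftp1 is down
```

Removing the `backup` keyword from the second `server` line would turn this into a simple load-balanced pool instead.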
Of course, all SFTP servers in your pool (regardless of the pool's size) must share the same storage. On Windows this is fairly easy to achieve with a DFS volume.
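For reference, a DFS namespace with two root targets might be set up along these lines using the DFS Namespaces PowerShell module on Windows Server; this is only a rough sketch, and every server name, domain, and path below is a placeholder:

```
# Sketch only: create a domain-based DFS namespace that all SFTP servers
# can point their root folders at. All names and paths are placeholders.
New-DfsnRoot -Path "\\example.local\SftpData" `
             -TargetPath "\\SRV1\SftpData" -Type DomainV2

# Add a second server as an additional root target for redundancy
New-DfsnRootTarget -Path "\\example.local\SftpData" `
                   -TargetPath "\\SRV2\SftpData"
```

Keep in mind that a namespace alone does not copy data; you would typically pair it with DFS Replication (or back both targets with the same shared storage) so that every server sees identical content.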