Syncplify.me Server!: 2 million file transfers in 24 hours

This article refers to Syncplify.me Server! v4.0, which – at the time of writing – is still in beta and not yet available for purchase by customers. The purpose of this article is to give our users an idea of the performance level of the upcoming version.

A considerable number of Syncplify's customers are large and very large enterprises, and – as you can guess – some of them need a secure file transfer server that can sustain a tremendous number of sessions over very long periods of time.

Syncplify.me Server! v3.x, with its dedicated HA (High Availability) deployment configuration, was already ahead of the competition in many ways, but some of our customers needed something even more powerful. We have therefore engineered Syncplify.me Server! v4.0 to be our most reliable, heaviest-load-bearing version ever.

Now that the release date of V4 (that's what we call it internally, just "V4") is approaching, the time has come to test just how fast and dependable this new version is. We therefore set up the test environment shown in the picture below.

[Picture: diagram of the test environment]

The 5 server machines (3 for Syncplify.me Server! and 2 for the DB+Storage cluster) were all identical: a single quad-core Xeon E5-4603 CPU, 16 GB RAM, two 256 GB SSD drives (RAID-1), and 64-bit Windows Server 2008 R2. The 11 test clients were virtual machines running inside a VMware vSphere environment, all identical: a single dual-core virtual CPU, 2 GB RAM, and Windows 8.1 Pro.

OK, now that the environment is set up, we need a tool to stress-test our servers, and for that we need to write some scripts. We used our FTP Script! software product to create a set of scripts able to stress-test the servers over all 4 supported protocols: FTP, FTPS, FTPES, and SFTP.

Regardless of the protocol, the core logic of all scripts is the same.

Inside the main (infinite) loop, each script connects to the server pool using its designated protocol, creates a random-size temporary file (up to 10 MB), uploads it, downloads it, deletes it from the server, deletes it from the client, and closes the connection.
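The actual test scripts are written in FTP Script!'s own scripting language; as a rough illustration of the same per-iteration logic, here is a minimal Python sketch of the SFTP variant using paramiko. The host, port, credentials, and file names are placeholder assumptions, and this is an approximation of the cycle described above, not the benchmark script itself.

```python
# Illustrative Python sketch of one iteration of the test cycle (SFTP variant).
# NOT the actual FTP Script! code used in the benchmark; host, port, credentials,
# and paths are placeholder assumptions.
import os
import random
import tempfile

import paramiko


def run_one_session(host="sftp.example.local", port=22,
                    user="testuser", password="secret"):
    """One complete cycle: upload, download, delete remotely, delete locally."""
    # Create a temporary file of random size, up to 10 MB
    size = random.randint(1, 10 * 1024 * 1024)
    fd, local_path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(size))

    remote_name = os.path.basename(local_path)
    downloaded_copy = local_path + ".down"

    transport = paramiko.Transport((host, port))
    try:
        transport.connect(username=user, password=password)
        sftp = paramiko.SFTPClient.from_transport(transport)
        sftp.put(local_path, remote_name)       # upload it
        sftp.get(remote_name, downloaded_copy)  # download it
        sftp.remove(remote_name)                # delete it from the server
        sftp.close()
    finally:
        transport.close()
        # delete it from the client
        os.remove(local_path)
        if os.path.exists(downloaded_copy):
            os.remove(downloaded_copy)
```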

We ran all scripts on all clients simultaneously, and we let them run for exactly 24 hours (a rough sketch of the per-client driver loop is shown after the results below). At the end of the test, we checked the number of sessions successfully handled by each of the 3 servers in our pool. Here's the result:

  • First server: 687,921 sessions
  • Second server: 674,337 sessions
  • Third server: 679,087 sessions

Total number of sessions handled in 24 hours: 2,041,345
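For completeness, here is a hypothetical sketch of the per-client driver loop: it repeats the session cycle sketched earlier for exactly 24 hours and keeps a local tally of completed sessions. Note that the figures above were read from the three servers themselves, not from the clients, and the module name below is an assumption.

```python
# Hypothetical per-client driver: run the session cycle for 24 hours and
# count successful iterations. Assumes the earlier sketch is saved as
# stress_session.py on the test client.
import time

from stress_session import run_one_session

TEST_DURATION = 24 * 60 * 60  # 24 hours, in seconds


def main():
    completed = 0
    deadline = time.monotonic() + TEST_DURATION
    while time.monotonic() < deadline:
        try:
            run_one_session()
            completed += 1
        except Exception:
            pass  # a failed session simply isn't counted
    print("sessions completed by this client:", completed)


if __name__ == "__main__":
    main()
```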

At the end of the test there were no zombie sessions in memory, all cores of all server CPUs were back down to zero percent usage, and the average in-use RAM was in the neighborhood of 24 MB per instance, which indicates no memory leaks and stable, effective memory management.
