StarWind NVMeoF


Who is StarWind

StarWind is a software company that’s becoming a hardware company.  They are focused on SMB and ROBO, bringing those customers affordable enterprise-grade hyper-converged appliances.  Before making this shift from software to hardware, they wrote some pretty cool tools like the V2V Converter and the Deduplication Analyzer.  These tools, plus others, are free to use.

Now that StarWind is focusing on hyper-converged infrastructure, they came to Tech Field Day to present their findings on NVMe over Fabrics (NVMe-oF).  They ran three different demos on the same hardware: a SuperMicro server with 40 CPU cores, 40GB of RAM, a Mellanox NIC, and an Intel Optane SSD.  Below are the demos and the findings for each.

Demo 1

The first demo used a target–initiator configuration.  The target was Windows Server 2016 installed as a bare-metal OS on the 40-core system, and the initiator was a CentOS 7.5 VM.  The target could not use SPDK for the StarWind NVMe-oF software, because SPDK doesn’t exist for Windows.  So StarWind developed their own SPDK equivalent for Windows to get good performance.
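For context, here is a minimal sketch (not StarWind’s tooling) of how a Linux initiator like the CentOS 7.5 VM typically discovers and connects to an NVMe-oF target using nvme-cli.  The address and subsystem NQN below are placeholders, and RDMA over the Mellanox NIC is assumed.

import subprocess

target_addr = "192.168.1.10"                 # hypothetical target IP
target_nqn = "nqn.2018-01.example:optane1"   # hypothetical subsystem NQN

# Ask the target what subsystems it exposes (4420 is the default NVMe-oF port).
subprocess.run(["nvme", "discover", "-t", "rdma",
                "-a", target_addr, "-s", "4420"], check=True)

# Connect to the subsystem; the remote namespace then appears locally as /dev/nvmeXnY.
subprocess.run(["nvme", "connect", "-t", "rdma",
                "-a", target_addr, "-s", "4420", "-n", target_nqn], check=True)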

StarWind ran some benchmarks, and for 4K random writes the output was 518,000 IOPS, 2,025 MB/s of bandwidth, and 0.615 ms of latency.  This configuration also ate up 8 CPU cores.  Not good when you consider this could be a host with other workloads on it.
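As a rough sketch of how you could reproduce a 4K random-write test like this one with fio (not StarWind’s exact benchmark), the snippet below runs fio against a placeholder NVMe device and pulls IOPS, bandwidth, and latency out of the JSON output.  The device path, queue depth, and job count are assumptions.

import json
import subprocess

result = subprocess.run(
    [
        "fio", "--name=randwrite-4k", "--filename=/dev/nvme1n1",
        "--rw=randwrite", "--bs=4k", "--ioengine=libaio", "--direct=1",
        "--iodepth=32", "--numjobs=4", "--time_based", "--runtime=60",
        "--group_reporting", "--output-format=json",
    ],
    capture_output=True, text=True, check=True,
)

# With group_reporting, the aggregated write stats live in the first job entry.
write = json.loads(result.stdout)["jobs"][0]["write"]
print(f"IOPS:      {write['iops']:.0f}")
print(f"Bandwidth: {write['bw'] / 1024:.0f} MB/s")          # fio reports bw in KiB/s
print(f"Latency:   {write['clat_ns']['mean'] / 1e6:.3f} ms")  # field names vary slightly by fio version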

Demo 2

The second demo involved a similar target–initiator configuration, but this time the target host ran Windows Hyper-V, with the StarWind NVMe-oF target software installed in a CentOS 7.5 virtual machine running inside Hyper-V.  This VM had 4 vCPUs and 8GB of RAM.  It was able to see the Intel Optane SSD because Hyper-V was configured to pass the SSD through to the VM.

[Image: StarWind demo 2 - architecture]

The target device was configured through StarWind’s new tool, StarWind Stack, where the target was created with a point-and-click GUI.

The demo then ran the same benchmark, and for 4K random writes the output was 551,000 IOPS, 2,152 MB/s of bandwidth, and 0.578 ms of latency.  The CentOS VM sitting on top of Hyper-V was only using 2 vCPUs.  This is a dramatic change from the bare-metal Windows Server, but the CentOS VM also has the proper SPDK installed in it.

Demo 3

Since the CentOS target is a virtual machine, it can be ported over to any other hypervisor.  So the StarWind team put that same VM, with the same specs, on a VMware ESXi host.  Besides the change of hypervisor from demo two, demo three uses SR-IOV to give the VM direct access to the Mellanox network card.
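On ESXi the SR-IOV setup is done through vSphere, but as a generic illustration of what SR-IOV looks like on a Linux host (not the demo’s configuration), the short sketch below reads the sysfs counters that show how many virtual functions a NIC supports and has enabled.  The interface name is a placeholder.

from pathlib import Path

iface = "ens1f0"  # hypothetical Mellanox port name
dev = Path(f"/sys/class/net/{iface}/device")

total = int((dev / "sriov_totalvfs").read_text())  # VFs the NIC can expose
active = int((dev / "sriov_numvfs").read_text())   # VFs currently enabled
print(f"{iface}: {active}/{total} SR-IOV virtual functions enabled")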

[Image: StarWind demo 3 - architecture]

The final result for demo 3 was essentially the same as demo 2: 551K IOPS, 2,152 MB/s of bandwidth, and 0.587 ms of latency.

Thoughts

StarWind showed how they are able to push the limits of a single Intel Optane SSD using their own software inside a VM.  That VM can be migrated to two of the top enterprise hypervisors, and with a single Intel Optane SSD, a single node with 40 CPU cores can handle 550K IOPS.  The presentations did not cover recoverability or consistency, but I believe StarWind will get there.  You can see how this type of technology brings companies and customers closer to composable infrastructure.  If this technology can scale, the SAN might have just met its match.

StarWind did say their code will be available for Windows and VMware later this year. So keep your eyes peeled for this release.

