Recently, I’ve been testing NetApp Private Storage (NPS) for AWS more extensively. The idea behind NPS for AWS is to put a NetApp controller into a colocation facility, e.g., Equinix, and connect it directly to the AWS cloud. Using AWS Direct Connect, the NetApp storage can be mounted directly on an EC2 instance. This setup supports NFS, iSCSI, and CIFS and is available in all public AWS regions worldwide.
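
For example, once the Direct Connect link is up, an NFS export on the controller can be mounted from the EC2 instance like any other NFS share. This is just a sketch; the export path (/vol/nps_vol) and mount point are assumptions for illustration:

# Export path and mount point are assumptions; adjust for your setup
sudo mkdir -p /mnt/nps
sudo mount -t nfs 192.168.102.2:/vol/nps_vol /mnt/nps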


AWS Direct Connect links a nearby colocation facility to a single AWS region and to all of that region’s availability zones. Since those availability zones are geographically distributed, we expect different latencies between the NetApp controller in the colocation facility and EC2 instances running in different availability zones. In this case, we tested from Equinix DC2, which is located in Ashburn, VA, and started an instance in each availability zone within the us-east-1 region (N. Virginia). To measure the round-trip time between each EC2 instance and the controller (192.168.102.2), we used:

ping -c 60 -i 5 192.168.102.2
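
With 60 probes at a 5-second interval, each run takes roughly five minutes. To repeat the measurement and keep only the average RTT, a small wrapper like the following can help (the log file name is an assumption; the awk field refers to the avg value in ping’s min/avg/max/mdev summary line):

# Run the same 60-probe test and extract the average RTT from the summary line
# "rtt min/avg/max/mdev = 0.890/1.036/1.273/0.074 ms" -> field 5 when split on '/'
ping -c 60 -i 5 192.168.102.2 | tail -1 | awk -F'/' '{print "avg rtt:", $5, "ms"}' | tee -a rtt.log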

While the results aren’t too surprising, we still saw slight differences in the average round-trip time:

[Figure: round-trip times per availability zone, summarized from the ping statistics below]

Availability zone    rtt min/avg/max/mdev (ms)
us-east-1b           0.890 / 1.036 / 1.273  / 0.074
us-east-1c           1.083 / 1.227 / 1.398  / 0.070
us-east-1d           1.432 / 1.800 / 11.430 / 1.284
us-east-1e           1.220 / 1.448 / 6.157  / 0.646

Conclusion

The test only ran for five minutes, so to get more representative numbers it would be worth re-running it over a longer period and at different times of the day. Still, it shows that the average round-trip time between a specific availability zone and the NetApp storage in the colocation facility can differ significantly.
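
A minimal way to run such a longer measurement, sketched below, is to take a short ping sample every hour and append it to a log; the sample size, interval, and file name are assumptions:

# Sketch: sample the RTT once per hour over a day and log each summary line
for i in $(seq 1 24); do
  date >> rtt.log
  ping -c 30 -i 2 192.168.102.2 | tail -1 >> rtt.log
  sleep 3600
done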

Does 1.0 ms vs. 1.5 ms latency make a big difference? In many cases, most likely not. But for high-performance applications, 0.5 ms can matter. The bottom line is: for latency-sensitive applications, it can make sense to spend a little time measuring latencies between the colocation facility and the individual availability zones.


Follow me on Twitter: @clemenssiebler


Ping Snippets

#us-east-1b
[ec2-user@ip-172-16-101-91 ~]$ ping -c 60 -i 5 192.168.102.2
PING 192.168.102.2 (192.168.102.2) 56(84) bytes of data.
64 bytes from 192.168.102.2: icmp_seq=1 ttl=251 time=1.04 ms
...
64 bytes from 192.168.102.2: icmp_seq=60 ttl=251 time=1.06 ms

--- 192.168.102.2 ping statistics ---
60 packets transmitted, 60 received, 0% packet loss, time 295311ms
rtt min/avg/max/mdev = 0.890/1.036/1.273/0.074 ms

#us-east-1c
[ec2-user@ip-172-16-102-206 ~]$ ping -c 60 -i 5 192.168.102.2
PING 192.168.102.2 (192.168.102.2) 56(84) bytes of data.
64 bytes from 192.168.102.2: icmp_seq=1 ttl=251 time=1.27 ms
...
64 bytes from 192.168.102.2: icmp_seq=60 ttl=251 time=1.13 ms

--- 192.168.102.2 ping statistics ---
60 packets transmitted, 60 received, 0% packet loss, time 295323ms
rtt min/avg/max/mdev = 1.083/1.227/1.398/0.070 ms

#us-east-1d
[ec2-user@ip-172-16-103-126 ~]$ ping -c 60 -i 5 192.168.102.2
PING 192.168.102.2 (192.168.102.2) 56(84) bytes of data.
64 bytes from 192.168.102.2: icmp_seq=1 ttl=251 time=1.50 ms
...
64 bytes from 192.168.102.2: icmp_seq=60 ttl=251 time=1.52 ms

--- 192.168.102.2 ping statistics ---
60 packets transmitted, 60 received, 0% packet loss, time 295344ms
rtt min/avg/max/mdev = 1.432/1.800/11.430/1.284 ms

#us-east-1e
[ec2-user@ip-172-16-98-240 ~]$ ping -c 60 -i 5 192.168.102.2
PING 192.168.102.2 (192.168.102.2) 56(84) bytes of data.
64 bytes from 192.168.102.2: icmp_seq=1 ttl=251 time=1.27 ms
...
64 bytes from 192.168.102.2: icmp_seq=60 ttl=251 time=1.30 ms

--- 192.168.102.2 ping statistics ---
60 packets transmitted, 60 received, 0% packet loss, time 295326ms
rtt min/avg/max/mdev = 1.220/1.448/6.157/0.646 ms
