In 2015, organisations will face a number of events that will force a re-evaluation of key business applications: the applications that define the business, are critical to the business, and drive revenue and competitive differentiation.

 

[Image: Haswell.jpg]
New server technology is also on the menu this year, driven by the availability of new processors in the 4th Generation Intel Haswell family. The tick-tock of processor development continues, with the announced top-end Haswell processors sporting 18 cores. In a two-socket next-generation server, that's a total of 36 cores.

 

It will come as no surprise that enterprise application software is often licensed by the processor core. In other words, the cost of the software is X dollars per core, where X is often tens of thousands of dollars. On a next-generation server with 36 cores, this equates to a serious investment.

 

One thing is therefore clear: processor utilisation is a key factor when measuring business value and return on investment. Sadly, for many organisations it is not treated that way, and analysts have reported average industry server utilisation of between 20% and 30%. In many organisations these licence costs extend to many millions of dollars. If your processor utilisation is around 30%, then you have the potential to make significant improvements and, at the same time, big savings.
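To make the arithmetic concrete, here is a minimal sketch of what core-based licensing costs look like and what consolidating onto fewer, better-utilised servers could save. The per-core fee, server count and utilisation figures are assumptions chosen purely for illustration, not quotes:

```python
import math

# Illustrative only: per-core fee, server count and utilisation targets are assumptions.
CORES_PER_SERVER = 36           # two-socket server with 18-core Haswell parts
LICENCE_PER_CORE = 20_000       # "X dollars" per core

servers_today = 20              # assumed size of the current estate
utilisation_today = 0.30        # ~30%, in line with the analyst figures above
utilisation_target = 0.90       # what the best-run organisations achieve

# Licence cost of the estate as it stands.
cost_today = servers_today * CORES_PER_SERVER * LICENCE_PER_CORE

# The same useful work at higher utilisation needs proportionally fewer servers.
servers_needed = math.ceil(servers_today * utilisation_today / utilisation_target)
cost_after = servers_needed * CORES_PER_SERVER * LICENCE_PER_CORE

print(f"Licence cost today: ${cost_today:,}")                    # $14,400,000
print(f"Servers needed at 90% utilisation: {servers_needed}")    # 7
print(f"Licence cost after: ${cost_after:,}")                    # $5,040,000
print(f"Potential saving:   ${cost_today - cost_after:,}")       # $9,360,000
```

The exact numbers will vary with your estate and your vendor's price list, but the shape of the calculation does not: every core you stop paying for is money back in the budget.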

 

There may be many reasons for low server utilisation. On the other side of the coin, there are notable organisations achieving 90%+ utilisation. They have typically addressed everything that sits in the path between the raw data and the delivery of results. The key word is latency: the delay between a request to a system and the desired outcome. It's what governs how quickly your progress bar or hourglass responds. We all know how we react when the progress bar stops or slows down; we go somewhere else where we will get the response we desire. In a previous blog I discussed the psychology of the way we react to delay. This alone can be the difference between business success and failure.


This year will also see the end of life and end of support for several key business applications, such as Windows Server 2003 and SQL Server 2005, along with opportunities to refresh customer-facing databases and CRM systems such as Oracle and SAP. Staying with these applications, the do-nothing scenario, is probably not an option, as the business risk is too high. The transition to new business applications will also drive the adoption of new server technology, while this new generation of applications will be able to spawn more threads and utilise processor resources more effectively. Again, trying to run new business applications on older server technology is likely not to work well, due to the imbalance in other resources I spoke about earlier.

 

What about storage? I have seen many projects fail because, in updating the servers and business applications, the storage was left alone. Surprise, surprise: the server was left looking at its watch, waiting for a response from an older generation of storage that was unable to fulfil the requests. Latency in action. Lots of it. The outcome:

 

ALL COMPUTERS WAIT AT THE SAME SPEED

 

They do; think about it. The trick is managing the length of the wait. Flash storage makes a dramatic difference to this equation. Its ability to service vast numbers of I/O requests concurrently at consistently low latency is the key metric. The important word here is consistent.
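To illustrate why consistency matters more than a good average, here is a small sketch comparing two invented latency profiles: one that is usually quick but occasionally stalls, and one that is marginally slower but steady. The distributions and figures are made up purely to make the point:

```python
import random
import statistics

random.seed(42)

# Invented latency distributions, in milliseconds, purely for illustration.
# "Spiky": usually very fast, but ~2% of requests stall (think cache misses or a
# busy disk tier behind the controller).
spiky = [0.3 if random.random() > 0.02 else 20.0 for _ in range(100_000)]

# "Consistent": a touch slower on average, but with no long tail.
consistent = [random.uniform(0.6, 1.0) for _ in range(100_000)]

for name, samples in (("spiky", spiky), ("consistent", consistent)):
    p99 = statistics.quantiles(samples, n=100)[98]   # 99th percentile
    print(f"{name:10s}  mean={statistics.mean(samples):5.2f} ms   p99={p99:6.2f} ms")

# The spiky profile wins on the mean yet loses badly at the 99th percentile,
# and the 99th percentile is what your users (and your waiting CPUs) feel.
```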

 

[Image: EF560.jpg]


Today NetApp launched its third-generation all-flash array, the EF560. Designed for extreme performance on business-critical workloads, this performance storage technology can deliver against goals for improved business value, helping you achieve higher processor utilisation and potentially reducing the number of servers required to service your work. Fewer servers mean fewer cores, which means the potential for huge cost savings in licence fees. I hear you ask: where is the evidence?

 

[Image: Rocket.jpg]

 

Well, today we are also announcing the EF560 results from the Storage Performance Council (SPC) SPC-1 benchmark. The fact is that the EF560 achieved the lowest SPC-1 Price-Performance of any all-flash array with an average response time under 1 millisecond, at $0.54/SPC-1 IOPS. In other words: absolute, consistent low latency, bandwidth and IOPS, critical to production databases and analytics.
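For anyone unfamiliar with the metric, SPC-1 Price-Performance is simply the total price of the tested configuration divided by the SPC-1 IOPS it sustained. A quick sketch of the arithmetic; the price and IOPS values below are placeholders, not the audited EF560 figures (those are in the SPC results linked at the end of this post):

```python
# SPC-1 Price-Performance = total tested system price / SPC-1 IOPS.
# Placeholder figures for illustration; see the SPC full-disclosure report for
# the audited EF560 numbers.
total_system_price_usd = 135_000      # assumed price of the tested configuration
spc1_iops = 250_000                   # assumed sustained SPC-1 IOPS

price_performance = total_system_price_usd / spc1_iops
print(f"${price_performance:.2f} per SPC-1 IOPS")   # $0.54 per SPC-1 IOPS
```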

 

The question I often get from doubters and our competition is: "but it doesn't have inline compression and dedupe". Let's be clear: if you are planning to host your customer-facing, revenue-generating production database on flash, DO NOT BE DUPED BY DEDUPE.

 

I am not saying this applies in all cases, but really? Have you looked at what goes on at the block or transaction level in a database? How uniqueness is stamped into every block by the generation of the SCN and the tailcheck? What ratio do you think you are going to achieve? Certainly not the overblown, wild ratios I've seen stated in some marketing blurb. Sorry, but the overhead of dedupe, hash generation and comparison, is not going to cut it at the storage controller when my goal is extreme performance. Neither is compression. I may see a useful compression ratio from the data in my database, but let the database application do it. After all, it is format-aware and can achieve higher compression ratios while also minimising storage network traffic.
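To see why dedupe ratios on a live production database tend to disappoint, here is a toy sketch. It mimics a database-style block layout in which every block carries its own address and the SCN of its last change in the header, with a matching tailcheck at the end, and then counts how many blocks hash to the same fingerprint, the way an inline dedupe engine would. The block format is a deliberate simplification invented for illustration, not the real on-disk structure:

```python
import hashlib

BLOCK_SIZE = 8192          # a typical database block size
NUM_BLOCKS = 10_000

def make_block(block_number: int, scn: int, payload: bytes) -> bytes:
    """Toy block: header (block address + SCN), row payload, SCN-derived tailcheck.
    A deliberate simplification of a real block format, for illustration only."""
    header = block_number.to_bytes(8, "big") + scn.to_bytes(8, "big")
    tail = (scn & 0xFFFFFFFF).to_bytes(4, "big")
    body = payload.ljust(BLOCK_SIZE - len(header) - len(tail), b"\x00")
    return header + body[: BLOCK_SIZE - len(header) - len(tail)] + tail

# Even with identical row data in every block, the per-block metadata makes each
# block unique, so a block-level dedupe engine has nothing to share.
payload = b"customer row data " * 100
fingerprints = {
    hashlib.sha256(make_block(n, scn=1_000_000 + n, payload=payload)).hexdigest()
    for n in range(NUM_BLOCKS)
}
print(f"{NUM_BLOCKS} blocks -> {len(fingerprints)} unique fingerprints")  # 10000 -> 10000
```

That is a dedupe ratio of 1.0:1 on data that is, at the row level, completely repetitive. Real workloads will vary, but the per-block metadata is exactly why the headline ratios rarely materialise on a transactional database.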

 

Then comes: "what about database copies and backups? Doesn't dedupe help me here?" Possibly, but do you want backup copies on your production flash storage platform? Likewise, do you want development copies there? No, you probably don't, and if you do need a point-in-time copy, use Snapshots. Better still, the EF560 has both asynchronous and synchronous replication capability. You can further reduce costs by replicating to a traditional storage platform such as the E5600 hybrid storage array, launched today and designed for enterprise SAN applications.

 

More on this launch, and news of a new EF560-based Epic story, in the next blog.

 

In summary, here are the top metrics and high-availability features of the EF560:

 

• Burst I/O rate: 900,000 IOPS
• Sustained I/O rate: 650,000 IOPS
• Sustained throughput: up to 12GB/s
• Maximum drives: 120
• Maximum raw capacity: 192TB
• Drive types supported: 2.5″ SSD 400GB, 800GB, 800GB (FDE), 1.6TB
• Base system: 2U
• Expansion shelf: 2U
• System ECC memory: 24GB
• I/O interface options: (8) 16Gb FC, (8) 12Gb SAS, (8) 10Gb iSCSI, or (4) 56Gb InfiniBand
• Management software: SANtricity Storage Manager 11.20

High-availability features

  • Dual active controller with automated I/O path failover
  • Dynamic Disk Pools (DDP) and RAID levels 0, 1, 3, 5, 6, and 10
  • Redundant, hot-swappable storage controllers, disk drives, power supplies, and fans
  • Automatic DDP or RAID rebuild following a drive failure
  • Mirrored data cache with battery backup and de-stage to flash
  • SANtricity proactive drive health monitoring identifies problem drives before they create issues
  • Greater than 99.999% availability

More information can be found here:

 

• EF560 & E5600 Press Release
• SPC-1 Top 10 Price-Performance Results
• EF560 Datasheet
• E5600 Datasheet
• Flash Storage Guide


Laurence James