To the best of my knowledge, we do not publish any "generic" recommended latency values for each protocol.
Latency is an independent variable in various performance equations (e.g., Little's Law). What may be an acceptable latency for one customer may not be acceptable for another.
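Since Little's Law comes up here, a quick sketch may help make the point concrete. The figures below are hypothetical; the law itself just says that the average number of in-flight requests equals the arrival rate times the average latency.

```python
# Little's Law: L = lambda * W
# average in-flight requests = arrival rate (IOPS) * average latency (seconds)

def outstanding_ios(iops: float, latency_ms: float) -> float:
    """Average number of in-flight I/Os for a given IOPS rate and latency."""
    return iops * (latency_ms / 1000.0)

# Hypothetical example: 10,000 IOPS at 2 ms average latency
# keeps about 20 requests in flight on average.
print(outstanding_ios(10_000, 2.0))
```

The same latency figure can therefore be fine for one workload and a bottleneck for another, depending on the request rate the application drives.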
Our generalized best practice here is to refer to the latency requirements for applications that one wants to run. For example, Microsoft Exchange suggests < 20ms response time.
In lieu of specific application latency suggestions, one may establish their own baselines by trending latency response times from each protocol. From there, either perform a statistical analysis to determine what is acceptable in the environment or "eye ball it" based on end-customer feedback.
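As a rough illustration of the "trend, then analyze" approach, here is one way to summarize a collected latency series and derive an alert threshold. The sample data and the mean-plus-three-standard-deviations rule are my own assumptions, not a NetApp recommendation; substitute whatever statistic fits your environment.

```python
import statistics

def latency_baseline(samples_ms):
    """Summarize trended latency samples and derive a simple alert threshold."""
    mean = statistics.mean(samples_ms)
    stdev = statistics.stdev(samples_ms)
    # Approximate 95th percentile via the sorted-sample index
    p95 = sorted(samples_ms)[int(0.95 * (len(samples_ms) - 1))]
    # One common rule of thumb: alert when latency exceeds mean + 3 stdev
    return {"mean": mean, "stdev": stdev, "p95": p95,
            "threshold": mean + 3 * stdev}

# Hypothetical per-protocol samples collected over a trending period (ms)
nfs_samples = [2.1, 2.4, 1.9, 2.2, 2.8, 2.0, 2.3, 2.5, 2.2, 2.1]
baseline = latency_baseline(nfs_samples)
```

Run this per protocol (NFS, FCP, iSCSI, CIFS) over a representative period, and the resulting thresholds become your environment-specific baselines.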
I hope this helps.
Thank you very much for the response.
If that is the case, NetApp might have documented some test results for acceptable latency values for some specific applications that NetApp storage supports. In my case it is VMware (NFS, FCP, iSCSI).
Can you provide some information specific to VMware on NFS, FCP, and iSCSI?
Also, any info about user data with CIFS?
I don't think there is such a thing as "recommended (baseline) latency values for ... protocols". In general, latency has more to do with disk spindles or the type of disks than with the FCP or iSCSI protocols. And I think the discussion of latency needs some context, e.g., how many random (or sequential) IOPS a disk can sustain at a given latency, regardless of FCP, iSCSI, etc.
Hope that helps,
I'm not completely sure of the scope that Mr. Wei had in mind in his response. Data ONTAP calculates and exposes latencies at many levels, the protocol level included. The latencies shown in the management tools include whatever disk time was incurred by a request (or set of requests, since these are exposed as averages).
So to directly answer your question "what am I being shown": you are being shown the measured latency for protocol operations, covering the time between the request being received and the response being sent out on the wire (with some small and rare exceptions).
In other words, you are being shown what you probably expect you were being shown.
This really goes back to the original question and my initial response. "It depends" is a common response to this question. In reality, the "it depends" part is defined by application tolerance and user tolerance. For example, Microsoft Exchange and the JetStress application expect response times to never exceed 20ms to be considered "good".
The "it depends" also has a factor of operation size embedded in it. If I were to declare and alert that 5ms was "bad", does that really have any value without also understanding the operation size? If it's a 4K operation and it takes 5ms, maybe that's bad in your environment... but what if the requested op was a 1 MB op that took 6ms? Is that bad?
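To put a number on that operation-size point, here is a quick back-of-the-envelope comparison. The helper and figures are illustrative only; they just compute how much data a single outstanding operation of a given size moves per second at a given latency.

```python
def per_op_throughput_mb_s(op_size_bytes: int, latency_ms: float) -> float:
    """Data moved per second by one outstanding operation of the given size."""
    return (op_size_bytes / (1024 * 1024)) / (latency_ms / 1000.0)

# The two hypothetical cases from the discussion:
small = per_op_throughput_mb_s(4 * 1024, 5.0)     # 4 KiB op at 5 ms
large = per_op_throughput_mb_s(1024 * 1024, 6.0)  # 1 MiB op at 6 ms
print(small, large)
```

The 1 MiB operation at 6ms moves data orders of magnitude faster than the 4 KiB operation at 5ms, which is why a raw latency threshold without operation size tells you very little.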
I'm not intending to answer in circles; it just isn't cut and dried, and it greatly depends on your environment.
In lieu of a specific application requirement, I would say that expecting read times < 20ms for operations that aren't huge (for some definition of "huge") is a reasonable starting expectation. For writes to a NetApp system, I'd expect the nominal response time to be well under that (5ms or less?). These starting figures assume a healthy system that is not being overrun with more work than it was designed or sized to handle. These expectations might need to be adjusted up or down depending on what is normal for the environment (e.g., "normal" as measured by running for a period of time and averaging out the daily/weekly spikes [shift on, shift off, lunch, etc.]).
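The "average out the daily/weekly spikes" idea can be sketched with a simple moving average; smoothing a latency series this way makes the steady-state "normal" easier to see past shift changes and lunch-hour dips. The window size and data here are arbitrary assumptions.

```python
from collections import deque

def rolling_mean(samples, window):
    """Smooth a latency series with a simple moving average to find 'normal'."""
    buf = deque(maxlen=window)
    smoothed = []
    for s in samples:
        buf.append(s)
        smoothed.append(sum(buf) / len(buf))
    return smoothed

# Hypothetical hourly latency averages (ms) with a lunchtime dip and a spike
hourly = [3.0, 3.2, 3.1, 1.5, 3.3, 6.0, 3.1, 3.0]
print(rolling_mean(hourly, window=4))
```

Comparing current latency against the smoothed series, rather than against raw samples, avoids alerting on every transient spike.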
Parting thought... given two applications, A and B: Application A may be completely content with a response time of 30 ms for an 8k operation, as its work is more batch-oriented than directly user-interactive. However, Application B might expect a worst-case response time of 10 ms for an 8k operation, because that IO response feeds directly back to the user in some form of UI.