Back in August I posted an entry
regarding NetApp's Performance Acceleration Module (PAM). To summarize, the PAM card is a 16GB DRAM-based read cache integrated with ONTAP that shipped around the middle of 2008. Depending on the platform, up to 5 modules can be installed, for a maximum of 80GB of intelligent read cache.
The caching policies ONTAP implements in conjunction with the PAM(s) are designed to distinguish random from sequential reads and retain the random blocks in cache, minimizing the back-end disk reads that add latency.
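To make the idea concrete, here is a minimal sketch of a read cache that admits random reads but bypasses sequential ones. This is purely illustrative: the block numbering, cache size, and the simple "next block = sequential" heuristic are my assumptions, not NetApp's actual caching policy.

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache that bypasses sequential reads.

    Illustrative sketch only: detects a read as 'sequential' when it
    targets the block immediately after the previous read, and declines
    to cache such blocks so they cannot evict the random blocks the
    cache is meant to protect.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # block number -> cached flag, in LRU order
        self.last_block = None

    def read(self, block):
        # Heuristic: a read of the block right after the previous one
        # is treated as part of a sequential stream.
        sequential = (self.last_block is not None
                      and block == self.last_block + 1)
        self.last_block = block

        if block in self.cache:              # cache hit
            self.cache.move_to_end(block)    # refresh LRU position
            return "hit"

        # Miss: the block would be fetched from disk. Only random
        # (non-sequential) reads are admitted into the cache, since
        # disks service sequential streams efficiently anyway.
        if not sequential:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)   # evict least recently used
            self.cache[block] = True
        return "miss"
```

A random re-read of a block becomes a hit, while a sequential scan passes through without polluting the cache.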
In that article, I also said that an additional benefit of PAM is that it can potentially deliver a significant reduction in the number of disks and shelves needed to support the same application(s), reducing costs while still providing great response times.
Today, we announced results using the new SPEC SFS 2008 benchmark to test 3 configurations. For more on the specifics of the SPEC SFS 2008 benchmark, Mike Eisler has a couple of great posts here.
The individual benchmark results can be found below:
However, through the good work of a colleague of mine, the results from the 3 configurations have been summarized in the chart below:
!http://media.netapp.com/images/blogs-6a00d8341ca27e53ef0111684400cb970c-pi.jpg|height=327|style=border-right: 0px; border-top: 0px; border-left: 0px; border-bottom: 0px|alt=SPECSFS|width=450|src=http://media.netapp.com/images/blogs-6a00d8341ca27e53ef0111684400cb970c-pi.jpg|border=0!
Based on these results, the conclusion one can easily arrive at is:
Starting with Data ONTAP 188.8.131.52, we've provided the ability, through the collection of Predictive Cache Statistics (PCS), to determine whether specific workloads will benefit from the deployment of PAM modules, and to what degree.
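The idea behind this kind of prediction can be sketched with a "ghost cache": track only the block addresses a larger cache would have held, without storing any data, and count how many reads in a trace would have hit it. This is a hypothetical illustration of the concept, not the actual PCS implementation; the function name and LRU policy are my assumptions.

```python
from collections import OrderedDict

def predicted_hit_rate(trace, cache_blocks):
    """Estimate the hit rate a read cache of a given size would achieve.

    Illustrative 'ghost cache' sketch: only block addresses are tracked
    (no data), so the cost of simulating a large hypothetical cache is
    just the metadata, which is the idea behind predictive statistics.
    """
    ghost = OrderedDict()   # block address -> present, in LRU order
    hits = 0
    for block in trace:
        if block in ghost:
            hits += 1
            ghost.move_to_end(block)     # refresh LRU position
        else:
            if len(ghost) >= cache_blocks:
                ghost.popitem(last=False)   # evict least recently used
            ghost[block] = True
    return hits / len(trace) if trace else 0.0
```

Running such a simulation against a real workload's read trace, at several candidate cache sizes, shows whether adding cache would raise the hit rate enough to justify the modules.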