We have an Oracle 10g setup on FAS3240 active/active controllers (FC SAN) serving Solaris SPARC hosts. From the filer's perspective, CPU usage is at 20-30% and IOPS are relatively low.
But from the Solaris host side we are seeing high service times, and the sar report shows high utilization on the NetApp LUNs. It is a Veritas cluster, too.
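For anyone comparing numbers on the host side, the per-LUN service times come from `iostat -xn` (the `asvc_t` column, in ms) or `sar -d`. A small sketch below shows the commands and, since live output obviously can't be reproduced here, parses a single illustrative `iostat -xn` line (the device name and values are made up for the example):

```shell
# On the Solaris host (live usage, not run here):
#   iostat -xn 5 3    # asvc_t = average service time per device, in ms
#   sar -d 5 3        # avserv and %busy per device
#
# Illustrative: pull device name, asvc_t, and %b out of one iostat -xn line.
# Column order: r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
line="  120.0   35.0  960.0  280.0  0.0  2.1   0.0  13.6   0  85 c0t60A98000646E2D5A6B5A2Fd0"
echo "$line" | awk '{printf "%s asvc_t=%sms busy=%s%%\n", $11, $8, $10}'
```

A sustained `asvc_t` well above the filer-reported latency usually points at the path between host and filer (queue depths, multipathing, HBA settings) rather than the filer itself.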
Has anyone faced similar issues, or can anyone offer suggestions?
Adding a few more specs:
It is running Data ONTAP 8.0.1 in 7-Mode, with active/active controllers sharing the Oracle LUNs/volumes.
Per Performance Advisor reports, average FCP read latency is 3 ms and FCP write latency is about 12 ms on both controllers.
No CIFS; FC only.
Has anyone come across such an issue? Any input?
It is hard to tell from your description what might be going on. Read latency certainly looks good, and write latency could be better but isn't necessarily that bad. To really understand it, we would need to look at things like FCP partner path configuration, LUN alignment, RAID group layout, etc.
Your best option would be to open a performance case. With a perfstat, we should be able to see whether anything strange is going on.
I did find a TR that you might find interesting: http://www.netapp.com/us/library/technical-reports/tr-3850.html
If your filer is sending ASUPs, email me (don't post) your controller name and company, and I'll take a quick look.
Did you manage to resolve this issue?
We are running Oracle 10.0.2.3 over NFS on a brand-new FAS2220 (only purchased in Nov 2011) with the same ONTAP version, and we are having the same issue.
We migrated the Oracle data/log/redo/control and temp files to local disk storage, brought the database up, and ran a simple query, which took 6 seconds. After migrating those same files to the NetApp storage, the same query took 42 seconds. This simple SELECT statement only fetches 50 records from the database.
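Not from the post above, but a common culprit for exactly this symptom with Oracle over NFS on Solaris is the mount options. NetApp's Oracle-on-NFS guidance generally recommends hard TCP mounts with direct I/O and attribute caching disabled; a sketch of an /etc/vfstab entry along those lines (the filer name and mount point are placeholders, not the poster's config):

```text
# /etc/vfstab — illustrative only; "filer" and the paths are placeholders
# device        fsck  mount point   type  pass  boot  options
filer:/vol/oradata  -  /u02/oradata  nfs   -     yes   rw,bg,hard,rsize=32768,wsize=32768,vers=3,proto=tcp,forcedirectio,noac
```

If the volume is currently mounted with default options (no `forcedirectio`, default caching), query times can degrade badly for database workloads, so it is worth checking before digging deeper.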
We have opened a case with NetApp and have run a load of perfstats, which they are currently investigating.
This sounds like exactly the same issue we are experiencing.