Anyone have any implementation guide for dNFS on RHEL?
Specifically, are the mount options required on the Linux host in /etc/fstab or /etc/mtab?
So far, from what I have read in the Oracle documents, these mount options are not relevant.
From TR 3633:
Direct NFS Configuration
The first step to using the Direct NFS client is to make sure that all of the Oracle Database files residing on the NetApp storage system volumes are mounted using kernel NFS mounts. Direct NFS does not require any special NFS mount options. However, it needs the rsize and wsize NFS mount options to be set to 32768 (32K) as the max value of DB_BLOCK_SIZE can be 32K.
See the recommended NFS mount options for Oracle Databases in http://kb.netapp.com/support/index?page=content&id=3010189.
It says no special NFS mount options are needed, but then it references the recommended NFS mount options for Oracle... so which one should I follow?
And where is the ADR HOME located?
Thanks for reaching out to NetApp Community.
Yes, you must follow the KB article to set up the mount points and mount the volumes with the required options. Whether or not you are using dNFS, the mount points must be created and the volumes mounted with the OS commands. dNFS uses oranfstab to start operating on the database files, and its use can be checked by querying the v$dnfs views (v$dnfs_servers, v$dnfs_channels, v$dnfs_files, v$dnfs_stats). However, you still require the mount points at all times.
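To illustrate that check, a quick look at two of the v$dnfs views could go something like this (a sketch only; it assumes a running 11g instance and SYSDBA access):

```
# Hypothetical check: if Direct NFS is active, v$dnfs_servers lists each
# NFS server the instance talks to directly, and v$dnfs_files lists the
# database files served over dNFS. No rows means dNFS is not in use.
sqlplus -s / as sysdba <<'EOF'
SELECT svrname, dirname FROM v$dnfs_servers;
SELECT filename FROM v$dnfs_files;
EOF
```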
For implementation I would suggest following this general practice:
1. Create a file 'oranfstab' in one of the following locations:
$ORACLE_HOME/dbs/oranfstab
/etc/oranfstab
The first one found from the top is the one the Oracle instance picks up for its use (if neither exists, dNFS falls back to the mount table in /etc/mtab).
2. Then modify the above file to something like this:
server: FASController1 <-- Just a name; it can be anything
local: 10.238.162.233 <-- Local IP of the first Ethernet interface used for NFS; find it with ifconfig -a
path: 10.238.162.234 <-- IP of the NFS server (the NetApp controller); find it in System Manager or via SSH to the storage
local: 10.239.162.233 <-- Local IP of the second Ethernet interface used for NFS; find it with ifconfig -a
path: 10.239.162.234 <-- IP of the NFS server (the NetApp controller); find it in System Manager or via SSH to the storage
export: /vol/oradata1 mount: /mnt/oradata1
3. Then run the following commands to enable the ODM library:
$ cd $ORACLE_HOME/lib
$ cp libodm11.so libodm11.so_stub
$ ln -s libnfsodm11.so libodm11.so
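For completeness, if you later need to fall back to kernel NFS, restoring the stub reverses the swap (a sketch, assuming the same $ORACLE_HOME/lib and the 11g library names used above):

```
$ cd $ORACLE_HOME/lib
$ rm libodm11.so                    # drop the symlink to libnfsodm11.so
$ cp libodm11.so_stub libodm11.so   # restore the original stub library
```

Bounce the instance afterwards, just as when enabling it.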
That's all you have to do. Bounce the instance and you should see from the v$dnfs views that it is now using dNFS.
ADR refers to the Automatic Diagnostic Repository, where all the logs used for diagnosis are stored. You can safely ignore this if you are not running 11gR1: that release had a special requirement of a different mount option for storing diagnostic logs on the NFS share, but it does not apply to any other release. Does that help? Please don't forget to mark this question as "Answered" if all your queries are resolved to your satisfaction.
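On the ADR HOME question above: the repository root is set by the diagnostic_dest initialization parameter (defaulting to $ORACLE_BASE), and the per-instance ADR home sits underneath it. One way to see the actual paths on 11g or later (a sketch; assumes SYSDBA access):

```
# Show the configured diagnostic destination and the resolved ADR paths.
sqlplus -s / as sysdba <<'EOF'
show parameter diagnostic_dest
SELECT name, value FROM v$diag_info WHERE name IN ('ADR Base', 'ADR Home');
EOF
```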
Thanks for the answers. By the way, if I am not using multiple paths, I would then skip steps 1 and 2, right?
Since oranfstab is only used for indicating multiple paths, etc.
From the KB, these are the mount options I recommended to the customers:
rw,bg,hard,rsize=65536,wsize=65536,vers=3,nointr,timeo=600,tcp
But for consistency I told them to apply the same mount options to all volumes (datafiles, redo logs, archive logs, temp, etc.). That should be fine, right?
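Put together, an /etc/fstab entry using those options might look like this (a sketch only; the controller name, export path, and mount point are illustrative):

```
# NetApp NFS export for Oracle datafiles (hostname/paths are examples)
fas1:/vol/oradata1  /mnt/oradata1  nfs  rw,bg,hard,rsize=65536,wsize=65536,vers=3,nointr,timeo=600,tcp  0 0
```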
A short question about your last post here: is there a specific reason why it is necessary to use an oranfstab even if no multipathing is configured? We are thinking about removing the oranfstab completely and only creating one if we need to configure multipathing.
I am actually trying to create a new volume/mountpoint structure for SMO, and so far I don't see any problem. It seems to be enough when the (d)NFS mount points of the DB are configured in /etc/fstab (OS: OEL 6.2; DB: > 220.127.116.11.6; dNFS).
We just started experimenting with dNFS and we saw performance degrade (just a little bit) with dNFS, and I'm trying to explain why.
With kernel NFS, a 5 GB datafile creation takes 18-19 seconds.
With dNFS, a 5 GB datafile creation takes 20-21 seconds.
I was hoping that dNFS would be faster than kernel NFS.
Anybody have any ideas?