The FilerView GUI help page states the following:
Oplocks (opportunistic locks) enable the redirector on a CIFS client in certain file-sharing scenarios to perform client-side caching of read-ahead, write-behind, and lock information. A client can then work with a file (read or write it) without regularly reminding the server that it needs access to that file, which improves performance by reducing network traffic.
By default, oplocks are enabled for each qtree. If you disable oplocks for the entire storage system, oplocks are not sent even if you enable oplocks on a per-qtree basis.
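On a 7-Mode system, the two levels of control look roughly like this (a sketch based on the Data ONTAP 7-Mode CLI; verify the exact syntax against your release):

```shell
# System-wide switch: if this is off, oplocks are never granted,
# regardless of the per-qtree setting.
options cifs.oplocks.enable off

# Per-qtree switch: only effective while the system-wide option is on.
qtree oplocks /vol/vol1/qtree1 disable
qtree oplocks /vol/vol1/qtree1 enable
```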
If a process has an exclusive oplock on a file and a second process attempts to open the file, the first process must relinquish the oplock and its access to the file. The redirector must then invalidate cached data and flush writes and locks, resulting in possible loss of data that was to be written.
CIFS oplocks on the storage system are on by default. You might turn CIFS oplocks off under either of the following circumstances (otherwise, you can leave CIFS oplocks on):
* You are using a database application whose documentation recommends that oplocks be turned off.
* You are handling critical data and cannot afford even the slightest data loss.
Hope this helps you understand CIFS oplocks.
They are a feature of CIFS, and quite an important one too. Here is a description from Microsoft ...
... opportunistic locking is enabled for server message block (SMB) clients that run one of the Windows operating systems .... Opportunistic locking lets clients lock files and locally cache information without the risk of another user changing the file. This increases performance for many file operations...
The Wikipedia entry on opportunistic locking is also worth a quick read.
Basically, in CIFS environments you wouldn't really have a need to disable them. There are odd cases where you would: I know that Enterprise Vault implementations sometimes recommend it for the vault stores (EV would be the only operator accessing these). For general-purpose file shares it is almost mandatory, and a huge benefit. Disabling it can often have a big negative performance impact.
As per the discussion on oplocks above, the client caches the data as well as locking the file being accessed to prevent simultaneous modification. But I have a doubt.
It concerns the second circumstance where we need to turn the oplocks option off:
* You are handling critical data and cannot afford even the slightest data loss.
Here, if the data is critical, then I feel that leaving oplocks on would work better. Can you clarify this?
A snippet from the File Access and Protocols Management Guide follows, explaining what they mean by data loss:
Under some circumstances, if a process has an exclusive oplock on a file and a second process attempts
to open the file, the first process must invalidate cached data and flush writes and locks. The client must
then relinquish the oplock and access to the file. If there is a network failure during this flush, cached
write data might be lost.
Data loss possibilities: Any application that has write-cached data can lose that data under the following
set of circumstances:
• It has an exclusive oplock on the file.
• It is told to either break that oplock or close the file.
• During the process of flushing the write cache, the network or target system generates an error.
Error handling and write completion: The cache itself does not have any error handling—the applications
do. When the application makes a write to the cache, the write is always completed. If the cache, in
turn, makes a write to the target system over a network, it must assume that the write is completed
because if it does not, the data is lost.
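The sequence the guide describes can be sketched as a toy model (not real SMB code; the class and method names here are purely illustrative): the client caches writes under an exclusive oplock and acknowledges them to the application immediately, then flushes them to the server only when the oplock is broken. If the network fails during that flush, the acknowledged data never reaches the server.

```python
class NetworkError(Exception):
    pass

class Server:
    """Stand-in for the target system (the filer)."""
    def __init__(self):
        self.file_data = []
        self.network_up = True

    def receive_write(self, data):
        if not self.network_up:
            raise NetworkError("link down during flush")
        self.file_data.append(data)

class Client:
    """Stand-in for the CIFS redirector with a write-behind cache."""
    def __init__(self, server):
        self.server = server
        self.write_cache = []
        self.has_exclusive_oplock = True

    def write(self, data):
        # With an exclusive oplock the write only goes to the local cache;
        # the application always sees it as completed immediately.
        self.write_cache.append(data)
        return "completed"

    def break_oplock(self):
        # The server tells us to break the oplock (e.g. a second process
        # opened the file): flush cached writes, then give up the oplock.
        try:
            for data in self.write_cache:
                self.server.receive_write(data)
            self.write_cache.clear()
        finally:
            self.has_exclusive_oplock = False

server = Server()
client = Client(server)
assert client.write("block-1") == "completed"  # cached, not yet on the server
assert server.file_data == []

server.network_up = False                      # the network fails...
try:
    client.break_oplock()                      # ...during the flush
except NetworkError:
    pass

# The oplock is gone and the acknowledged write never reached the server:
# this is the data-loss window the guide is warning about.
assert not client.has_exclusive_oplock
assert server.file_data == []
```

With the network up, the same `break_oplock()` call would deliver every cached write before the oplock is released, which is why the loss only occurs in the narrow break-plus-network-failure case listed above.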