My understanding is that filers with good dedupe savings will suffer 'fragmentation' in the data layout, which will eventually lead to:
1. slow sequential reads
2. high disk utilization, which in turn may cause latency
3. non-contiguous free space, which will affect writes
In that case, the following options in 8.1.1 seem attractive:
filer> aggr options aggr_name free_space_realloc on
filer> vol options vol_name read_realloc space_optimized
Note: the space_optimized option is synonymous with the physical reallocation method.
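To double-check that the options took effect, you can just list the current option values (aggr_name and vol_name are placeholders for your own aggregate and volume):

filer> aggr options aggr_name
filer> vol options vol_name

The output should include free_space_realloc and read_realloc with the values you set.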
According to TR-3929 (Reallocate Best Practices): "These two are complementary technologies that help maintain optimal layout. Read reallocate will optimize the system for sequential reads on the fly, while free space reallocate will optimize for writes."
But read reallocate is a volume option that performs opportunistic reallocation on data to improve read performance. So I believe it again depends on the read/write ratio of your application; therefore you need to be selective before enabling this option on a particular volume?
Any thoughts folks?
As TR-3929 states, the best practice is to enable them both. However, it's all about sequential reads/writes. I've disabled those options on random-access data volumes with high I/O, since performance went down. For random read/write volumes I've used scheduled reallocation (i.e. once a week) at non-peak hours. The outcome is satisfactory; regular scheduled reallocation has helped me reduce the dedupe cycle time.
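For anyone wanting to try the scheduled approach, a rough 7-Mode sketch (the volume name and the schedule string are placeholders; -p requests physical reallocation, which is what you want on deduped volumes - check the reallocate man page on your release before copying this):

filer> reallocate on
filer> reallocate start -p /vol/vm_vol
filer> reallocate schedule -s "0 23 * 6" /vol/vm_vol

If I remember the field order correctly, the schedule string is minute, hour, day-of-month, day-of-week, so the above would run at 23:00 every Saturday.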
I hope it helps.
I have enabled these options on the recommendation of support, after running into issues with aggregate fragmentation and write performance (CP reads 4x higher than writes in the aggregate).
CPU usage is higher, as there are always redirect scans going on in the background, but overall performance is better, with the controller no longer choking on bursty write workloads (although this is difficult to quantify, as we upgraded from 8.0.2 to 8.1.2 at the same time).
I just enabled read_realloc for all vols in the aggr, but I would be interested to hear more about timo.puronen's experience with being a bit more selective. Would large NFS VM datastores count as random or sequential?
I would consider a VM datastore random. I tried enabling free_space_realloc on the aggregate and read_realloc on the VM volume, but had to turn these off since there were constant performance issues. I would suggest scheduled physical reallocation for these volumes instead.
Nice to see this thread revived after being in a coma for almost 6 months. Thanks, Tomas.
Thanks all for sharing your experience; it is indeed helpful. The TRs are fine, but it helps a great deal to hear from live production environments. So keep sharing your experiences.
Basically, for sequential read/write loads, enabling both of these options makes sense, because they make way for the contiguous free space and read optimization such workloads need. However, for random write workloads it does not help, because there is no concept of a sequential write in WAFL (NetApp writes anywhere on the aggregate in an effort to reduce write-to-disk latency). But ONTAP tries its best to find FULL STRIPES for writes, so scheduled reallocation can help create the contiguous free space required for WAFL full stripes. I guess it will not matter while there is plenty of free space on the storage array, but if the array is approaching saturation (nearly full) and lots of scattered free space has accumulated over time, then scheduled reallocation will certainly help, I guess.
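On the "will it matter" question, you don't have to guess - in 7-Mode you can measure the layout first (the path is a placeholder):

filer> reallocate measure /vol/vol_name

This reports an optimization index for the volume, where lower is better; if I recall correctly, values above the default threshold (4) indicate a layout that a scheduled reallocation would improve, but verify the threshold semantics on your release.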
It is interesting because, as data is written to WAFL in a fragmented manner to gain WRITE performance, it can hurt you during sequential reads, and I guess that is where read_realloc can help optimize for sequential read workloads?
This leads me to this analogy...
RANDOM/SEQUENTIAL WRITES = already optimized by the WAFL architecture (NVRAM/memory), but when disk saturation kicks in, free space reallocation can help improve write performance by avoiding back-to-back CPs.
RANDOM READS = Flash Cache
SEQUENTIAL READS = read_realloc
My customer experienced high CPU load (100% constantly). sysstat -M didn't show 100% on every core, though, but around 80% on each core, and the dedupe process didn't finish within 24h. There was no significant performance impact on user data serving, but the customer was not able to monitor the subsystem over SNMP. After I reallocated the LUNs/volumes, corrected some misalignment issues and went through the dedupe/reallocation schedules, the system behavior was back to normal (CPU < 50%, peaks @ 90%).
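For reference, the commands I mean (the trailing number is the sampling interval in seconds; flags per the 7-Mode sysstat man page):

filer> sysstat -m 1
filer> sysstat -x 1

The -m form gives the per-processor CPU breakdown rather than the single headline figure, and -x gives the extended one-line view (CPU, ops, throughput, CP time, disk utilization).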
Thanks for the reply - that makes more sense. The controllers treat the reallocate procedures as low-priority tasks, along with dedupe/SNMP, and throttle them accordingly to maintain good performance serving data.
Our CPU (as per DFM/sysstat) constantly shows 99/100%; however, sysstat -m, or sysstat from PowerShell, shows the true figure, which is nearer 40-60% across 8 cores.
I believe the latest version of DFM actually does not monitor CPU at all by default, as it's no longer an accurate measure of system load on these modern multi-core systems.
Yep, that's correct.
One thing I forgot to mention: when read_realloc and free_space_realloc were both enabled for volumes that contained VM datastores, there were some performance issues, i.e. impact on user data. That's why I went the other way and chose scheduled reallocation instead.
This is a great thread. As Ashwin says, it's all well and good reading TRs, but sometimes you need to hear a real person's experience with implementing these suggestions from NetApp. We currently have a 4-node 3250 cluster and are seeing performance issues. We've got both Hyper-V and VMware volumes on this cluster. We have free_space_realloc switched on for all our aggregates.
We have read_realloc set to space_optimized for all of our volumes. Our VM and Hyper-V volumes are all deduped, and I believe the space_optimized flavor of read realloc is supposed to work with deduped volumes. How does everyone monitor their filers' performance?
I've found all of the NetApp tools to be rubbish for monitoring performance. We tried OnCommand Insight Perform and it looked great, until we had an issue with filer performance and the leading people at NetApp couldn't tell us what went on with the tool. It looks like it misreported latency at the time of the issue.