
EXT4 vs XFS for Oracle, which one performs better?

source link: https://myvirtualcloud.net/ext4-vs-xfs-for-oracle-which-performs-better/


  • 02/10/2019

[UPDATE] Since publishing this post I have received requests to benchmark ASM and Oracle Enterprise Linux. An updated article is now available: EXT4 vs XFS vs ASM vs ASM + OEL, which one performs better? Taking it to the next level.

My past few blog posts have all been about Oracle and SLOB (Silly Little Oracle Benchmark), created by Kevin Closson. As part of a number of runs with SLOB, I was also curious to benchmark the EXT4 and XFS file systems.

EXT4 vs XFS, which file system performs better?

As you can imagine, there is no single, simple answer to this question because it depends on a number of variables, so I tried to eliminate most of them:

  • The same server
  • The same VM configuration
  • The same Hypervisor
  • The same Oracle configuration
  • The same SLOB configuration (30 minutes with a 70:30 read/write ratio)
  • The same LVM configuration
  • The same storage configuration (I am using Datrium DVX, which uses localhost SSDs to perpetually read data I/O locally at bus speeds. More info on the exact config here)

The only aspect that changes between tests is the datadisk file system, either EXT4 or XFS. Here is the disk configuration:

–CentOS 7 (XFS)
|– 250 GB
–Oracle U01 (XFS)
|– 250 GB
–Oracle U02 (XFS)
|– 250 GB
–Oracle Datadisk (LVM)
|– 500 GB
|– 500 GB
|– 500 GB
|– 500 GB
|– 500 GB
|– 500 GB
|– 500 GB
|– 500 GB

Additionally, considering that file systems can be tuned in countless ways, I accepted all the default configurations for fdisk, LVM, fstab and the file systems themselves. That means this comparison is only valid for pristine default installations.
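With all defaults accepted, the setup for the two runs reduces to something like the sketch below. The device names (/dev/sd[d-k]) and mount point (/u03) are assumptions for illustration only; the single difference between the EXT4 and XFS runs is the final mkfs command.

```shell
# Hypothetical provisioning sketch; device names and mount point are
# assumptions. Requires root and real block devices, shown for reference.
pvcreate /dev/sd[d-k]                 # 8 x 500 GB physical volumes
vgcreate vg-01 /dev/sd[d-k]           # one VG, default 4 MiB extent size
lvcreate -n oracle -l 100%FREE vg-01  # single LV spanning all 8 PVs

# The only variable between the two test runs:
mkfs.ext4 /dev/vg-01/oracle           # EXT4 run, all defaults
# mkfs.xfs /dev/vg-01/oracle          # XFS run, all defaults

mount /dev/vg-01/oracle /u03          # default mount options
```

The resulting volume group matches the vgdisplay output below: 8 physical volumes, one ~3.91 TiB logical volume, no free extents.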

--- Volume group ---
VG Name vg-01
System ID
Format lvm2
Metadata Areas 8
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 8
Act PV 8
VG Size <3.91 TiB
PE Size 4.00 MiB
Total PE 1023992
Alloc PE / Size 1023992 / <3.91 TiB
Free PE / Size 0 / 0
VG UUID fC2LdY-IBs9-AfKd-LevI-WbfX-2PCx-VxSZei
--- Logical volume ---
LV Path /dev/vg-01/oracle
LV Name oracle
VG Name vg-01
LV UUID o3V0Ca-Qiv0-3tDI-bWsp-2mrA-Btci-zbckEu
LV Write Access read/write
LV Creation host, time oradb02.slab.datrium.com, 2019-01-30 17:53:44 -0800
LV Status available
# open 1
LV Size <3.91 TiB
Current LE 1023992
Segments 1
Allocation inherit
Read ahead sectors auto
currently set to 16384
Block device 253:3

To collect the data I used Chris Buckel's article on the SLOB Sustained Throughput Test, Interpreting SLOB Results, and I graphed the results the same way.

However, looking at every data point in a graph doesn't really give us a solid understanding of what is happening, because the SLOB workload is dynamic and there are peaks and dips during a 30-minute run.

To smooth out outliers, I replaced the time series with a trendline that uses a moving average of 10 data points (there are roughly 605 data points for each SLOB run). This approach removed anomalies while maintaining data fidelity.
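That 10-point trailing moving average can be sketched with a short awk one-liner. The synthetic 1..12 ramp below stands in for the ~605 per-second samples of a real SLOB run; a real input file name would be an assumption, so the demo data is piped in directly:

```shell
# 10-point trailing moving average over a one-column series.
# Prints nothing until a full 10-sample window has accumulated.
printf '%s\n' 1 2 3 4 5 6 7 8 9 10 11 12 |
awk -v w=10 '
  { sum += $1; buf[NR % w] = $1
    if (NR >= w) { print sum / w; sum -= buf[(NR - w + 1) % w] }
  }'
# → 5.5
#   6.5
#   7.5
```

Each output point averages the current sample with the nine before it, which is why the first nine samples of every run produce no trendline value.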

[Chart: IOPS over time, EXT4 vs XFS]
[Chart: system CPU utilization over time, EXT4 vs XFS]

Conclusion

The conclusion for this Oracle SLOB test, which uses 8 KB block-size I/O, is that XFS performs better than EXT4 under the exact same default configuration. Furthermore, XFS is able to make better use of the available CPU to drive performance, thanks to parallel I/O based on its allocation groups. As always, your mileage may vary 🙂

