Test command example

iozone -t6 -Rb test4-6threads-fs48G-rs512k-R6_MD2_DDP2.xls -r512k -s48G -F /md3260/R6_MD2_DDP2/io1.test /md3260/R6_MD2_DDP2/io2.test /md3260/R6_MD2_DDP2/io3.test /md3260/R6_MD2_DDP2/io4.test /md3260/R6_MD2_DDP2/io5.test /md3260/R6_MD2_DDP2/io6.test
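For reference, the flags in the command above are standard iozone options: -t runs throughput mode with the given number of processes, -R and -b produce an Excel-compatible report file, -r and -s set the record and per-process file sizes, and -F names one scratch file per process. The same command, reformatted with comments only for readability (no parameters changed):

# -t 6    : throughput mode with 6 parallel processes
# -R -b   : write an Excel-compatible report to the named .xls file
# -r 512k : record (I/O request) size
# -s 48G  : file size per process
# -F ...  : one scratch file per process, all on the filesystem under test
iozone -t6 -Rb test4-6threads-fs48G-rs512k-R6_MD2_DDP2.xls -r512k -s48G \
    -F /md3260/R6_MD2_DDP2/io1.test /md3260/R6_MD2_DDP2/io2.test \
       /md3260/R6_MD2_DDP2/io3.test /md3260/R6_MD2_DDP2/io4.test \
       /md3260/R6_MD2_DDP2/io5.test /md3260/R6_MD2_DDP2/io6.test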

Optimization/tuning notes

We increased the controller cache block size from the 4K default to 32K for the "optimized" tests (all tests except the first). The XFS filesystems were not created with sunit/swidth options matching the RAID stripe geometry.
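Had stripe alignment been set, it would have been specified at filesystem creation time. A sketch only; the device path, stripe unit, and data-disk count below are hypothetical examples and were not used in these tests:

# Hypothetical example -- su (stripe unit) and sw (number of data disks) must
# match the actual RAID geometry; sw=28 would correspond to a 30-disk RAID 6.
mkfs.xfs -d su=512k,sw=28 /dev/mapper/R6_MD1_30D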

Single DDP Volume without optimization

Results: test3-6threads-fs48G-rs512k-R6_MD2_DDP2.png

Single DDP Volume (48GB and 512GB files)

Reconstruction time (1 disk removed; the array begins reconstruction immediately when the disk is removed): 10:40

Rebuild time (after reconstruction, with the removed disk marked online again): 10:43

Results: test4-6threads-fs48G-rs512k-R6_MD2_DDP2.png, test7-6threads-fs512G-rs512k-R6_MD2_DDP2.png
Results (during rebuild and disk add): test10-6threads-fs512G-rs512k-R6_MD2_DDP2-rebuilding.png, test11-6threads-fs512G-rs512k-R6_MD2_DDP2-diskadd.png

Single RAID 6 30D Volume (48GB and 512GB files)

Rebuild time (1 disk marked "failed", then a manual rebuild triggered in the GUI): 24:59
Results: test5-6threads-fs48G-rs512k-R6_MD1_30D.png, test8-6threads-fs512G-rs512k-R6_MD1_30D.png

Single RAID 6 20D Volume (48GB and 512GB files)

Rebuild time (1 disk marked "failed", then a manual rebuild triggered in the GUI): 23:15

Results: test6-6threads-fs48G-rs512k-R6_MD1_20D.png, test9-6threads-fs512G-rs512k-R6_MD1_20D.png
Results (during rebuild): test12-6threads-fs512G-rs512k-R620D-rebuilding.png

-- BenMeekhof - 29 Nov 2012
