Hello, this is Choi from customer support.
This post is a report on performance testing of the Samsung NF1.
NF1 is a new NVMe SSD form factor, specified as an extended version of M.2.
Details are available here:
https://www.samsung.com/us/labs/pdfs/collateral/Samsung-PM983-NF1-Product-Brief-final.pdf
The server used is SuperMicro's SSG-1029P-NMR36L, a 1U system that can hold up to 32 NF1 drives.
Since 16TB SSDs are available, it can be populated with up to 512TB.
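As a quick sanity check on that capacity claim, using the figures from the text:

```python
bays = 32           # NF1 bays in the SSG-1029P-NMR36L
max_drive_tb = 16   # largest NF1 capacity mentioned
total_tb = bays * max_drive_tb
print(total_tb)     # 512
```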
| Item | Spec |
| --- | --- |
| System | SSG-1029P-NMR36L |
| CPU | Intel® Xeon® Silver 4114 2.20 GHz, 10 cores ×2 |
| RAM | 32GB ×2 |
| SSD | Samsung PM983 NF1 3.84TB NVMe SSD |
| OS | CentOS 7.3 |
| Filesystem | ZFS |
System details are available here:
https://www.supermicro.com/products/system/1U/1029/SSG-1029P-NMR36L.cfm
https://www.supermicro.com/flyer/f_All-Flash_SSG-1029P-NMR36L.pdf
On the OS, the NF1 drives show up with the same device naming as ordinary NVMe SSDs.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:3 0 3.5T 0 disk
nvme1n1 259:0 0 3.5T 0 disk
nvme2n1 259:2 0 3.5T 0 disk
nvme3n1 259:1 0 3.5T 0 disk
I created a ZFS pool and measured its performance.
▼Install ZFS and build a RAIDZ pool
[root@localhost ~]# yum install https://download.zfsonlinux.org/epel/zfs-release.el7_4.noarch.rpm
[root@localhost ~]# yum install zfs
[root@localhost ~]# modprobe zfs
[root@localhost ~]# zpool create tank raidz /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
[root@localhost ~]# zpool status
pool: tank
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
nvme0n1 ONLINE 0 0 0
nvme1n1 ONLINE 0 0 0
nvme2n1 ONLINE 0 0 0
nvme3n1 ONLINE 0 0 0
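A rough sketch of the expected capacity for this 4-drive RAIDZ1 layout (one drive's worth of parity; actual usable space will be somewhat lower due to metadata and padding), and why lsblk reports each 3.84TB drive as "3.5T":

```python
n_drives = 4
per_drive_tb = 3.84                              # PM983 NF1 3.84TB
raw_tb = n_drives * per_drive_tb                 # 15.36 TB raw
usable_tb = raw_tb * (n_drives - 1) / n_drives   # ~11.52 TB data capacity
per_drive_tib = per_drive_tb * 1e12 / 2**40      # ~3.49 TiB: lsblk's "3.5T"
```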
▼Caching was disabled for the test.
[root@localhost ~]# zfs set primarycache=none tank
[root@localhost ~]# zfs set secondarycache=none tank
Measurements
▼FIO tests against the ZFS volume
[root@localhost ~]# mkdir /tank/fio-data
▼Sequential Read Throughput
[root@localhost ~]# fio -rw=read -size=1g -directory=/tank/fio-data -ioengine=libaio -iodepth=512 -name=zfstest -numjobs=10 -bs=1m -time_based -runtime=300 -group_reporting -nrfiles=10
zfstest: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=512
...
fio-3.1
Starting 10 processes
Jobs: 10 (f=100): [R(10)][100.0%][r=214MiB/s,w=0KiB/s][r=214,w=0 IOPS][eta 00m:00s]
zfstest: (groupid=0, jobs=10): err= 0: pid=293198: Wed Sep 5 08:46:25 2018
read: IOPS=2332, BW=2333MiB/s (2446MB/s)(683GiB/300003msec)
slat (usec): min=1747, max=104856, avg=4274.95, stdev=986.83
clat (usec): min=16, max=2472.4k, avg=1639252.82, stdev=713578.17
lat (msec): min=2, max=2479, avg=1643.53, stdev=713.59
clat percentiles (msec):
| 1.00th=[ 41], 5.00th=[ 213], 10.00th=[ 430], 20.00th=[ 860],
| 30.00th=[ 1301], 40.00th=[ 1737], 50.00th=[ 2140], 60.00th=[ 2165],
| 70.00th=[ 2198], 80.00th=[ 2198], 90.00th=[ 2232], 95.00th=[ 2232],
| 99.00th=[ 2299], 99.50th=[ 2400], 99.90th=[ 2433], 99.95th=[ 2433],
| 99.99th=[ 2467]
bw ( KiB/s): min= 2048, max=1303272, per=16.35%, avg=390513.39, stdev=377618.70, samples=3650
iops : min= 2, max= 1272, avg=381.27, stdev=368.77, samples=3650
lat (usec) : 20=0.01%, 50=0.07%, 100=0.02%, 250=0.01%
lat (msec) : 4=0.04%, 10=0.16%, 20=0.22%, 50=0.70%, 100=1.16%
lat (msec) : 250=3.47%, 500=5.79%, 750=5.77%, 1000=5.76%, 2000=22.98%
lat (msec) : >=2000=53.86%
cpu : usr=0.31%, sys=50.52%, ctx=3400315, majf=0, minf=753137
IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
issued rwt: total=699790,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=512
Run status group 0 (all jobs):
READ: bw=2333MiB/s (2446MB/s), 2333MiB/s-2333MiB/s (2446MB/s-2446MB/s), io=683GiB (734GB), run=300003-300003msec
[root@localhost ~]#
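fio reports bandwidth in both MiB/s and MB/s; the conversion between the two, and the total data moved over the 300-second run, can be checked as follows:

```python
bw_mib_s = 2333                      # fio's reported bandwidth
bw_mb_s = bw_mib_s * 2**20 / 1e6     # ~2446 MB/s, matching "(2446MB/s)"
total_gib = bw_mib_s * 300 / 1024    # ~683 GiB, matching "io=683GiB"
```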
▼Sequential Write Throughput
[root@localhost ~]# fio -rw=write -size=1g -directory=/tank/fio-data -ioengine=libaio -iodepth=512 -name=zfstest -numjobs=10 -bs=1m -time_based -runtime=300 -group_reporting -nrfiles=10
zfstest: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=512
...
fio-3.1
Starting 10 processes
Jobs: 10 (f=100): [W(10)][100.0%][r=0KiB/s,w=1348MiB/s][r=0,w=1348 IOPS][eta 00m:00s]
zfstest: (groupid=0, jobs=10): err= 0: pid=76656: Wed Sep 5 08:53:20 2018
write: IOPS=2113, BW=2114MiB/s (2216MB/s)(619GiB/300004msec)
slat (usec): min=583, max=48966, avg=4717.77, stdev=2273.02
clat (usec): min=3, max=2654.5k, avg=1808757.79, stdev=789926.35
lat (msec): min=2, max=2658, avg=1813.48, stdev=789.92
clat percentiles (msec):
| 1.00th=[ 42], 5.00th=[ 226], 10.00th=[ 468], 20.00th=[ 961],
| 30.00th=[ 1418], 40.00th=[ 1921], 50.00th=[ 2299], 60.00th=[ 2366],
| 70.00th=[ 2400], 80.00th=[ 2433], 90.00th=[ 2467], 95.00th=[ 2500],
| 99.00th=[ 2567], 99.50th=[ 2567], 99.90th=[ 2635], 99.95th=[ 2635],
| 99.99th=[ 2635]
bw ( KiB/s): min= 2043, max=1327869, per=16.54%, avg=358104.72, stdev=369546.61, samples=3608
iops : min= 1, max= 1296, avg=349.50, stdev=360.88, samples=3608
lat (usec) : 4=0.01%, 10=0.01%, 20=0.01%, 50=0.07%, 100=0.02%
lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%
lat (msec) : 4=0.04%, 10=0.14%, 20=0.23%, 50=0.69%, 100=1.13%
lat (msec) : 250=3.20%, 500=5.10%, 750=5.12%, 1000=5.08%, 2000=20.88%
lat (msec) : >=2000=58.29%
cpu : usr=4.62%, sys=29.43%, ctx=11174495, majf=0, minf=657838
IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.2%
issued rwt: total=0,634147,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=512
Run status group 0 (all jobs):
WRITE: bw=2114MiB/s (2216MB/s), 2114MiB/s-2114MiB/s (2216MB/s-2216MB/s), io=619GiB (665GB), run=300004-300004msec
[root@localhost ~]#
▼Sequential Write IOPS
[root@localhost ~]# fio -rw=write -size=1g -directory=/tank/fio-data -ioengine=libaio -iodepth=512 -name=zfstest -numjobs=10 -bs=4k -time_based -runtime=300 -group_reporting -nrfiles=10
zfstest: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=512
...
fio-3.1
Starting 10 processes
Jobs: 10 (f=100): [W(10)][100.0%][r=0KiB/s,w=364MiB/s][r=0,w=93.1k IOPS][eta 00m:00s]
zfstest: (groupid=0, jobs=10): err= 0: pid=78848: Wed Sep 5 08:59:17 2018
write: IOPS=95.9k, BW=375MiB/s (393MB/s)(110GiB/300002msec)
slat (usec): min=8, max=100587, avg=100.42, stdev=407.19
clat (usec): min=10, max=192443, avg=53202.83, stdev=12899.85
lat (usec): min=32, max=192472, avg=53303.55, stdev=12915.46
clat percentiles (usec):
| 1.00th=[30540], 5.00th=[34341], 10.00th=[36439], 20.00th=[39584],
| 30.00th=[43254], 40.00th=[48497], 50.00th=[54789], 60.00th=[57934],
| 70.00th=[61604], 80.00th=[65274], 90.00th=[69731], 95.00th=[73925],
| 99.00th=[79168], 99.50th=[81265], 99.90th=[86508], 99.95th=[87557],
| 99.99th=[91751]
bw ( KiB/s): min=30281, max=46366, per=10.04%, avg=38520.11, stdev=2255.14, samples=6000
iops : min= 7570, max=11591, avg=9629.67, stdev=563.72, samples=6000
lat (usec) : 20=0.01%, 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%
lat (usec) : 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.04%, 50=41.70%
lat (msec) : 100=58.22%, 250=0.01%
cpu : usr=3.67%, sys=36.51%, ctx=5245111, majf=0, minf=385702
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwt: total=0,28766997,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=512
Run status group 0 (all jobs):
WRITE: bw=375MiB/s (393MB/s), 375MiB/s-375MiB/s (393MB/s-393MB/s), io=110GiB (118GB), run=300002-300002msec
[root@localhost ~]#
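The bandwidth and IOPS figures above are internally consistent: at a 4 KiB block size, IOPS × block size should reproduce the reported throughput.

```python
iops = 95_900               # ~ the 95.9k IOPS fio reported
bs = 4096                   # bytes per I/O
mb_s = iops * bs / 1e6      # ~392.8 MB/s, matching "(393MB/s)"
mib_s = iops * bs / 2**20   # ~374.6 MiB/s, matching "BW=375MiB/s"
```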
▼Sequential Read IOPS
[root@localhost ~]# fio -rw=read -size=1g -directory=/tank/fio-data -ioengine=libaio -iodepth=512 -name=zfstest -numjobs=10 -bs=4k -time_based -runtime=300 -group_reporting -nrfiles=10
zfstest: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=512
...
fio-3.1
Starting 10 processes
Jobs: 10 (f=100): [R(10)][100.0%][r=31.7MiB/s,w=0KiB/s][r=8120,w=0 IOPS][eta 00m:00s]
zfstest: (groupid=0, jobs=10): err= 0: pid=185915: Wed Sep 5 09:08:23 2018
read: IOPS=8058, BW=31.5MiB/s (33.0MB/s)(9444MiB/300002msec)
slat (usec): min=785, max=101425, avg=1231.83, stdev=191.27
clat (usec): min=31, max=743531, avg=633490.46, stdev=19823.11
lat (usec): min=1155, max=744789, avg=634723.71, stdev=19832.42
clat percentiles (msec):
| 1.00th=[ 609], 5.00th=[ 617], 10.00th=[ 625], 20.00th=[ 625],
| 30.00th=[ 625], 40.00th=[ 634], 50.00th=[ 634], 60.00th=[ 642],
| 70.00th=[ 642], 80.00th=[ 642], 90.00th=[ 642], 95.00th=[ 651],
| 99.00th=[ 651], 99.50th=[ 659], 99.90th=[ 718], 99.95th=[ 735],
| 99.99th=[ 743]
bw ( KiB/s): min= 2560, max= 3519, per=10.01%, avg=3225.74, stdev=58.17, samples=5990
iops : min= 640, max= 879, avg=806.34, stdev=14.50, samples=5990
lat (usec) : 50=0.01%, 100=0.01%, 250=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
lat (msec) : 100=0.02%, 250=0.05%, 500=0.08%, 750=99.83%
cpu : usr=0.80%, sys=22.19%, ctx=7318404, majf=0, minf=238938
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwt: total=2417583,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=512
Run status group 0 (all jobs):
READ: bw=31.5MiB/s (33.0MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.0MB/s), io=9444MiB (9902MB), run=300002-300002msec
[root@localhost ~]#
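The low 4 KiB read IOPS goes hand in hand with the very high completion latency: by Little's law, in-flight I/O = IOPS × latency, and with 10 jobs at iodepth=512 the queues here were fully saturated, so IOPS was simply latency-bound.

```python
jobs, iodepth = 10, 512
iops = 8058                    # fio's reported read IOPS
avg_lat_s = 0.634724           # fio's avg lat ~634,724 usec
in_flight = iops * avg_lat_s   # ~5115, close to jobs * iodepth = 5120
```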
▼Random Read IOPS
[root@localhost ~]# fio -rw=randread -size=1g -directory=/tank/fio-data -ioengine=libaio -iodepth=512 -name=zfstest -numjobs=10 -bs=4k -time_based -runtime=300 -group_reporting -nrfiles=10
zfstest: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=512
...
fio-3.1
Starting 10 processes
Jobs: 10 (f=100): [r(10)][100.0%][r=31.3MiB/s,w=0KiB/s][r=8008,w=0 IOPS][eta 00m:00s]
zfstest: (groupid=0, jobs=10): err= 0: pid=253692: Wed Sep 5 09:14:27 2018
read: IOPS=8023, BW=31.3MiB/s (32.9MB/s)(9403MiB/300002msec)
slat (usec): min=780, max=101180, avg=1236.32, stdev=188.25
clat (usec): min=29, max=739852, avg=636263.05, stdev=18168.52
lat (usec): min=1182, max=741017, avg=637500.76, stdev=18171.23
clat percentiles (msec):
| 1.00th=[ 625], 5.00th=[ 625], 10.00th=[ 634], 20.00th=[ 634],
| 30.00th=[ 634], 40.00th=[ 634], 50.00th=[ 634], 60.00th=[ 642],
| 70.00th=[ 642], 80.00th=[ 642], 90.00th=[ 642], 95.00th=[ 651],
| 99.00th=[ 651], 99.50th=[ 651], 99.90th=[ 709], 99.95th=[ 735],
| 99.99th=[ 735]
bw ( KiB/s): min= 2584, max= 6920, per=10.00%, avg=3209.34, stdev=61.71, samples=5985
iops : min= 646, max= 1732, avg=802.27, stdev=15.44, samples=5985
lat (usec) : 50=0.01%, 100=0.01%, 250=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
lat (msec) : 100=0.02%, 250=0.05%, 500=0.08%, 750=99.84%
cpu : usr=0.90%, sys=22.18%, ctx=7287363, majf=0, minf=222555
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwt: total=2407080,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=512
Run status group 0 (all jobs):
READ: bw=31.3MiB/s (32.9MB/s), 31.3MiB/s-31.3MiB/s (32.9MB/s-32.9MB/s), io=9403MiB (9859MB), run=300002-300002msec
[root@localhost ~]#
▼Random Write IOPS
[root@localhost ~]# fio -rw=randwrite -size=1g -directory=/tank/fio-data -ioengine=libaio -iodepth=512 -name=zfstest -numjobs=10 -bs=4k -time_based -runtime=300 -group_reporting -nrfiles=10
zfstest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=512
...
fio-3.1
Starting 10 processes
Jobs: 10 (f=100): [w(10)][100.0%][r=0KiB/s,w=28.3MiB/s][r=0,w=7234 IOPS][eta 00m:00s]
zfstest: (groupid=0, jobs=10): err= 0: pid=307653: Wed Sep 5 09:20:08 2018
write: IOPS=7431, BW=29.0MiB/s (30.4MB/s)(8709MiB/300007msec)
slat (usec): min=12, max=101245, avg=1338.94, stdev=1264.14
clat (usec): min=22, max=911417, avg=686647.71, stdev=39705.07
lat (usec): min=1761, max=913702, avg=687987.62, stdev=39752.46
clat percentiles (msec):
| 1.00th=[ 609], 5.00th=[ 634], 10.00th=[ 642], 20.00th=[ 659],
| 30.00th=[ 667], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 693],
| 70.00th=[ 701], 80.00th=[ 718], 90.00th=[ 726], 95.00th=[ 743],
| 99.00th=[ 785], 99.50th=[ 802], 99.90th=[ 860], 99.95th=[ 877],
| 99.99th=[ 894]
bw ( KiB/s): min= 488, max= 3703, per=10.01%, avg=2976.46, stdev=187.28, samples=5990
iops : min= 122, max= 925, avg=743.87, stdev=46.75, samples=5990
lat (usec) : 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
lat (msec) : 100=0.02%, 250=0.05%, 500=0.08%, 750=96.46%, 1000=3.38%
cpu : usr=0.51%, sys=14.57%, ctx=7857840, majf=0, minf=150619
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwt: total=0,2229462,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=512
Run status group 0 (all jobs):
WRITE: bw=29.0MiB/s (30.4MB/s), 29.0MiB/s-29.0MiB/s (30.4MB/s-30.4MB/s), io=8709MiB (9132MB), run=300007-300007msec
[root@localhost ~]#
▼Random Read Throughput
[root@localhost ~]# fio -rw=randread -size=1g -directory=/tank/fio-data -ioengine=libaio -iodepth=512 -name=zfstest -numjobs=10 -bs=1m -time_based -runtime=300 -group_reporting -nrfiles=10
zfstest: (g=0): rw=randread, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=512
...
fio-3.1
Starting 10 processes
Jobs: 10 (f=100): [r(10)][100.0%][r=2442MiB/s,w=0KiB/s][r=2442,w=0 IOPS][eta 00m:00s]
zfstest: (groupid=0, jobs=10): err= 0: pid=177356: Wed Sep 5 10:53:57 2018
read: IOPS=2362, BW=2362MiB/s (2477MB/s)(692GiB/300004msec)
slat (usec): min=1705, max=104427, avg=4219.50, stdev=975.16
clat (usec): min=25, max=2556.7k, avg=2154125.30, stdev=124783.16
lat (msec): min=3, max=2562, avg=2158.35, stdev=124.85
clat percentiles (msec):
| 1.00th=[ 2072], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 2123],
| 30.00th=[ 2123], 40.00th=[ 2140], 50.00th=[ 2140], 60.00th=[ 2165],
| 70.00th=[ 2165], 80.00th=[ 2198], 90.00th=[ 2265], 95.00th=[ 2265],
| 99.00th=[ 2400], 99.50th=[ 2467], 99.90th=[ 2534], 99.95th=[ 2534],
| 99.99th=[ 2534]
bw ( KiB/s): min=190464, max=291975, per=10.03%, avg=242555.06, stdev=8492.02, samples=5950
iops : min= 186, max= 285, avg=236.67, stdev= 8.26, samples=5950
lat (usec) : 50=0.01%, 100=0.01%
lat (msec) : 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%, 100=0.02%
lat (msec) : 250=0.05%, 500=0.09%, 750=0.09%, 1000=0.08%, 2000=0.35%
lat (msec) : >=2000=99.31%
cpu : usr=0.35%, sys=49.39%, ctx=3672621, majf=0, minf=1082807
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwt: total=708658,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=512
Run status group 0 (all jobs):
READ: bw=2362MiB/s (2477MB/s), 2362MiB/s-2362MiB/s (2477MB/s-2477MB/s), io=692GiB (743GB), run=300004-300004msec
[root@localhost ~]#
▼Random Write Throughput
[root@localhost ~]# fio -rw=randwrite -size=1g -directory=/tank/fio-data -ioengine=libaio -iodepth=512 -name=zfstest -numjobs=10 -bs=1m -time_based -runtime=300 -group_reporting -nrfiles=10
zfstest: (g=0): rw=randwrite, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=512
...
fio-3.1
Starting 10 processes
Jobs: 7 (f=70): [w(7),_(3)][100.0%][r=0KiB/s,w=3678MiB/s][r=0,w=3678 IOPS][eta 00m:00s]
zfstest: (groupid=0, jobs=10): err= 0: pid=50320: Wed Sep 5 11:27:46 2018
write: IOPS=2108, BW=2109MiB/s (2211MB/s)(618GiB/300012msec)
slat (usec): min=619, max=66670, avg=4726.41, stdev=2484.98
clat (usec): min=15, max=2759.8k, avg=2416840.47, stdev=159513.56
lat (usec): min=1344, max=2764.0k, avg=2421569.46, stdev=159274.29
clat percentiles (msec):
| 1.00th=[ 2089], 5.00th=[ 2265], 10.00th=[ 2299], 20.00th=[ 2366],
| 30.00th=[ 2400], 40.00th=[ 2433], 50.00th=[ 2433], 60.00th=[ 2467],
| 70.00th=[ 2467], 80.00th=[ 2500], 90.00th=[ 2534], 95.00th=[ 2534],
| 99.00th=[ 2601], 99.50th=[ 2635], 99.90th=[ 2668], 99.95th=[ 2702],
| 99.99th=[ 2702]
bw ( KiB/s): min=24576, max=288768, per=10.00%, avg=215977.61, stdev=41363.24, samples=5963
iops : min= 24, max= 282, avg=210.72, stdev=40.39, samples=5963
lat (usec) : 20=0.01%, 50=0.01%, 100=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.02%
lat (msec) : 100=0.02%, 250=0.07%, 500=0.06%, 750=0.10%, 1000=0.10%
lat (msec) : 2000=0.50%, >=2000=99.11%
cpu : usr=4.76%, sys=28.84%, ctx=11125618, majf=0, minf=842879
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwt: total=0,632712,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=512
Run status group 0 (all jobs):
WRITE: bw=2109MiB/s (2211MB/s), 2109MiB/s-2109MiB/s (2211MB/s-2211MB/s), io=618GiB (663GB), run=300012-300012msec
[root@localhost ~]#
Performance Summary

|  | Throughput (1M) | IOPS (4K) |
| --- | --- | --- |
| Seq Read | 2446MB/s | 8,058 |
| Seq Write | 2216MB/s | 95,900 |
| Rand Read | 2477MB/s | 8,023 |
| Rand Write | 2211MB/s | 7,431 |
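The MB/s figures in the summary can be reproduced from fio's MiB/s numbers (they agree to within rounding):

```python
reported_mib_s = {"Seq Read": 2333, "Seq Write": 2114,
                  "Rand Read": 2362, "Rand Write": 2109}
summary_mb_s = {"Seq Read": 2446, "Seq Write": 2216,
                "Rand Read": 2477, "Rand Write": 2211}
for name, mib in reported_mib_s.items():
    mb = mib * 2**20 / 1e6                  # MiB/s -> MB/s
    assert abs(mb - summary_mb_s[name]) < 1  # within 1 MB/s of the table
```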
These results show that this setup delivers quite good throughput.
One thing I find curious is how fast sequential-write IOPS is compared to the other tests; I suspect this reflects ZFS's write behavior, as it batches incoming writes into transaction groups and flushes them to disk sequentially.
Performance from Windows against a single NF1 drive is as follows; the numbers are quite good.
That concludes the performance report for a ZFS pool built on NF1 drives.
We also lend out test units, so if you are interested, please contact our sales team.