Lab 13: Benchmarking and Performance

POD parameters: user0, POD0, 10.1.64.110
Nodes for user0: pod0-admin, pod0-node1, pod0-node2, pod0-node3, pod0-spare

pod0-admin
eth0            : 10.1.64.110
eth1            : 10.1.65.110
eth2            : ext-net
Netmask  : 255.255.255.0
Gateway  : 10.1.64.1
pod0-node1
eth0            : 10.1.64.110
eth1            : 10.1.65.110
eth2            : ext-net
Netmask  : 255.255.255.0
Gateway  : 10.1.64.1
pod0-node2
eth0            : 10.1.64.110
eth1            : 10.1.65.110
eth2            : ext-net
Netmask  : 255.255.255.0
Gateway  : 10.1.64.1
pod0-node3
eth0            : 10.1.64.110
eth1            : 10.1.65.110
eth2            : ext-net
Netmask  : 255.255.255.0
Gateway  : 10.1.64.1
pod0-spare
eth0            : 10.1.64.110
eth1            : 10.1.65.110
eth2            : ext-net
Netmask  : 255.255.255.0
Gateway  : 10.1.64.1

Storage Cluster

Ceph includes the rados bench command for benchmarking the performance of a RADOS storage cluster. The command can execute a write test and two types of read tests (sequential and random). The --no-cleanup option is important when testing both read and write performance, because it leaves the objects written by the write test in place so the read tests have data to read.
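
For reference, the general form of the command used in the steps below is sketched here; <pool_name>, <seconds>, and <concurrent_ops> are placeholders:

rados bench -p <pool_name> <seconds> write|seq|rand [-t <concurrent_ops>] [--no-cleanup]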

Note: Before running these performance tests, drop all the file system caches by running the following.
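
A common way to drop the page cache, dentries, and inodes on a Linux node is:

echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync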

1. Log in to pod0-admin and become the root user:

ssh centos@pod0-admin
sudo su -

2. Create a new storage pool named testbench with 100 placement groups (pg_num and pgp_num):

ceph osd pool create testbench 100 100
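
(Optional) Verify that the new pool exists by listing the cluster's pools:

ceph osd lspools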

3. Execute a write test for 10 seconds to the newly created storage pool:

rados bench -p testbench 10 write --no-cleanup

Output:

Maintaining 16 concurrent writes of 4194304 bytes for up to 10
seconds or 0 objects
Object prefix: benchmark_data_cephn1.home.network_10510
sec Cur ops started finished avg MB/s cur MB/s last lat
.........
.......

4. Execute a sequential read test for 10 seconds to the storage pool:

rados bench -p testbench 10 seq

5. Execute a random read test for 10 seconds to the storage pool:

rados bench -p testbench 10 rand

6. To increase the number of concurrent reads and writes, use the -t option (the default is 16 threads). The --run-name <label> option tags the objects written by this run:

rados bench -p testbench 10 write -t 4 --run-name client1
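
If more than one client benchmarks the same pool at the same time, give each client a distinct --run-name label so they do not read or overwrite each other's objects. For example, a second client (the label client2 is only an illustration) could run:

rados bench -p testbench 10 write -t 4 --run-name client2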

7. Remove the data created by the rados bench command:

rados -p testbench cleanup
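
(Optional) Confirm that the benchmark objects were removed by listing the pool contents, which should now return nothing:

rados -p testbench ls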

Block Device

Creating a Ceph Block Device:

1. As root, load the rbd kernel module, if not already loaded:

modprobe rbd
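
(Optional) Confirm that the module is loaded:

lsmod | grep rbd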

2. As root, create a 1 GB rbd image file in the testbench pool:

rbd create image01 --size 1024 --pool testbench

On CentOS 7.0, users utilizing the kernel RBD client will not be able to map the block device image, because the image features that are enabled by default (other than layering) are not supported by that kernel.

3. You must first disable all of these features except layering.

Syntax:

rbd feature disable <pool-name>/<image-name> <feature-name> [<feature-name> ...]

Example:

rbd feature disable testbench/image01 exclusive-lock object-map fast-diff deep-flatten
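
(Optional) Verify that only the layering feature remains enabled on the image:

rbd info testbench/image01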

4. As root, map the image file to a device file:

rbd map image01 --pool testbench --name client.admin
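
(Optional) Confirm the mapping; the image should appear as a device such as /dev/rbd0, with a symlink at /dev/rbd/testbench/image01:

rbd showmapped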

5. As root, create an ext4 file system on the block device:

mkfs -t ext4 /dev/rbd/testbench/image01

6. As root, create a new directory:

mkdir /mnt/ceph-block-device

7. As root, mount the block device under /mnt/ceph-block-device/:

mount /dev/rbd/testbench/image01 /mnt/ceph-block-device
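
(Optional) Check that the file system is mounted and has the expected size:

df -h /mnt/ceph-block-device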

8. Execute the write performance test against the block device:

rbd bench-write image01 --pool=testbench

Output:

bench-write io_size 4096 io_threads 16 bytes 1073741824 pattern seq
SEC OPS OPS/SEC BYTES/SEC
2 11127 5479.59 22444382.79
...
...
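
Note: On newer Ceph releases, rbd bench-write has been folded into the more general rbd bench subcommand. If bench-write is unavailable, an equivalent run (assuming the same image and pool names) should be:

rbd bench --io-type write image01 --pool=testbench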