Configuring Ceph with Blkin

SSH into VM1, where Ceph is deployed:

ssh centos@128.31.25.229 -A
cd ceph/build

Start up Ceph with the desired configuration:

OSD=3 MON=3 RGW=1 ../src/vstart.sh -n
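
At this point the cluster should be running; as a quick sanity check (assuming the vstart binaries sit under ./bin in the build directory, as in the later commands), query the cluster status:

./bin/ceph -s    # should report 3 mons, 3 osds and the rgw up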

Stop Ceph so that the tracepoints can be enabled:

/home/centos/ceph/src/stop.sh

Start an LTTng session and enable the Blkin tracepoints:

lttng create blkin-test
lttng enable-event --userspace zipkin:timestamp
lttng enable-event --userspace zipkin:keyval
lttng start
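
Before restarting Ceph, it is worth confirming that both zipkin tracepoints are registered in the session (an optional check, not part of the original procedure):

lttng list blkin-test    # lists the channels and events enabled in this session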

Start up Ceph again:

OSD=3 MON=3 RGW=1 ../src/vstart.sh -n

In a parallel terminal, SSH in again and start the Ceph manager daemon:

ssh centos@128.31.25.229 -A
~/ceph/build/bin/ceph-mgr -i x -c ~/ceph/build/ceph.conf
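
To verify that the manager came up, the cluster status can also be checked from this terminal (the same build paths as above are assumed):

~/ceph/build/bin/ceph -c ~/ceph/build/ceph.conf -s    # should show an active mgr named x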

Now store an object with rados, check that it was mapped, read it back, verify the copy, and remove it:

./bin/rados mkpool test-blkin
./bin/rados put test-object-1 ../src/vstart.sh --pool=test-blkin
./bin/ceph osd map test-blkin test-object-1
./bin/rados get test-object-1 ./vstart-copy.sh --pool=test-blkin
md5sum vstart*
./bin/rados rm test-object-1 --pool=test-blkin
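
Note that rados mkpool has been removed in newer Ceph releases; on such a build the pool would instead be created with the ceph CLI (shown only as an alternative, with 8 placement groups picked arbitrarily):

./bin/ceph osd pool create test-blkin 8    # alternative to 'rados mkpool' on newer releases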

Now stop the LTTng session and see what was collected:

lttng stop
lttng view
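
lttng view launches babeltrace under the hood, so the trace can also be read directly from LTTng's output directory, and the session can be torn down once the data has been inspected (the path below assumes the default ~/lttng-traces location):

babeltrace ~/lttng-traces/blkin-test*    # dump the zipkin:timestamp / zipkin:keyval events
lttng destroy blkin-test                 # tear down the tracing session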