Lab 2.2 - Ceph osdmap


1. Create Pool

# create a pool with 128 placement groups
sudo ceph osd pool create osdmap-pool 128

# list created pools
sudo ceph osd lspools

# verify the placement group number
sudo ceph osd pool get osdmap-pool pg_num
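If you want to see all of the pool's settings at once (size, pg_num, crush rule, and so on), there is also a detailed listing:

# show every pool with its full settings
sudo ceph osd pool ls detail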

2. Create a 500M File

dd if=/dev/zero of=bigfile bs=500M count=1
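Before uploading, you can confirm the file came out at the expected size:

# should report a 500M file
ls -lh bigfile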

3. Put the File into the Ceph Pool

sudo rados -p osdmap-pool put bigobject bigfile

You might see an error like this:

error putting osdmap-pool/bigobject: (27) File too large
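Errno 27 is EFBIG ("File too large"). To confirm which limit is being hit, you can query the current value of osd_max_object_size (the config get subcommand assumes a reasonably recent cluster, Mimic or newer):

# show the configured maximum object size (128M by default)
sudo ceph config get osd osd_max_object_size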

4. Modify the Ceph Configuration

By default, Ceph can only store a single object of up to 128 MiB (the osd_max_object_size setting). To make it accept our 500M file, raise the limit:

sudo ceph config set global osd_max_object_size 600M
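To check that a running OSD daemon has picked up the new value, you can also ask one directly (osd.0 below is just an example daemon id; use any OSD in your cluster):

# show the value currently in effect on one OSD
sudo ceph config show osd.0 osd_max_object_size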

5. Put It Again

sudo rados -p osdmap-pool put bigobject bigfile

6. Verify It

sudo rados -p osdmap-pool ls
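rados can also report the metadata of a single object, which is a quick way to double-check that the full 500M made it in:

# print the object's size and modification time
sudo rados -p osdmap-pool stat bigobject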

7. Show Where the Object Is Stored in the OSDs

sudo ceph osd map osdmap-pool bigobject

Example output

osdmap e1805 pool 'osdmap-pool' (5) object 'bigobject' -> pg 5.f71d4b98 (5.18) -> up ([2,5,1], p2) acting ([2,5,1], p2)

We have now created the pool osdmap-pool and added the object bigobject to it. Observe the output above; it contains a lot of information:

  1. the OSD map epoch is e1805
  2. the pool name is osdmap-pool
  3. the pool id is 5
  4. the object name we queried is bigobject
  5. the placement group this object belongs to is 5.18 (5.f71d4b98 is the raw pool.hash value, which is masked by pg_num to give the actual PG; you can also query this mapping directly, as shown after this list)
  6. our osdmap-pool has its replication level set to 3, so every object in this pool has 3 copies on different OSDs; here our object's copies reside on OSD.2, OSD.5, and OSD.1, with OSD.2 as the primary (p2) in both the up and acting sets
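As mentioned in point 5, the same PG-to-OSD mapping can be queried directly from the PG id, without going through an object name:

# map a placement group to its OSDs
sudo ceph pg map 5.18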

To see the pool's replication size, run the command below:

sudo ceph osd pool get osdmap-pool size
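The matching set subcommand changes the replication level in the same way; this is just a sketch, since 3 is already the default here:

# change the number of replicas kept for each object
sudo ceph osd pool set osdmap-pool size 3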