Bug #21809

closed

Raw Used space is 70x higher than actually used space (maybe orphaned objects from pool deletion)

Added by Yves Vogl over 6 years ago. Updated over 6 years ago.

Status:
Can't reproduce
Priority:
Normal
Assignee:
-
Target version:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi,

I had a pool named vm-0d29db27 which used roughly 3 TB of storage.
I wanted to purge this pool and adjust the PG count, so I deleted the entire pool.
After a few minutes I recreated the pool and checked the cluster usage:

GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    17472G     16731G         740G          4.24
POOLS:
    NAME            ID     USED      %USED     MAX AVAIL     OBJECTS
    vm-0d29db27      3     8317M      0.05         5149G        2086
    os               4     1259M         0         5149G         323
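
For reference, the pool was deleted and recreated with the usual commands, roughly like this (the pg_num shown is only illustrative, based on the current 128 PGs across the two pools):

ceph osd pool delete vm-0d29db27 vm-0d29db27 --yes-i-really-really-mean-it
ceph osd pool create vm-0d29db27 64 64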

As the output above shows, 740 GB are marked as raw used space, but in fact only about 10 GB are used by the pools.

It seems to me that the pool deletion was somehow aborted or did not fully clean up.
How can I free up this orphaned space now?
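
Is something like the following the right way to check whether BlueStore itself still considers the space allocated? (osd.0 is only an example id; the perf dump has to be run on the host where that OSD lives, and the grep is just a rough filter.)

ceph df detail
ceph osd df
rados df
ceph daemon osd.0 perf dump | grep -i bluestore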

I'm using:

CentOS Linux release 7.4.1708 (Core)
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)

I'm using BlueStore, with the WAL and RocksDB (block.db) on a separate SSD for each OSD disk.
No erasure coding is used.
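
For context, a BlueStore OSD with WAL and DB on a separate SSD is typically prepared roughly like this (device names are placeholders, not my actual devices):

ceph-disk prepare --bluestore --block.db /dev/nvme0n1 --block.wal /dev/nvme0n1 /dev/sdb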

Here is some more information:

Pools:
3 vm-0d29db27
4 os

Cluster:
  cluster:
    id:     ***
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum inf-d7a3ca,inf-30d985,inf-0a38f9
    mgr: inf-0a38f9(active), standbys: inf-d7a3ca, inf-30d985
    osd: 6 osds: 6 up, 6 in
    rbd-mirror: 1 daemon active

  data:
    pools:   2 pools, 128 pgs
    objects: 2409 objects, 9577 MB
    usage:   740 GB used, 16731 GB / 17472 GB avail
    pgs:     128 active+clean

OSD Usage:

ID CLASS WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE VAR  PGS
 1   hdd 2.91499  1.00000 2984G    196G 2788G  6.59 1.55  69
 2   hdd 2.91499  1.00000 2984G    195G 2789G  6.56 1.55  59
 4   hdd 2.81070  1.00000 2878G  91998M 2788G  3.12 0.74  67
 5   hdd 2.81070  1.00000 2878G  91325M 2789G  3.10 0.73  61
 0   hdd 2.80579  1.00000 2873G  86100M 2789G  2.93 0.69  56
 3   hdd 2.80579  1.00000 2873G  86986M 2788G  2.96 0.70  72
                    TOTAL 17472G   740G 16731G 4.24
MIN/MAX VAR: 0.69/1.55  STDDEV: 1.68

OSD Pool Mapping:

pool :    4    3   | SUM
--------------------------------
osd.4    34   33   |  67
osd.5    30   31   |  61
osd.0    26   30   |  56
osd.1    33   36   |  69
osd.2    31   28   |  59
osd.3    38   34   |  72
--------------------------------
SUM :   192  192   |
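
A per-pool PG count like the table above can be derived from ceph pg dump, e.g. with a rough one-liner along these lines (the pgs_brief column positions may differ between releases, so treat it as a sketch):

ceph pg dump pgs_brief 2>/dev/null \
  | awk '$1 ~ /^[0-9]+\./ {
      split($1, a, ".")            # pool id is the part of the pgid before the dot
      gsub(/\[|\]/, "", $3)        # strip brackets from the UP set, e.g. [1,4,0]
      n = split($3, up, ",")
      for (i = 1; i <= n; i++) count["osd." up[i] "  pool " a[1]]++
    }
    END { for (k in count) print k, count[k] }' \
  | sort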

Regards,
Yves
