Bug #44660


Multipart re-uploads cause orphan data

Added by Chris Jones about 4 years ago. Updated 4 months ago.

Status:
Pending Backport
Priority:
Normal
Assignee:
-
Target version:
% Done:

0%

Source:
Tags:
multipart gc backport_processed
Backport:
pacific quincy reef
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

The impact of this issue is the accumulation of large amounts (hundreds of TB) of orphan data, which we are unable to clean up: the orphans find tool is impractical to run on a 2PB cluster because of the time required, and it also has a memory leak. After several weeks of running, the tool consumes up to 1TB of RAM and then terminates with an out-of-memory error.

For background: this issue is reproducible on ANY known version of Ceph from Jewel onward, including recent versions of Nautilus that we tested a few months ago.

It is related to the following item, which was opened around 3 years ago, but it has not received any attention in quite some time.

https://tracker.ceph.com/issues/16767

In a nutshell, the one condition under which this is certain to occur is re-uploading one or more parts of a multipart upload (with the same upload ID) before completing it, as documented in the issue above.

There is an attached bash script (ceph-leaked-mp-populater.sh) that should reproduce the issue on any Ceph version from Jewel onward, including the ceph-daemon docker containers.
The results below were captured after running the ceph-leaked-mp-populater.sh script.
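
For orientation, the core of the reproduction is just a standard S3 multipart upload in which every part is uploaded twice with the same upload ID before completion. The following is only a minimal sketch of that sequence using the AWS CLI (the endpoint URL, credentials, bucket and file names are placeholders); the attached ceph-leaked-mp-populater.sh remains the authoritative reproducer.

#!/usr/bin/env bash
# Sketch only: re-upload every part of a multipart upload with the SAME upload
# ID before completing it. Endpoint/bucket/key are placeholders; the bucket is
# assumed to already exist and the AWS CLI to be configured for the RGW user.
set -euo pipefail

ENDPOINT=http://rgw09:8080   # placeholder RGW endpoint
BUCKET=mybucket
KEY=mymp1

dd if=/dev/urandom of=part.bin bs=1M count=5   # one 5MB part, reused for all five parts

UPLOAD_ID=$(aws --endpoint-url "$ENDPOINT" s3api create-multipart-upload \
  --bucket "$BUCKET" --key "$KEY" --query UploadId --output text)

PARTS='{"Parts":['
for n in 1 2 3 4 5; do
  # First upload of the part...
  aws --endpoint-url "$ENDPOINT" s3api upload-part --bucket "$BUCKET" --key "$KEY" \
    --part-number "$n" --upload-id "$UPLOAD_ID" --body part.bin > /dev/null
  # ...then re-upload the same part number with the same upload ID; the first
  # copy is what ends up orphaned in the data pool.
  ETAG=$(aws --endpoint-url "$ENDPOINT" s3api upload-part --bucket "$BUCKET" --key "$KEY" \
    --part-number "$n" --upload-id "$UPLOAD_ID" --body part.bin --query ETag --output text)
  PARTS+="{\"PartNumber\":$n,\"ETag\":$ETAG},"
done
PARTS="${PARTS%,}]}"

aws --endpoint-url "$ENDPOINT" s3api complete-multipart-upload --bucket "$BUCKET" \
  --key "$KEY" --upload-id "$UPLOAD_ID" --multipart-upload "$PARTS"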

  1. radosgw-admin bucket stats
    NOTES:
    This bucket was empty prior to uploading ONE multipart file of 25MB, consisting of 5 parts of 5MB each, with each of the 5 parts re-uploaded using the same upload ID before completing the multipart upload.
    This bucket should contain only ONE object (the completed file) and should be only 25MB in size; however, it also reflects the leaked parts and shows an additional 25MB.

root@rgw09:~# radosgw-admin bucket stats -b mybucket {
"bucket": "mybucket",
"zonegroup": "1d0e4456-f8d2-4e4c-abc1-1db8e834507d",
"placement_rule": "default-placement",
"explicit_placement": {
"data_pool": "",
"data_extra_pool": "",
"index_pool": ""
},
"id": "3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2",
"marker": "3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2",
"index_type": "Normal",
"owner": "admin",
"ver": "0#1,1#1,2#1,3#1,4#27,5#1,6#1,7#1,8#1,9#1,10#1,11#1,12#1",
"master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0,11#0,12#0",
"mtime": "2020-03-17 14:35:44.895102",
"max_marker": "0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#,11#,12#",
"usage": {
"rgw.main": {
"size": 52428800,
"size_actual": 52428800,
"size_utilized": 52428800,
"size_kb": 51200,
"size_kb_actual": 51200,
"size_kb_utilized": 51200,
"num_objects": 6
},
"rgw.multimeta": {
"size": 0,
"size_actual": 0,
"size_utilized": 0,
"size_kb": 0,
"size_kb_actual": 0,
"size_kb_utilized": 0,
"num_objects": 0
}
},
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
}
}
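
As a quick check (assuming jq is available), the doubled accounting can be pulled straight out of the stats above: rgw.main reports 52428800 bytes and 6 objects where 26214400 bytes and 1 object are expected.

# Extract the inflated usage figures from the bucket stats
radosgw-admin bucket stats --bucket=mybucket | jq '.usage."rgw.main" | {size, num_objects}'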

  2. radosgw-admin bucket list
    NOTE:
    This output illustrates that invalid objects remain in the bucket after the multipart upload is completed with re-uploaded parts.
    The objects with the "_multipart" prefix are entries in the index pool that are added when the parts are re-uploaded. Note that the SAME UPLOAD ID was used for the re-upload; however, the created index entry reflects a new/different upload ID for each of the parts. This is the source of the incorrect bucket stats.
    There is a correlating set of objects in the data pool representing the ORIGINALLY UPLOADED PARTS (with the ORIGINAL UPLOAD ID) that are not removed. This is the source of the orphan data in the data pool.
    To restate:
    The index pool contains entries with A DIFFERENT UPLOAD ID than the one actually used to upload the object, and these are not cleaned up on completion of the upload.
    The data pool retains the ORIGINAL parts from the first uploads of those parts, and these are not deleted when the multipart upload is completed.

root@rgw09:~# radosgw-admin bucket list -b mybucket
[ {
"name": "_multipart_mymp1.7CZU8PpPYSF8lfuu6iyVXkMqHGowZ_v.3",
"instance": "",
"ver": {
"pool": 24,
"epoch": 537
},
"locator": "",
"exists": "true",
"meta": {
"category": 1,
"size": 5242880,
"mtime": "2020-03-17 14:36:16.286084Z",
"etag": "9e0372262b7b4b72a47e019b4bd1c890",
"owner": "admin",
"owner_display_name": "administrator",
"content_type": "",
"accounted_size": 5242880,
"user_data": ""
},
"tag": "_yD_noQmPhQ_mHaPhTpsC2A2H6czMaCo",
"flags": 0,
"pending_map": [],
"versioned_epoch": 0
}, {
"name": "_multipart_mymp1.7QV1hCCj4Zx12G6Cz8CmaQGx4_Uvd97.5",
"instance": "",
"ver": {
"pool": 24,
"epoch": 739
},
"locator": "",
"exists": "true",
"meta": {
"category": 1,
"size": 5242880,
"mtime": "2020-03-17 14:36:24.108514Z",
"etag": "9572a6537c6d5e14edffa1b1e6a34b72",
"owner": "admin",
"owner_display_name": "administrator",
"content_type": "",
"accounted_size": 5242880,
"user_data": ""
},
"tag": "_rMiNuSVe5_8hsMB-IQ5O9ecU25BXgVX",
"flags": 0,
"pending_map": [],
"versioned_epoch": 0
}, {
"name": "_multipart_mymp1.XGyHz_JrQaqlwhDIHfLgfcY7ms7ppe4.4",
"instance": "",
"ver": {
"pool": 24,
"epoch": 542
},
"locator": "",
"exists": "true",
"meta": {
"category": 1,
"size": 5242880,
"mtime": "2020-03-17 14:36:20.180951Z",
"etag": "30a9025baa371b42241a453d183f98d2",
"owner": "admin",
"owner_display_name": "administrator",
"content_type": "",
"accounted_size": 5242880,
"user_data": ""
},
"tag": "_HirjTlD2ccPbmMru_zQ1kDW2KpyFUX1",
"flags": 0,
"pending_map": [],
"versioned_epoch": 0
}, {
"name": "_multipart_mymp1.Yf--dgWYbAirt3Vm1RblGLOdZe8__bZ.1",
"instance": "",
"ver": {
"pool": 24,
"epoch": 234
},
"locator": "",
"exists": "true",
"meta": {
"category": 1,
"size": 5242880,
"mtime": "2020-03-17 14:36:08.470327Z",
"etag": "bbae3bd3fa90f2df80d29eccf57635bc",
"owner": "admin",
"owner_display_name": "administrator",
"content_type": "",
"accounted_size": 5242880,
"user_data": ""
},
"tag": "_SynsNOIYTQOvg7czESjjJhqSt9Mp2um",
"flags": 0,
"pending_map": [],
"versioned_epoch": 0
}, {
"name": "_multipart_mymp1.iNL-YL9K92asvy2L6OX0QdNNncTvnW6.2",
"instance": "",
"ver": {
"pool": 24,
"epoch": 212
},
"locator": "",
"exists": "true",
"meta": {
"category": 1,
"size": 5242880,
"mtime": "2020-03-17 14:36:12.382395Z",
"etag": "8de31bf63197b779138d4eb661b3047d",
"owner": "admin",
"owner_display_name": "administrator",
"content_type": "",
"accounted_size": 5242880,
"user_data": ""
},
"tag": "_NIk9FcgAcX4sIV2Jpe949wztMNKwAEL",
"flags": 0,
"pending_map": [],
"versioned_epoch": 0
}, {
"name": "mymp1",
"instance": "",
"ver": {
"pool": 24,
"epoch": 223
},
"locator": "",
"exists": "true",
"meta": {
"category": 1,
"size": 26214400,
"mtime": "2020-03-17 14:36:25.239287Z",
"etag": "37f7055208439684f087af5d7746ccad-5",
"owner": "admin",
"owner_display_name": "administrator",
"content_type": "",
"accounted_size": 26214400,
"user_data": ""
},
"tag": "3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.6857325",
"flags": 0,
"pending_map": [],
"versioned_epoch": 0
}
]
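
The stray index entries can be isolated from the listing above with a simple filter (jq assumed); after a clean multipart completion only "mymp1" itself should be listed.

# Index entries left behind after the completed multipart upload
radosgw-admin bucket list --bucket=mybucket | jq -r '.[].name' | grep '^_multipart_'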

  3. Showing that all of the original (should-have-been-replaced) parts are still in the data pool and will never be removed by Ceph. These are the objects that are incorrectly bloating our data pool.
    In the following listing, the data pool objects that carry the ORIGINAL UPLOAD ID are the invalid pieces. These are the originally uploaded parts of the multipart upload, which were replaced by the items having the different/unique upload IDs.

The original invalid objects are noted by me in the output below:

root@rgw09:~# rados -p default.rgw.buckets.data ls | grep '3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2' | sort
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2_mymp1
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.7CZU8PpPYSF8lfuu6iyVXkMqHGowZ_v.3 <--- These are the replacement parts that are now the valid ones
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.7QV1hCCj4Zx12G6Cz8CmaQGx4_Uvd97.5 <--- These are the replacement parts that are now the valid ones
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.iNL-YL9K92asvy2L6OX0QdNNncTvnW6.2 <--- These are the replacement parts that are now the valid ones
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.XGyHz_JrQaqlwhDIHfLgfcY7ms7ppe4.4 <--- These are the replacement parts that are now the valid ones
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.Yf--dgWYbAirt3Vm1RblGLOdZe8__bZ.1 <--- These are the replacement parts that are now the valid ones
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.7CZU8PpPYSF8lfuu6iyVXkMqHGowZ_v.3_1 <--- These are the replacement parts that are now the valid ones
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.7QV1hCCj4Zx12G6Cz8CmaQGx4_Uvd97.5_1 <--- These are the replacement parts that are now the valid ones
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.iNL-YL9K92asvy2L6OX0QdNNncTvnW6.2_1 <--- These are the replacement parts that are now the valid ones
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.XGyHz_JrQaqlwhDIHfLgfcY7ms7ppe4.4_1 <--- These are the replacement parts that are now the valid ones
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.Yf--dgWYbAirt3Vm1RblGLOdZe8__bZ.1_1 <--- These are the replacement parts that are now the valid ones
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.1 <--- original part that was replaced (now orphan data)
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.2 <--- original part that was replaced (now orphan data)
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.3 <--- original part that was replaced (now orphan data)
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.4 <--- original part that was replaced (now orphan data)
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.5 <--- original part that was replaced (now orphan data)
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.1_1 <--- original part that was replaced (now orphan data)
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.2_1 <--- original part that was replaced (now orphan data)
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.3_1 <--- original part that was replaced (now orphan data)
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.4_1 <--- original part that was replaced (now orphan data)
3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.5_1 <--- original part that was replaced (now orphan data)
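
In this reproduction the leaked copies are easy to pick out because their RADOS names still carry the original upload ID marker (the ".2~<upload id>" component), so a grep such as the following (adjust the bucket marker for your own bucket) lists exactly the orphaned pieces:

# Data pool objects still named with the original upload ID; these are the orphans
rados -p default.rgw.buckets.data ls | grep '3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__.*mymp1\.2~'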

  4. radosgw-admin object stat
    NOTE: You can see in the manifest section of the object stat output below that Ceph records the replacement parts as the valid ones (a rough cross-check of these manifest prefixes against the data pool is sketched after the orphans find output at the end of this list).
    root@rgw09:~# radosgw-admin object stat -b mybucket -o mymp1 {
    "name": "mymp1",
    "size": 26214400,
    "policy": {
    "acl": {
    "acl_user_map": [ {
    "user": "admin",
    "acl": 15
    }
    ],
    "acl_group_map": [],
    "grant_map": [ {
    "id": "admin",
    "grant": {
    "type": {
    "type": 0
    },
    "id": "admin",
    "email": "",
    "permission": {
    "flags": 15
    },
    "name": "administrator",
    "group": 0,
    "url_spec": ""
    }
    }
    ]
    },
    "owner": {
    "id": "admin",
    "display_name": "administrator"
    }
    },
    "etag": "37f7055208439684f087af5d7746ccad-5",
    "tag": "3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.6857325",
    "manifest": {
    "objs": [],
    "obj_size": 26214400,
    "explicit_objs": "false",
    "head_size": 0,
    "max_head_size": 0,
    "prefix": "mymp1.Yf--dgWYbAirt3Vm1RblGLOdZe8__bZ",
    "rules": [ {
    "key": 0,
    "val": {
    "start_part_num": 1,
    "start_ofs": 0,
    "part_size": 5242880,
    "stripe_max_size": 4194304,
    "override_prefix": ""
    }
    }, {
    "key": 5242880,
    "val": {
    "start_part_num": 2,
    "start_ofs": 5242880,
    "part_size": 5242880,
    "stripe_max_size": 4194304,
    "override_prefix": "mymp1.iNL-YL9K92asvy2L6OX0QdNNncTvnW6"
    }
    }, {
    "key": 10485760,
    "val": {
    "start_part_num": 3,
    "start_ofs": 10485760,
    "part_size": 5242880,
    "stripe_max_size": 4194304,
    "override_prefix": "mymp1.7CZU8PpPYSF8lfuu6iyVXkMqHGowZ_v"
    }
    }, {
    "key": 15728640,
    "val": {
    "start_part_num": 4,
    "start_ofs": 15728640,
    "part_size": 5242880,
    "stripe_max_size": 4194304,
    "override_prefix": "mymp1.XGyHz_JrQaqlwhDIHfLgfcY7ms7ppe4"
    }
    }, {
    "key": 20971520,
    "val": {
    "start_part_num": 5,
    "start_ofs": 20971520,
    "part_size": 5242880,
    "stripe_max_size": 4194304,
    "override_prefix": "mymp1.7QV1hCCj4Zx12G6Cz8CmaQGx4_Uvd97"
    }
    }
    ],
    "tail_instance": "",
    "tail_placement": {
    "bucket": {
    "name": "mybucket",
    "marker": "3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2",
    "bucket_id": "3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2",
    "tenant": "",
    "explicit_placement": {
    "data_pool": "",
    "data_extra_pool": "",
    "index_pool": ""
    }
    },
    "placement_rule": "default-placement"
    }
    },
    "attrs": {
    "user.rgw.pg_ver": "",
    "user.rgw.source_zone": ".!�\u0011",
    "user.rgw.tail_tag": "3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.6857325"
    }
    }
  5. radosgw-admin gc list (after running radosgw-admin gc process)
    root@rgw09:~# radosgw-admin gc list
    []
  6. radosgw-admin gc list --include-all (after running radosgw-admin gc process)
    []
  7. Showing that the invalid items still remain after garbage collection
    root@rgw09:~# rados -p default.rgw.buckets.data ls | grep '3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2' | sort
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.1
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.2
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.3
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.4
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.5
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.7CZU8PpPYSF8lfuu6iyVXkMqHGowZ_v.3
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.7QV1hCCj4Zx12G6Cz8CmaQGx4_Uvd97.5
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.iNL-YL9K92asvy2L6OX0QdNNncTvnW6.2
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.XGyHz_JrQaqlwhDIHfLgfcY7ms7ppe4.4
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.Yf--dgWYbAirt3Vm1RblGLOdZe8__bZ.1
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2_mymp1
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.1_1
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.2_1
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.3_1
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.4_1
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.5_1
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.7CZU8PpPYSF8lfuu6iyVXkMqHGowZ_v.3_1
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.7QV1hCCj4Zx12G6Cz8CmaQGx4_Uvd97.5_1
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.iNL-YL9K92asvy2L6OX0QdNNncTvnW6.2_1
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.XGyHz_JrQaqlwhDIHfLgfcY7ms7ppe4.4_1
    3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.Yf--dgWYbAirt3Vm1RblGLOdZe8__bZ.1_1
  8. Result of radosgw-admin orphans find
    NOTE:
    On a small cluster it is very fast and feasible to run without memory leak issues.
    On large clusters it takes literally MONTHS to run, and it must run without interruption or you lose a significant portion of your progress, particularly during the data pool dump phase.
    It resumes at checkpoints, but in some cases it will restart from the beginning.
    It also cannot complete successfully on large clusters due to a memory leak: on our 2PB cluster (total data pool size is approximately 1500TB, with around 800TB of valid data based on summing the individual bucket totals and around 700TB of suspected orphan data), running on a VM with 1TB of RAM (yes, 1TB), it runs out of memory after a few weeks of continuous running.
    Restarting just resumes from the checkpoint, and it continues to fail at about the same point on each restart.
    Note that (amongst other things) it identifies the original parts mentioned above as leaked data.
root@rgw09:~# radosgw-admin orphans find -p default.rgw.buckets.data --orphan-stale-secs 60 --job-id mybucket-2 --yes-i-really-mean-it
  • some logging omitted
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__multipart_mymp1.2~Z60OreS-MgiJXv4-9uCBsf5EWhuHPrr.1
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__multipart_mymp1.2~Z60OreS-MgiJXv4-9uCBsf5EWhuHPrr.2
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__multipart_mymp1.2~Z60OreS-MgiJXv4-9uCBsf5EWhuHPrr.3
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__multipart_mymp1.2~Z60OreS-MgiJXv4-9uCBsf5EWhuHPrr.4
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__multipart_mymp1.2~Z60OreS-MgiJXv4-9uCBsf5EWhuHPrr.5
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__shadow_mymp1.2~Z60OreS-MgiJXv4-9uCBsf5EWhuHPrr.1_1
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__shadow_mymp1.2~Z60OreS-MgiJXv4-9uCBsf5EWhuHPrr.2_1
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__shadow_mymp1.2~Z60OreS-MgiJXv4-9uCBsf5EWhuHPrr.3_1
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__shadow_mymp1.2~Z60OreS-MgiJXv4-9uCBsf5EWhuHPrr.4_1
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__shadow_mymp1.2~Z60OreS-MgiJXv4-9uCBsf5EWhuHPrr.5_1
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__multipart_mymp1.2~TpAvTseKBOv1aX1e1E8Tid19BMJUSKL.1
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__multipart_mymp1.2~TpAvTseKBOv1aX1e1E8Tid19BMJUSKL.2
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__multipart_mymp1.2~TpAvTseKBOv1aX1e1E8Tid19BMJUSKL.3
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__multipart_mymp1.2~TpAvTseKBOv1aX1e1E8Tid19BMJUSKL.4
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__multipart_mymp1.2~TpAvTseKBOv1aX1e1E8Tid19BMJUSKL.5
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__shadow_mymp1.2~TpAvTseKBOv1aX1e1E8Tid19BMJUSKL.1_1
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__shadow_mymp1.2~TpAvTseKBOv1aX1e1E8Tid19BMJUSKL.2_1
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__shadow_mymp1.2~TpAvTseKBOv1aX1e1E8Tid19BMJUSKL.3_1
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__shadow_mymp1.2~TpAvTseKBOv1aX1e1E8Tid19BMJUSKL.4_1
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.1__shadow_mymp1.2~TpAvTseKBOv1aX1e1E8Tid19BMJUSKL.5_1
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.1 <--- leaked original multipart part
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.2 <--- leaked original multipart part
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.3 <--- leaked original multipart part
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.4 <--- leaked original multipart part
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__multipart_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.5 <--- leaked original multipart part
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.1_1 <--- leaked original multipart part
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.2_1 <--- leaked original multipart part
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.3_1 <--- leaked original multipart part
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.4_1 <--- leaked original multipart part
    leaked: 3647aa76-2877-4c76-8ef6-c56377ee1ae1.6963188.2__shadow_mymp1.2~v7BGHFNfjFxvPEPYd-wwgT72aNwAdw6.5_1 <--- leaked original multipart part
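
As referenced in the object stat note above, the same conclusion can be reached per bucket by cross-checking the prefixes the manifest actually references against the multipart/shadow objects present in the data pool. The following is only a rough illustration of that reasoning for the single test object (it assumes jq is installed and uses the bucket/object names from this reproduction); it is not a substitute for orphans find and not a cleanup tool.

#!/usr/bin/env bash
# Sketch: flag multipart/shadow RADOS objects whose prefix is no longer
# referenced by the object's manifest. Illustrative only, for this one bucket
# and object; names below are taken from the reproduction above.
BUCKET=mybucket
OBJECT=mymp1
POOL=default.rgw.buckets.data

MARKER=$(radosgw-admin bucket stats --bucket="$BUCKET" | jq -r '.marker')

# Prefixes the completed object really uses: the manifest head prefix plus any
# per-part override prefixes (an empty override means "use the head prefix").
radosgw-admin object stat --bucket="$BUCKET" --object="$OBJECT" \
  | jq -r '[.manifest.prefix] + [.manifest.rules[].val.override_prefix] | .[] | select(. != "")' \
  | sort -u > /tmp/valid_prefixes

# Every multipart/shadow object belonging to this bucket instance whose prefix
# is not in the manifest is a candidate leaked object.
rados -p "$POOL" ls | grep "^${MARKER}__" | while read -r obj; do
  prefix=$(echo "$obj" \
    | sed -e "s/^${MARKER}__multipart_//" -e "s/^${MARKER}__shadow_//" \
          -e 's/\.[0-9][0-9]*_[0-9][0-9]*$//' -e 's/\.[0-9][0-9]*$//')
  grep -qxF "$prefix" /tmp/valid_prefixes || echo "candidate leaked object: $obj"
done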

Files

ceph-leaked-mp-populater.sh (3.08 KB) Chris Jones, 03/17/2020 06:20 PM

Related issues 4 (0 open, 4 closed)

Related to rgw - Bug #16767: RadosGW Multipart Cleanup Failure (Resolved)

Copied to rgw - Backport #59566: reef: Multipart re-uploads cause orphan data (Resolved)
Copied to rgw - Backport #59567: quincy: Multipart re-uploads cause orphan data (Duplicate)
Copied to rgw - Backport #59568: pacific: Multipart re-uploads cause orphan data (Rejected)