Bug #53623

mds: LogSegment will only save one ESubtreeMap event if the ESubtreeMap event size is large enough.

Added by Xiubo Li over 2 years ago. Updated over 2 years ago.

Status:
Fix Under Review
Priority:
Normal
Assignee:
Category:
-
Target version:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
fs
Component(FS):
MDS
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

 8614 2021-12-16T10:02:42.596+0800 14c148b75700  5 mds.0.log _submit_thread 2829812324~1725281 : ESubtreeMap 1653 subtrees , 1651 ambiguous [metablob 0x1, 1973 dirs]
 8615 2021-12-16T10:02:42.597+0800 14c148b75700  5 mds.0.log _submit_thread 2831537625~520 : EImportStart 0x2000000000f from mds.1 [metablob 0x2000000000f, 1 dirs]
 8616 2021-12-16T10:02:42.624+0800 14c148b75700  5 mds.0.log _submit_thread 2831538165~1724401 : ESubtreeMap 1652 subtrees , 1650 ambiguous [metablob 0x1, 1972 dirs]

An `ESubtreeMap` event is inserted into each new LogSegment when it is started. The above logs were reproduced by creating 2000 directories with:

# ceph fs set a max_mds 2
# mkdir lxb; setfattr -n ceph.dir.pin -v 1 lxb
# for i in {0..2000}; do mkdir lxb/dir2_$i; setfattr -n ceph.dir.pin -v 1 lxb/dir2_$i; done
# ceph fs set a max_mds 1
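
As a rough sketch of how the resulting segment churn can be checked (assuming the active daemon is named mds.a and writes to the default log path, and that debug_mds is at least 10 so the _prepare_new_segment lines quoted below get logged):

# ceph config set mds debug_mds 10
# grep -c '_prepare_new_segment' /var/log/ceph/ceph-mds.a.log    # segments started (mds.a is just an example daemon name)
# grep -c 'ESubtreeMap' /var/log/ceph/ceph-mds.a.log             # roughly one ESubtreeMap per new segment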

If I create more directories, the size of the `ESubtreeMap` event will grow beyond 4MB, which means each LogSegment will contain only the `ESubtreeMap` event:

256350 2021-12-16T09:56:39.804+0800 14c14a582700 10 mds.0.log submit_entry also starting new segment: last = 2055/18446744073709551615, event seq = 2056                                                                                                                                           
256351 2021-12-16T09:56:39.804+0800 14c14a582700  7 mds.0.log _prepare_new_segment seq 2057
...
256500 2021-12-16T09:56:39.810+0800 14c14a582700 10 mds.0.log submit_entry also starting new segment: last = 2057/18446744073709551615, event seq = 2058                                                                                                                                           
256501 2021-12-16T09:56:39.810+0800 14c14a582700  7 mds.0.log _prepare_new_segment seq 2059
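
A back-of-the-envelope check based on the _submit_thread lines above (the 4MB figure is assumed to be the default 4MB journal object size): 1725281 bytes for 1653 subtrees is roughly 1044 bytes per subtree, so the event alone crosses 4MB at roughly 4000 pinned subtrees.

# echo $(( 1725281 / 1653 ))            # ~1044 bytes per subtree, from the log line above
# echo $(( 4 * 1024 * 1024 / 1044 ))    # ~4017 subtrees before a single ESubtreeMap exceeds 4MB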


Related issues: 1 (1 open, 0 closed)

Related to CephFS - Bug #53542: Ceph Metadata Pool disk throughput usage increasing (Fix Under Review, Xiubo Li)

#1

Updated by Xiubo Li over 2 years ago

  • Related to Bug #53542: Ceph Metadata Pool disk throughput usage increasing added
#2

Updated by Xiubo Li over 2 years ago

  • Status changed from New to Fix Under Review
  • Assignee set to Xiubo Li
  • Target version set to v17.0.0
  • Pull request ID set to 44180
  • ceph-qa-suite fs added
  • Component(FS) MDS added