Bug #43297 (closed)

StupidAllocator.cc: 265: FAILED assert(intervals <= max_intervals)

Added by Lei Liu over 4 years ago. Updated over 4 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source:
Tags:
Backport: mimic, luminous
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

System info

# ceph -v
ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)

# uname -r
3.10.0-693.2.2.el7.x86_64

Crash debug log

    -7> 2019-12-08 21:13:09.581833 7fd13a6ead80  0 osd.48 230132 crush map has features 432629239337189376 was 8705, adjusting msgr requires for mons
    -6> 2019-12-08 21:13:09.581844 7fd13a6ead80  0 osd.48 230132 crush map has features 1009089991640629248, adjusting msgr requires for osds
    -5> 2019-12-08 21:13:13.828033 7fd13a6ead80  0 osd.48 230132 load_pgs
    -4> 2019-12-08 21:13:13.906883 7fd13a6ead80  5 osd.48 pg_epoch: 230103 pg[8.64(unlocked)] enter Initial
    -3> 2019-12-08 21:13:13.958354 7fd13a6ead80  5 osd.48 pg_epoch: 230103 pg[8.64( v 132670'11636 (129594'10100,132670'11636] local-lis/les=229814/229815 n=85 ec=436/436 lis/c 229814/227154 les/c/f 229815/227155/230062 229813/229814/227154) [48,89,261]/[48,89,235] r=0 lpr=0 pi=[227154,229814)/1 crt=132670'11636 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.051508 0 0.000000
    -2> 2019-12-08 21:13:13.958406 7fd13a6ead80  5 osd.48 pg_epoch: 230103 pg[8.64( v 132670'11636 (129594'10100,132670'11636] local-lis/les=229814/229815 n=85 ec=436/436 lis/c 229814/227154 les/c/f 229815/227155/230062 229813/229814/227154) [48,89,261]/[48,89,235] r=0 lpr=0 pi=[227154,229814)/1 crt=132670'11636 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Reset
    -1> 2019-12-08 21:13:13.958431 7fd13a6ead80  5 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 4294967295'18446744073709551615, trimmed: , trimmed_dups: , clear_divergent_priors: 0
     0> 2019-12-08 21:13:13.983454 7fd1280d4700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/os/bluestore/StupidAllocator.cc: In function 'virtual double StupidAllocator::get_fragmentation(uint64_t)' thread 7fd1280d4700 time 2019-12-08 21:13:13.977904
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/os/bluestore/StupidAllocator.cc: 265: FAILED assert(intervals <= max_intervals)

 ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x55db29778ba0]
 2: (StupidAllocator::get_fragmentation(unsigned long)+0x3bc) [0x55db29720cec]
 3: (BlueStore::_txc_finish(BlueStore::TransContext*)+0x657) [0x55db2961b307]
 4: (BlueStore::_txc_state_proc(BlueStore::TransContext*)+0x216) [0x55db2961bb56]
 5: (BlueStore::_kv_finalize_thread()+0x630) [0x55db2961d4f0]
 6: (BlueStore::KVFinalizeThread::entry()+0xd) [0x55db2966508d]
 7: (()+0x7e25) [0x7fd137995e25]
 8: (clone()+0x6d) [0x7fd136a85bad]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
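
For readers unfamiliar with the allocator internals, the assert appears to be a sanity bound: the number of free intervals the allocator tracks should never exceed the total free bytes divided by the allocation unit. Below is a minimal standalone sketch (my own illustration with made-up names and sizes, not actual Ceph code) of how free extents that are not multiples of the allocation unit can break that bound on a fragmented OSD.

// Sketch: why `intervals <= max_intervals` can fail when free extents are
// not multiples of the allocation unit. All names and sizes are illustrative.
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
  const uint64_t alloc_unit = 65536;                  // e.g. a 64 KiB allocation unit

  // Two free extents of 40 KiB each, as can occur on a fragmented OSD.
  std::vector<uint64_t> free_extents = {40960, 40960};

  uint64_t num_free = 0;
  for (uint64_t len : free_extents) num_free += len;  // 80 KiB of free space

  uint64_t intervals = free_extents.size();           // 2 free intervals
  uint64_t max_intervals = num_free / alloc_unit;     // floor(80 KiB / 64 KiB) = 1

  std::printf("intervals=%llu max_intervals=%llu\n",
              (unsigned long long)intervals, (unsigned long long)max_intervals);
  assert(intervals <= max_intervals);                 // fires: 2 > 1
  return 0;
}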

Related issues (0 open, 2 closed)

  • Copied to bluestore - Backport #43577: luminous: StupidAllocator.cc: 265: FAILED assert(intervals <= max_intervals) (Resolved)
  • Copied to bluestore - Backport #43578: mimic: StupidAllocator.cc: 265: FAILED assert(intervals <= max_intervals) (Resolved, assigned to Igor Fedotov)
#1

Updated by Lei Liu over 4 years ago

Not a kernel client issue; this should be bluestore-related.

#2

Updated by Fabien Brachere over 4 years ago

Hi,
same problem here on my Ceph cluster with 12-OSD nodes:

2020-01-05 12:43:35.446628 7fc4e5984700  0 log_channel(cluster) log [WRN] : slow request 36.048318 seconds old, received at 2020-01-05 12:42:59.398161: osd_op(client.673965751.0:28654977 12.332 12.122b4f32 (undecoded) ondisk+write+known_if_redirected e266434) currently queued_for_pg
2020-01-05 12:43:35.446634 7fc4e5984700  0 log_channel(cluster) log [WRN] : slow request 36.032513 seconds old, received at 2020-01-05 12:42:59.413965: osd_repop(client.673965751.0:28655061 12.3b3 e266434/265593) currently queued_for_pg
2020-01-05 12:43:35.446637 7fc4e5984700  0 log_channel(cluster) log [WRN] : slow request 34.269280 seconds old, received at 2020-01-05 12:43:01.177199: osd_repop(client.672877169.0:38544060 11.26d e266434/265592) currently queued_for_pg
2020-01-05 12:43:36.298585 7fc4d095a700 -1 bluestore(/var/lib/ceph/osd/ceph-85) _do_read bdev-read failed: (5) Input/output error
2020-01-05 12:43:36.310446 7fc4d095a700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/os/bluestore/BlueStore.cc: In function 'void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStore::CollectionRef&, BlueStore::OnodeRef, uint64_t, uint64_t, ceph::buffer::list::iterator&, BlueStore::WriteContext*)' thread 7fc4d095a700 time 2020-01-05 12:43:36.298657
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/os/bluestore/BlueStore.cc: 9982: FAILED assert(r >= 0 && r <= (int)tail_read)

 ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x55f811d9cba0]
 2: (BlueStore::_do_write_small(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>, unsigned long, unsigned long, ceph::buffer::list::iterator&, BlueStore::WriteContext*)+0x30b3) [0x55f811c664d3]
 3: (BlueStore::_do_write_data(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>, unsigned long, unsigned long, ceph::buffer::list&, BlueStore::WriteContext*)+0x22a) [0x55f811c6671a]
 4: (BlueStore::_do_write(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>, unsigned long, unsigned long, ceph::buffer::list&, unsigned int)+0x45f) [0x55f811c6730f]
 5: (BlueStore::_write(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>&, unsigned long, unsigned long, ceph::buffer::list&, unsigned int)+0x109) [0x55f811c67f99]
 6: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)+0x192c) [0x55f811c6b52c]
 7: (BlueStore::queue_transactions(ObjectStore::Sequencer*, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x3a0) [0x55f811c6c0d0]
 8: (PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x65) [0x55f8119d0185]
 9: (ReplicatedBackend::submit_transaction(hobject_t const&, object_stat_sum_t const&, eversion_t const&, std::unique_ptr<PGTransaction, std::default_delete<PGTransaction> >&&, eversion_t const&, eversion_t const&, std::vector<pg_log_entry_t, std::allocator<pg_log_entry_t> > const&, boost::optional<pg_hit_set_history_t>&, Context*, Context*, Context*, unsigned long, osd_reqid_t, boost::intrusive_ptr<OpRequest>)+0xd06) [0x55f811ae6296]
 10: (PrimaryLogPG::issue_repop(PrimaryLogPG::RepGather*, PrimaryLogPG::OpContext*)+0x9f1) [0x55f81195e371]
 11: (PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)+0x147a) [0x55f8119afcda]
 12: (PrimaryLogPG::do_op(boost::intrusive_ptr<OpRequest>&)+0x3147) [0x55f8119b3997]
 13: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0xebb) [0x55f81197095b]
 14: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x3f9) [0x55f8117eb619]
 15: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest> const&)+0x57) [0x55f811a7b2e7]
 16: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xfce) [0x55f81181a8ce]
 17: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x839) [0x55f811da26b9]
 18: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55f811da4650]
 19: (()+0x7e65) [0x7fc4eca3fe65]
 20: (clone()+0x6d) [0x7fc4ebb2f88d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---

The problem began after a disk crashed; the cluster rebalanced the data and became nearly full (85% on some OSDs).
Three servers were unable to start any OSDs.
I recompiled Ceph v12.2.12 with this patch and the OSDs were able to start again.
Unfortunately, the patch has not been backported to luminous...
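
For what it's worth, the "_do_read bdev-read failed: (5) Input/output error" line in the log above suggests the device itself returned EIO, and the assert r >= 0 && r <= (int)tail_read then fires on the negative return value; note that this is a different assert from the one in the ticket description (as Igor points out below). A minimal sketch of that failure shape (the stubbed reader and sizes are my assumptions, not BlueStore code):

// Sketch of the failure shape in the backtrace above (assumed semantics,
// not actual BlueStore code): a block-device read returning -EIO trips a
// sanity assert of the form r >= 0 && r <= expected_length.
#include <cassert>
#include <cerrno>
#include <cstdio>

// Hypothetical stub standing in for the failing device read.
int bdev_read_stub(char* /*buf*/, int /*len*/) {
  return -EIO;  // simulates the "(5) Input/output error" seen in the log
}

int main() {
  char buf[4096];
  const int tail_read = sizeof(buf);
  int r = bdev_read_stub(buf, tail_read);
  std::printf("read returned %d\n", r);  // prints -5
  assert(r >= 0 && r <= tail_read);      // fires on the EIO path
  return 0;
}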

#3

Updated by Lei Liu over 4 years ago

Fabien Brachere wrote:

Hi,
same problem here on my Ceph cluster with 12-OSD nodes:
[...]

The problem began after a disk crashed; the cluster rebalanced the data and became nearly full (85% on some OSDs).
Three servers were unable to start any OSDs.
I recompiled Ceph v12.2.12 with this patch and the OSDs were able to start again.
Unfortunately, the patch has not been backported to luminous...

Yes, I encountered the same cluster situation as you, and I created a pull request for luminous.

See https://github.com/ceph/ceph/pull/32523
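
To make the arithmetic behind a fix concrete (my own analysis of the assert's bound, not a summary of that PR's diff): the divisor used for max_intervals must be no larger than the smallest possible free interval. If every free extent is a positive multiple of the device block size, dividing the free-byte total by the block size yields a valid upper bound on the interval count, while dividing by a larger allocation unit does not.

// Illustration of a sound vs. unsound upper bound on the interval count.
// Assumes every free extent is a positive multiple of block_size; all
// names and sizes here are mine, not BlueStore's.
#include <cstdint>
#include <cstdio>
#include <vector>

uint64_t bound_by_unit(uint64_t num_free, uint64_t unit) {
  return num_free / unit;  // undercounts if extents can be smaller than `unit`
}

int main() {
  const uint64_t block_size = 4096;    // granularity at which free space is tracked
  const uint64_t alloc_unit = 65536;   // larger allocation unit
  std::vector<uint64_t> extents = {8192, 4096, 12288};

  uint64_t num_free = 0;
  for (uint64_t e : extents) num_free += e;  // 24 KiB total

  uint64_t intervals = extents.size();       // 3
  // Sound: each extent contributes at least one block, so
  // num_free / block_size >= intervals here (6 >= 3).
  bool sound = intervals <= bound_by_unit(num_free, block_size);
  // Unsound: floor(24 KiB / 64 KiB) = 0 < 3 -- the shape of the failed assert.
  bool unsound_holds = intervals <= bound_by_unit(num_free, alloc_unit);
  std::printf("sound bound holds: %d, unsound bound holds: %d\n",
              (int)sound, (int)unsound_holds);
  return (sound && !unsound_holds) ? 0 : 1;  // exits 0: the divisor choice matters
}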

#4

Updated by Ilya Dryomov over 4 years ago

  • Project changed from Linux kernel client to bluestore
#5

Updated by Igor Fedotov over 4 years ago

  • Backport set to mimic
  • Pull request ID set to 32523
#6

Updated by Igor Fedotov over 4 years ago

@Fabien - to be honest, I don't understand how the mentioned patch could fix your issue. Unlike Lei Liu's backtrace, yours contains no reference to the get_fragmentation method, and the assertion expressions don't match either.

#7

Updated by Fabien Brachere over 4 years ago

Igor Fedotov wrote:

@Fabien - to be honest, I don't understand how the mentioned patch could fix your issue. Unlike Lei Liu's backtrace, yours contains no reference to the get_fragmentation method, and the assertion expressions don't match either.

@Igor Fedotov - Sorry, you are right, those were the wrong logs.
Here are the correct logs:

2020-01-05 15:52:22.312736 7efdb2933700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1578235942312729, "job": 78990, "event": "table_file_deletion", "file_number": 3557956}
2020-01-05 15:52:22.312759 7efdb2933700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1578235942312754, "job": 78990, "event": "table_file_deletion", "file_number": 3557954}
2020-01-05 15:52:22.312798 7efdb2933700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1578235942312795, "job": 78990, "event": "table_file_deletion", "file_number": 3557952}
2020-01-05 15:52:22.312823 7efdb2933700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1578235942312820, "job": 78990, "event": "table_file_deletion", "file_number": 3557950}
2020-01-05 15:52:22.312846 7efdb2933700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1578235942312841, "job": 78990, "event": "table_file_deletion", "file_number": 3557925}
2020-01-05 15:52:22.312860 7efdb2933700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1578235942312858, "job": 78990, "event": "table_file_deletion", "file_number": 3557924}
2020-01-05 15:52:22.312876 7efdb2933700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1578235942312872, "job": 78990, "event": "table_file_deletion", "file_number": 3557923}
2020-01-05 15:52:22.312895 7efdb2933700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1578235942312891, "job": 78990, "event": "table_file_deletion", "file_number": 3557922}
2020-01-05 15:53:29.642965 7efdba142700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/os/bluestore/StupidAllocator.cc: In function 'virtual double StupidAllocator::get_fragmentation(uint64_t)' thread 7efdba142700 time 2020-01-05 15:53:29.638099
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/os/bluestore/StupidAllocator.cc: 265: FAILED assert(intervals <= max_intervals)

 ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x5556dfe1fba0]
 2: (StupidAllocator::get_fragmentation(unsigned long)+0x3bc) [0x5556dfdc7cec]
 3: (BlueStore::_txc_finish(BlueStore::TransContext*)+0x657) [0x5556dfcc2307]
 4: (BlueStore::_txc_state_proc(BlueStore::TransContext*)+0x216) [0x5556dfcc2b56]
 5: (BlueStore::_kv_finalize_thread()+0x630) [0x5556dfcc44f0]
 6: (BlueStore::KVFinalizeThread::entry()+0xd) [0x5556dfd0c08d]
 7: (()+0x7e65) [0x7efdc3a02e65]
 8: (clone()+0x6d) [0x7efdc2af288d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

#8

Updated by Nathan Cutler over 4 years ago

  • Pull request ID changed from 32523 to 32606
#9

Updated by Nathan Cutler over 4 years ago

  • Status changed from New to Pending Backport
  • Backport changed from mimic to mimic, luminous
#10

Updated by Nathan Cutler over 4 years ago

  • Copied to Backport #43577: luminous: StupidAllocator.cc: 265: FAILED assert(intervals <= max_intervals) added
#11

Updated by Nathan Cutler over 4 years ago

  • Copied to Backport #43578: mimic: StupidAllocator.cc: 265: FAILED assert(intervals <= max_intervals) added
#12

Updated by Nathan Cutler over 4 years ago

  • Pull request ID changed from 32606 to 23606
#13

Updated by Nathan Cutler over 4 years ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
