# v15.2.2

* Feature #43689: cephadm: iscsi
* Feature #43839: enhance `host ls`
* Feature #44556: cephadm: preview drivegroups
* Bug #44598: cephadm: Traceback if Python 3 is not installed on remote host
* Feature #44599: cephadm: check-host: Returns only a single problem
* Bug #44602: cephadm: `orch ls` shows daemons as online, despite the host being down
* Bug #44669: cephadm: rm-cluster should clean up /etc/ceph
* Backport #44685: octopus: osd/osd-backfill-stats.sh TEST_backfill_out2: wait_for_clean timeout
* Backport #44712: octopus: mgr/dashboard: crush rule test suite is missing in API tests
* Bug #44769: cephadm doesn't reuse osd_id of 'destroyed' osds
* Backport #44800: octopus: mds: 'if there is lock cache on dir' check is buggy
* Backport #44819: octopus: perf regression due to bluefs_buffered_io=true
* Bug #44820: racey concurrent ceph-volume calls: KeyError: 'ceph.type'
* Bug #44832: cephadm: `ceph cephadm generate-key` fails with No such file or directory: '/tmp/...
* Backport #44834: octopus: mgr/dashboard: 'Prometheus / All Alerts' page shows progress bar
* Backport #44836: octopus: librados mon_command (mgr) command hang
* Backport #44837: octopus: mgr/dashboard: iSCSI CHAP messages should inform that numbers are allowed
* Backport #44839: octopus: [python] ensure image is open before permitting operations
* Backport #44842: octopus: nautilus: FAILED ceph_assert(head.version == 0 || e.version.version > head.version) in PGLog::IndexedLog::add()
* Backport #44843: octopus: LibCephFS::RecalledGetattr test failed
* Backport #44844: octopus: qa: test_config_session_timeout failed with incorrect options
* Backport #44892: octopus: mgr/dashboard: shorten `Container ID` and `Container image ID` in Services page
* Backport #44893: octopus: racey concurrent ceph-volume calls: KeyError: 'ceph.type'
* Backport #44895: octopus: pubsub checkpoint failures
* Backport #44897: octopus: librbd: No lockers are obtained, ImageNotFound exception will be output.
* Backport #44918: octopus: monitoring: alert for prediction of disk and pool fill up broken
* Bug #44934: cephadm RGW: scary remove-deploy loop
* Backport #44953: octopus: mgr/dashboard: Some Grafana panels in Host overview, Host details, OSD details etc. are displaying N/A or no data
* Backport #44955: octopus: monitoring: root volume full alert fires false positives
* Backport #44975: octopus: simple/scan.py: syntax problem in log statement
* Backport #44979: octopus: monitoring: Fix pool capacity incorrect
* Backport #44992: octopus: mgr/dashboard: Error: ViewDestroyedError: Attempt to use a destroyed view: detectChanges
* Backport #44996: octopus: mgr/dashboard: define SSO/SAML dependencies to packaging
* Backport #44999: octopus: batch: error on filtered devices in interactive only if usable data devices are present
* Backport #45003: octopus: batch filter_devices tries to access lvs when there are none
* Backport #45020: octopus: mgr/dashboard: standby mgr redirects to an IP address instead of a FQDN URL
* Backport #45028: octopus: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
* Backport #45034: octopus: RPM 4.15.1 has some issues with ceph.spec
* Backport #45036: octopus: [rbd-mirror] tx-only peer from heartbeat can race w/ CLI
* Backport #45039: octopus: mon: reset min_size when changing pool size
* Backport #45041: octopus: osd: incorrect read bytes stat in SPARSE_READ
* Backport #45042: octopus: mgr: exception in module serve thread does not log traceback
* Backport #45044: octopus: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
* Backport #45046: octopus: ceph-fuse: ceph::__ceph_abort(): ceph-fuse killed by SIGABRT in Client::_do_remount
* Backport #45047: octopus: [rbd-mirror] improved replication statistics
* Backport #45049: octopus: stale scrub status entry from a failed mds shows up in `ceph status`
* Backport #45051: octopus: mgr/dashboard: lint error on plugins/debug.py
* Backport #45052: octopus: RGW prefetches data for range requests
* Backport #45053: octopus: nautilus upgrade should recommend ceph-osd restarts after enabling msgr2
* Backport #45059: octopus: qa/workunits/rest/test-restful.sh fails
* Backport #45063: octopus: bluestore: unused calculation is broken
* Bug #45065: cephadm: Config option warn_on_stray_daemons does not work as expected
* Backport #45069: octopus: Trying to enable the CEPH Telegraf module errors 'No such file or directory'
* Bug #45081: cephadm: `upgrade check 15.2.1`: OrchestratorError: Failed to pull 15.2.1 on ceph0
* Backport #45083: octopus: mgr/dashboard: iSCSI CHAP max length validation
* Backport #45084: octopus: mgr/dashboard: Editing iSCSI target advanced setting causes a target recreation
* Bug #45108: test_orchestrator: service ls doesn't work
* Bug #45120: cephadm: adopt prometheus doesn't work
* Backport #45122: octopus: OSD might fail to recover after ENOSPC crash
* Backport #45127: octopus: Extent leak after main device expand
* Backport #45180: octopus: pybind/mgr/volumes: add command to return metadata regarding a subvolume
* Backport #45207: octopus: Object corpus generation is currently broken
* Backport #45211: octopus: enable 'big_writes' fuse option if ceph-fuse is linked to libfuse < 3.0
* Backport #45214: octopus: client: write stuck at waiting for larger max_size
* Backport #45215: octopus: radosgw can't bind to reserved port (443)
* Backport #45216: octopus: qa: after the cephfs qa test case quit the mountpoints still exist
* Backport #45219: octopus: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
* Backport #45220: octopus: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" == g_conf()->mds_verify_scatter)
* Backport #45222: octopus: cephfs-journal-tool: cannot set --dry_run arg
* Backport #45223: octopus: [rbd-mirror snapshot] clean-up unnecessary non-primary snapshots
* Backport #45226: octopus: mon/MDSMonitor: "ceph fs authorize cephfs client.test /test rw" does not give the necessary right anymore
* Backport #45227: octopus: qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_without_target_directory
* Backport #45230: octopus: ceph fs add_data_pool doesn't set pool metadata properly
* Backport #45233: octopus: Test failure: test_all (tasks.mgr.dashboard.test_rgw.RgwBucketTest)
* Backport #45259: octopus: rgw: dynamic resharding may process a manually resharded bucket with lower count
* Backport #45262: octopus: mgr/dashboard: In Object Gateway, Users and Buckets auto refresh or reload is not working
* Backport #45272: octopus: mgr/dashboard: ceph-api-nightly-master-backend and ceph-api-nightly-octopus-backend RuntimeError "test_purge_trash (tasks.mgr.dashboard.test_rbd.RbdTest)"
* Backport #45274: octopus: mgr/dashboard: ceph-api-nightly-master-backend error "test_perf_counters_mgr_get (tasks.mgr.dashboard.test_perf_counters.PerfCountersControllerTest)"
* Backport #45277: octopus: [rbd-mirror] image replayer stop might race with remove and instance replayer shut down
* Backport #45278: octopus: [rbd-mirror snapshot] optimize cloned image sync to non-primary site
* Backport #45280: octopus: mgr/dashboard: test failure "test_selftest_cluster_log (tasks.mgr.test_module_selftest.TestModuleSelftest)"
* Backport #45281: octopus: mgr/dashboard: Random failure in Pool unit test
* Feature #45290: mgr/volumes: Add command to clone a volume from another volume
* Backport #45314: octopus: scrub/osd-scrub-repair.sh: TEST_auto_repair_bluestore_failed failure
* Backport #45317: octopus: Add support for --bucket-id in radosgw-admin bucket stats command
* Backport #45328: octopus: monitoring: fix grafana percentage precision
* Backport #45348: octopus: BlueStore asserting on fs upgrade tests
* Backport #45360: octopus: rgw_bucket_parse_bucket_key function is holding old tenant value when this function is executed in a loop
* Documentation #45383: Cephadm.py OSD deployment fails: full device path or just the name?
* Backport #45389: octopus: qa: install task runs twice with double unwind causing fatal errors
* Backport #45392: octopus: follower monitors can grow beyond memory target
* Backport #45401: octopus: mon/OSDMonitor: maps not trimmed if osds are down
* Feature #45748: recommended max number of buckets....