# v0.94.4

* Backport #11629: ceph.spec.in: SUSE/openSUSE builds need libbz2-devel
* Backport #11735: RGW init script needs to check /var/lib/ceph/radosgw
* Backport #11824: implicit erasure code crush ruleset is not validated
* Backport #11872: RGW does not send Date HTTP header when civetweb frontend is used
* Backport #11910: mon: "pg ls" is broken
* Backport #11983: "FAILED assert(!old_value.deleted())" in upgrade:giant-x-hammer-distro-basic-multi run
* Backport #11997: ceph.spec.in: rpm: not possible to turn off Java
* Backport #11999: cephfs Dumper tries to load whole journal into memory at once
* Backport #12098: kernel_untar_build fails on EL7
* Backport #12099: rgw: rados objects wrongly deleted
* Backport #12199: RadosGW regression: COPY doesn't preserve Content-Type
* Backport #12234: segfault: test_rbd.TestClone.test_unprotect_with_children
* Backport #12235: librbd: crash when two clients try to write to an exclusive locked image
* Backport #12236: Possible crash while concurrently writing and shrinking an image
* Backport #12241: [ FAILED ] TestObjectMap.InvalidateFlagInMemoryOnly
* Backport #12245: rgw: empty json response when getting user quota
* Backport #12246: RGW Swift API: responses for several request types don't contain mandatory Content-Type header
* Backport #12267: ceph.spec.in: 50-rbd.rules conditional is wrong
* Backport #12269: ceph.spec.in: ceph-common needs python-argparse on older distros, but doesn't require it
* Backport #12293: ceph.spec.in: rpm: cephfs_java not fully conditionalized
* Backport #12303: arm: all programs that link to librados2 hang forever on startup
* Backport #12305: ceph.spec.in running fdupes unnecessarily
* Backport #12311: read on chunk-aligned xattr not handled
* Backport #12331: ceph: cli throws exception on unrecognized errno
* Backport #12335: ceph: Method or utility to report OSDs in a particular bucket
* Backport #12345: librbd: correct issues discovered via lockdep / helgrind
* Backport #12361: ceph.spec.in: snappy-devel for all supported distros
* Backport #12390: PGLog::proc_replica_log: correctly handle case where entries between olog.head and log.tail were split out
* Backport #12394: Memory leak in Mutex.cc, pthread_mutexattr_init without pthread_mutexattr_destroy
* Backport #12396: register_new_pgs() should check ruleno instead of its index
* Backport #12433: Show osd as NONE in "ceph osd map " output
* Backport #12446: ceph.spec.in: radosgw requires apache for SUSE only -- makes no sense
* Backport #12448: ceph.spec.in: useless %py_requires breaks SLE11-SP3 build
* Backport #12486: mon: leaked Messenger, MLog on shutdown
* Backport #12487: ceph osd crush reweight-subtree does not reweight parent node
* Backport #12489: pg_interval_t::check_new_interval - for ec pool, should not rely on min_size to determine if the PG was active at the interval
* Backport #12491: buffer: critical bufferlist::zero bug
* Backport #12493: the output is wrong when running "ceph osd reweight"
* Backport #12494: ceph tell: broken error message / misleading hinting
* Backport #12496: pgmonitor: wrong "at/near target max" reporting
* Backport #12498: getting pools' health info returns an error
* Backport #12499: ceph-fuse 0.94.2-1trusty segfaults / aborts
* Backport #12500: segfault launching ceph-fuse with bad --name
* Backport #12501: error in ext_mime_map_init() when /etc/mime.types is missing
* Backport #12504: rest-bench common/WorkQueue.cc: 54: FAILED assert(_threads.empty())
* Backport #12511: ceph-dencoder links to libtcmalloc, and shouldn't
* Backport #12530: OSDMonitor::preprocess_get_osdmap: must send the last map as well
* Backport #12571: "FAILED assert(!log.null() || olog.tail == eversion_t())"
* Backport #12583: Inconsistent PGs that ceph pg repair does not fix
* Backport #12585: OSD crash creating/deleting pools
* Backport #12588: Change radosgw pools default crush ruleset
* Backport #12589: ceph-disk zap should ensure block device
* Backport #12591: rgw: create a tool for orphaned objects cleanup
* Backport #12592: RGW returns requested bucket name raw in "Bucket" response header
* Backport #12593: HTTP return code is not being logged by CivetWeb
* Backport #12597: Crash during shutdown after writeback blocked by IO errors
* Backport #12632: rgw: shouldn't return content-type: application/xml if content length is 0
* Backport #12634: swift smoke test fails on TestAccountUTF8
* Backport #12682: object_map_update fails with -EINVAL return code
* Tasks #12701: hammer v0.94.4
* Backport #12751: kvm dies with assert(m_seed < old_pg_num)
* Backport #12813: is_new_interval: size change should cause new interval
* Backport #12817: build_incremental() could take 40678 ms to finish
* Backport #12836: WBThrottle::clear_object: signal on cond when we reduce throttle values
* Backport #12839: Mutex Assert from PipeConnection::try_get_pipe
* Backport #12841: recursive lock of md_config_t (0)
* Backport #12843: long-standing slow requests: connection->session->waiting_for_map->connection ref cycle
* Backport #12844: osd suicide timeout during peering - search for missing objects
* Backport #12846: osd/PGLog.cc: 732: FAILED assert(log.log.size() == log_keys_debug.size())
* Backport #12847: common: do not insert empty ptr when rebuilding empty bufferlist
* Backport #12849: [ FAILED ] TestLibRBD.BlockingAIO
* Backport #12850: Crash during TestInternal.MultipleResize
* Backport #12851: Ensure that swift keys don't include backslashes
* Backport #12852: test_s3.test_object_copy_canned_acl ... FAIL
* Backport #12853: RGW Swift API: X-Trans-Id header is wrongly formatted
* Backport #12854: the argument 'domain' should not be assigned when returning false
* Backport #12855: segmentation fault when rgw_gc_max_objs > HASH_PRIME
* Backport #12859: testGetContentType and testHead failed
* Backport #12880: COPYing an old object onto itself produces a truncated object
* Backport #12883: cache agent is idle although one object is left in the cache
* Backport #12894: Have a configurable number of RADOS handles in RGW
* Backport #12918: RGW Swift API: response for GET on Swift account doesn't contain mandatory Content-Length header
* Bug #12979: Ceph lost its repair ability after repeatedly flapping
* Backport #13014: Rados Swift API handles prefix differently than OpenStack Swift
* Backport #13019: rgw: intra-region copy does not preserve acl
* Backport #13034: osd: copy-from doesn't preserve truncate_{seq,size}
* Backport #13044: LibCephFS.GetPoolId failure
* Backport #13046: RGW: setting max number of buckets for user via ceph.conf option
* Backport #13052: rgw: init_rados failure leads to repeated delete
* Backport #13053: GWWatcher::handle_error -> common/Mutex.cc: 95: FAILED assert(r == 0)
* Backport #13054: rgw: region data still exist in region-map after region-map update
* Backport #13060: osd: hammer: fail to start due to stray pgs after firefly->hammer upgrade
* Backport #13070: ceph-object-corpus: add 0.94.2-207-g88e7ee7 hammer objects
* Bug #13078: ceph-monstore-tool still creates a local file when getting the map fails
* Backport #13091: upstart: configuration is too generous on restarts
* Backport #13094: Pipe: Drop connect_seq increase line
* Backport #13170: update docs to point to download.ceph.com
* Backport #13224: rgw: fails to parse HTTP_HOST=: header
* Backport #13225: init script doesn't start daemon - errors silently
* Backport #13226: Keystone Fernet tokens break auth
* Backport #13227: With root as default user, unable to have multiple RGW instances running
* Backport #13228: rgw: segments are read during HEAD on Swift DLO
* Backport #13343: /etc/init.d/radosgw restart does not work correctly
* Backport #13347: osd/ReplicatedPG.cc: 7247: FAILED assert(agent_state)
* Backport #13354: tests: do not assume crash_replay_interval 45
* Backport #13401: mon: fix crush testing for new pools
* Backport #13407: tests: qa/workunits/cephtool/test.sh: don't assume crash_replay_interval=45
* Backport #13410: TEST_crush_rule_create_erasure consistently fails on i386 builder
* Backport #13537: Seg fault 9.0.3-1845-gf1ead76: RGWRESTSimpleRequest::forward_request(RGWAccessKey&, req_info&, unsigned long, ceph::buffer::list*, ceph::buffer::list*)+0x74)
* Bug #13596: radosgw-admin fails user listing when the user pool is empty