Bug #44824 (closed): cephadm: adding osd device is not idempotent

Added by Patrick Donnelly about 4 years ago. Updated about 2 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: cephadm/osd
Target version: -
% Done: 0%
Source: Development
Tags: -
Backport: octopus
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

[ceph: root@li223-220 /]# ceph orch daemon add osd li221-238.members.linode.com:/dev/sdc
Created osd(s) 0 on host 'li221-238.members.linode.com'
[ceph: root@li223-220 /]# ceph orch daemon add osd li221-238.members.linode.com:/dev/sdc
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1153, in _handle_command
    return self.handle_command(inbuf, cmd)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 110, in handle_command
    return dispatch[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 308, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 72, in <lambda>
    wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 63, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/module.py", line 496, in _daemon_add_osd
    completion = self.create_osds(drive_group)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1510, in inner
    completion = self._oremote(method_name, args, kwargs)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1581, in _oremote
    return mgr.remote(o, meth, *args, **kwargs)
  File "/usr/share/ceph/mgr/mgr_module.py", line 1515, in remote
    args, kwargs)
RuntimeError: Remote method threw exception: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/cephadm/module.py", line 548, in wrapper
    return AsyncCompletion(value=f(*args, **kwargs), name=f.__name__)
  File "/usr/share/ceph/mgr/cephadm/module.py", line 2081, in create_osds
    ret_msg = self._create_osd(host, cmd)
  File "/usr/share/ceph/mgr/cephadm/module.py", line 2124, in _create_osd
    code, '\n'.join(err)))
RuntimeError: cephadm exited with an error code: 1, stderr:INFO:cephadm:/bin/podman:stderr WARNING: The same type, major and minor should not be used for multiple devices.
INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/ceph-authtool --gen-print-key
INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3634422a-5436-496c-92e6-df317f294530
INFO:cephadm:/bin/podman:stderr Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-3634422a-5436-496c-92e6-df317f294530 ceph-d7a33717-e89b-4c3a-bef8-fc53ec9099b2
INFO:cephadm:/bin/podman:stderr  stderr: Calculated size of logical volume is 0 extents. Needs to be larger.
INFO:cephadm:/bin/podman:stderr --> Was unable to complete a new OSD, will rollback changes
INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.1 --yes-i-really-mean-it
INFO:cephadm:/bin/podman:stderr  stderr: purged osd.1
INFO:cephadm:/bin/podman:stderr -->  RuntimeError: command returned non-zero exit status: 5
Traceback (most recent call last):
  File "<stdin>", line 4247, in <module>
  File "<stdin>", line 917, in _infer_fsid
  File "<stdin>", line 952, in _infer_image
  File "<stdin>", line 2663, in command_ceph_volume
  File "<stdin>", line 696, in call_throws
RuntimeError: Failed command: /bin/podman run --rm --net=host --privileged --group-add=disk -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=li221-238.members.linode.com -v /var/run/ceph/0b8a808c-72e0-11ea-83da-f23c926056ef:/var/run/ceph:z -v /var/log/ceph/0b8a808c-72e0-11ea-83da-f23c926056ef:/var/log/ceph:z -v /var/lib/ceph/0b8a808c-72e0-11ea-83da-f23c926056ef/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /tmp/ceph-tmp49clmnhu:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpp03x6m12:/var/lib/ceph/bootstrap-osd/ceph.keyring:z --entrypoint /usr/sbin/ceph-volume docker.io/ceph/ceph:v15 lvm prepare --bluestore --data /dev/sdc --no-systemd
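The second invocation fails because cephadm runs ceph-volume lvm prepare again on a device whose volume group is already fully consumed by the first OSD, so lvcreate has 0 extents left and the command aborts with a traceback instead of a clean no-op. The sketch below illustrates the kind of guard an idempotent path needs before calling prepare. It is only an illustration, not cephadm's actual code; the helper names are made up, and the JSON shape returned by ceph-volume lvm list is an assumption.

import json
import subprocess


def device_already_prepared(device: str) -> bool:
    # Ask ceph-volume which LVs it already manages; if the target device shows
    # up in an existing OSD's device list, a previous prepare already succeeded.
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(out)  # assumed shape: {"<osd id>": [{"devices": [...], ...}, ...]}
    return any(
        device in lv.get("devices", [])
        for lvs in report.values()
        for lv in lvs
    )


def create_osd(device: str) -> str:
    # Skip prepare entirely when the device already backs an OSD, so a repeated
    # 'daemon add osd' call reports a no-op instead of raising a traceback.
    if device_already_prepared(device):
        return "Created no osd(s); already created?"
    subprocess.run(
        ["ceph-volume", "lvm", "prepare", "--bluestore", "--data", device, "--no-systemd"],
        check=True,
    )
    return "Created osd(s) on %s" % device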

Related issues 4 (0 open, 4 closed)

Related to Orchestrator - Bug #44313: ceph-volume prepare is not idempotent and may get called twice (Resolved)
Related to Orchestrator - Bug #44825: cephadm: bootstrap is not idempotent (Rejected)
Related to Orchestrator - Bug #45327: cephadm: Orch daemon add is not idempotent (Resolved, Juan Miguel Olmo Martínez)
Related to Orchestrator - Bug #44270: Under certain circumstances, "ceph orch apply" returns success even when no OSDs are created (Can't reproduce)

#1 - Updated by Sebastian Wagner about 4 years ago

  • Related to Bug #44313: ceph-volume prepare is not idempotent and may get called twice added

#3 - Updated by Sebastian Wagner about 4 years ago

  • Related to Bug #44825: cephadm: bootstrap is not idempotent added

#4 - Updated by Sebastian Wagner about 4 years ago

  • Related to Bug #45327: cephadm: Orch daemon add is not idempotent added

#5 - Updated by Sebastian Wagner about 4 years ago

  • Related to Bug #44270: Under certain circumstances, "ceph orch apply" returns success even when no OSDs are created added

#6 - Updated by Sebastian Wagner almost 3 years ago

  • Category set to cephadm/osd

#7 - Updated by Sebastian Wagner over 2 years ago

  • Priority changed from High to Normal

#8 - Updated by Redouane Kachach Elhichou about 2 years ago

  • Status changed from New to Resolved
  • Pull request ID set to 33755

This issue seems to be fixed on upstream master, probably by the following PR: https://github.com/ceph/ceph/pull/33755.

Now, if a user tries to add the same OSD twice, they get the following message:

[ceph: root@ceph-node-0 /]# ceph orch daemon add osd ceph-node-0:/dev/vdb
Created no osd(s) on host ceph-node-0; already created?

The exit code is 0 (which is good):

[ceph: root@ceph-node-0 /]# echo $?
0
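
A quick end-to-end check of that behaviour can be scripted as below. This is only a sketch: it assumes a running cluster, the ceph CLI on the PATH, and a spare /dev/vdb on ceph-node-0 as in the session above.

import subprocess


def add_osd(target: str) -> subprocess.CompletedProcess:
    # Same command as in the session above: ceph orch daemon add osd <host>:<device>
    return subprocess.run(
        ["ceph", "orch", "daemon", "add", "osd", target],
        capture_output=True, text=True,
    )


first = add_osd("ceph-node-0:/dev/vdb")
second = add_osd("ceph-node-0:/dev/vdb")
assert first.returncode == 0
assert second.returncode == 0              # the repeated call must not fail
assert "already created" in second.stdout  # and should report the no-op instead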