• ceph osd reports OSD::mkfs: ObjectStore::mkfs failed with error (5) Input/output error when initializing a disk


    [ceph03][WARNIN]  stderr: 2022-06-30T11:00:41.980+0800 7f12726d85c0 -1 bluefs _replay 0x0: stop: uuid 00000000-0000-0000-0000-000000000000 != super.uuid 2cdaca21-1355-403a-9303-a5ef85630ac3, block dump:
    [ceph03][WARNIN]  stderr: 00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
    [ceph03][WARNIN]  stderr: *
    [ceph03][WARNIN]  stderr: 00000ff0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
    [ceph03][WARNIN]  stderr: 00001000
    [ceph03][WARNIN]  stderr: 2022-06-30T11:01:02.488+0800 7f12726d85c0 -1 rocksdb: verify_sharding unable to list column families: NotFound:
    [ceph03][WARNIN]  stderr: 2022-06-30T11:01:02.488+0800 7f12726d85c0 -1 bluestore(/var/lib/ceph/osd/ceph-8/) _open_db erroring opening db:
    [ceph03][WARNIN]  stderr: 2022-06-30T11:01:02.920+0800 7f12726d85c0 -1 OSD::mkfs: ObjectStore::mkfs failed with error (5) Input/output error
    [ceph03][WARNIN]  stderr: 2022-06-30T11:01:02.920+0800 7f12726d85c0 -1  ** ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-8/: (5) Input/output error
    [ceph03][WARNIN] --> Was unable to complete a new OSD, will rollback changes
    [ceph03][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.8 --yes-i-really-mean-it
    [ceph03][WARNIN]  stderr: 2022-06-30T11:01:03.000+0800 7f322bea8640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
    [ceph03][WARNIN]  stderr: 
    [ceph03][WARNIN]  stderr: 2022-06-30T11:01:03.000+0800 7f322bea8640 -1 AuthRegistry(0x7f322405f170) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
    [ceph03][WARNIN]  stderr: 
    [ceph03][WARNIN]  stderr: purged osd.8
    [ceph03][WARNIN] --> Zapping: /dev/ceph-a070e2ed-bcd9-4713-9c2d-efe95cdb3c6e/osd-block-1ded9e00-9501-4696-b3dc-6d627c05279a
    [ceph03][WARNIN] --> Unmounting /var/lib/ceph/osd/ceph-8
    [ceph03][WARNIN] Running command: /usr/bin/umount -v /var/lib/ceph/osd/ceph-8
    [ceph03][WARNIN]  stderr: umount: /var/lib/ceph/osd/ceph-8 unmounted
    [ceph03][WARNIN] Running command: /usr/bin/dd if=/dev/zero of=/dev/ceph-a070e2ed-bcd9-4713-9c2d-efe95cdb3c6e/osd-block-1ded9e00-9501-4696-b3dc-6d627c05279a bs=1M count=10 conv=fsync
    [ceph03][WARNIN]  stderr: 10+0 records in
    [ceph03][WARNIN] 10+0 records out
    [ceph03][WARNIN] 10485760 bytes (10 MB, 10 MiB) copied, 0.125339 s, 83.7 MB/s
    [ceph03][WARNIN] --> Only 1 LV left in VG, will proceed to destroy volume group ceph-a070e2ed-bcd9-4713-9c2d-efe95cdb3c6e
    [ceph03][WARNIN] Running command: vgremove -v -f ceph-a070e2ed-bcd9-4713-9c2d-efe95cdb3c6e
    [ceph03][WARNIN]  stderr: Removing ceph--a070e2ed--bcd9--4713--9c2d--efe95cdb3c6e-osd--block--1ded9e00--9501--4696--b3dc--6d627c05279a (253:2)
    [ceph03][WARNIN]  stderr: Archiving volume group "ceph-a070e2ed-bcd9-4713-9c2d-efe95cdb3c6e" metadata (seqno 5).
    [ceph03][WARNIN]   Releasing logical volume "osd-block-1ded9e00-9501-4696-b3dc-6d627c05279a"
    [ceph03][WARNIN]  stderr: Creating volume group backup "/etc/lvm/backup/ceph-a070e2ed-bcd9-4713-9c2d-efe95cdb3c6e" (seqno 6).
    [ceph03][WARNIN]  stdout: Logical volume "osd-block-1ded9e00-9501-4696-b3dc-6d627c05279a" successfully removed
    [ceph03][WARNIN]  stderr: Removing physical volume "/dev/sdd" from volume group "ceph-a070e2ed-bcd9-4713-9c2d-efe95cdb3c6e"
    [ceph03][WARNIN]  stdout: Volume group "ceph-a070e2ed-bcd9-4713-9c2d-efe95cdb3c6e" successfully removed
    [ceph03][WARNIN] --> Zapping successful for OSD: 8
    [ceph03][WARNIN] -->  RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 8 --monmap /var/lib/ceph/osd/ceph-8/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-8/ --osd-uuid 1ded9e00-9501-4696-b3dc-6d627c05279a --setuser ceph --setgroup ceph
    [ceph03][ERROR ] RuntimeError: command returned non-zero exit status: 1
    [ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdd
    [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
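
    The key line above is the bluefs replay failure: the uuid read back from the block device is all zeros and does not match super.uuid, which points at stale data left on the device from an earlier attempt. As a rough diagnostic sketch (these commands are my suggestion, not part of the original post), you can check what is still sitting on the disk on the OSD host before wiping it:

    # list any filesystem/LVM/partition-table signatures still present on the disk
    wipefs /dev/sdd
    # show partitions and any LVM volumes layered on top of the device
    lsblk /dev/sdd
    # if an old BlueStore label survives, it can be printed directly
    ceph-bluestore-tool show-label --dev /dev/sdd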
    

    Most likely the disk had been partitioned and used before, and the leftover data prevents a fresh object store from being written. On the host that owns the disk, run:

    # wipe the leftover GPT/MBR partition structures on the disk
    sgdisk -Z /dev/sdd
    # then re-create the OSD
    ceph-deploy osd create ceph03 --data /dev/sdd
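
    If the disk also still carries LVM metadata from an earlier ceph-volume run, sgdisk -Z alone may not be enough. A hedged alternative using standard ceph-volume/LVM tooling (not taken from the original post) is to zap the device on the OSD host and then verify the new OSD after re-creating it:

    # on ceph03: tear down any leftover ceph LVM volumes and wipe the device headers
    ceph-volume lvm zap --destroy /dev/sdd
    # from the deploy node: re-create the OSD, then confirm it joined the cluster
    ceph-deploy osd create ceph03 --data /dev/sdd
    ceph osd tree
    ceph -s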
    
  • Original article: https://www.cnblogs.com/hhsh/p/16426166.html