• Ceph error logs


    Because the nodes' clocks were out of sync (clock skew), the Ceph storage cluster ran into problems; the logs below trace the fallout, from monitor clock-skew warnings to MDS/mgr failover and OSDs being marked down.

    clock skew
    a time offset between clocks (here: between the monitors' clocks)
    overall
    adj. total; entire; all-inclusive
    stamped
    adj. marked with a (time)stamp; postmarked

    beacon
    vt. to light up, to guide (here: the periodic report an OSD or MDS sends to the monitors)

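    A quick way to confirm the skew is to ask the monitors themselves and then check the time daemon on each node. The sketch below is hedged: it assumes the nodes run chrony (Proxmox/Debian hosts may instead use ntpd or systemd-timesyncd), so substitute whichever time service is actually installed.

    # Ask the mon quorum how far each monitor deviates from the leader
    # (mon.cu-pve04 in the logs below).
    ceph time-sync-status

    # Health summary; skew beyond mon_clock_drift_allowed (0.05 s by default)
    # surfaces as the "clock skew detected" warning seen below.
    ceph health detail

    # On each node (cu-pve04/05/06): is the local clock actually synchronized?
    chronyc tracking        # offset and sync source, if chrony is in use
    timedatectl status      # generic check that NTP synchronization is active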

    2019-04-29 17:00:00.000223 mon.cu-pve04 mon.0 192.168.7.204:6789/0 1959 : cluster [WRN] overall HEALTH_WARN clock skew detected on mon.cu-pve05, mon.cu-pve06

    2019-04-29 17:00:11.495180 mon.cu-pve04 mon.0 192.168.7.204:6789/0 1960 : cluster [WRN] mon.1 192.168.7.205:6789/0 clock skew 1.30379s > max 0.05s
    2019-04-29 17:00:11.495343 mon.cu-pve04 mon.0 192.168.7.204:6789/0 1961 : cluster [WRN] mon.2 192.168.7.206:6789/0 clock skew 0.681995s > max 0.05s
    2019-04-29 17:14:41.500133 mon.cu-pve04 mon.0 192.168.7.204:6789/0 2106 : cluster [WRN] mon.1 192.168.7.205:6789/0 clock skew 1.73357s > max 0.05s
    2019-04-29 17:14:41.500307 mon.cu-pve04 mon.0 192.168.7.204:6789/0 2107 : cluster [WRN] mon.2 192.168.7.206:6789/0 clock skew 0.671272s > max 0.05s

    2019-04-29 17:35:33.320667 mon.cu-pve04 mon.0 192.168.7.204:6789/0 2342 : cluster [WRN] message from mon.1 was stamped 2.355514s in the future, clocks not synchronized
    2019-04-29 17:39:59.322154 mon.cu-pve04 mon.0 192.168.7.204:6789/0 2397 : cluster [DBG] osdmap e191: 24 total, 24 up, 24 in
    2019-04-29 18:32:24.854130 mon.cu-pve04 mon.0 192.168.7.204:6789/0 3026 : cluster [DBG] osdmap e194: 24 total, 24 up, 24 in
    2019-04-29 19:00:00.000221 mon.cu-pve04 mon.0 192.168.7.204:6789/0 3324 : cluster [WRN] overall HEALTH_WARN clock skew detected on mon.cu-pve05, mon.cu-pve06

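    The "max 0.05s" above is mon_clock_drift_allowed (default 0.05 s); mon.cu-pve05 and mon.cu-pve06 are more than a second adrift from the leader mon.cu-pve04. The fix is to repair time synchronization on those two nodes -- the HEALTH_WARN clears by itself once the skew falls back under the threshold. A hedged sketch, again assuming chrony; raising the threshold is only a stopgap, not a fix, and on some versions the injected value only takes effect after a monitor restart:

    # On the skewed monitor nodes (cu-pve05, cu-pve06):
    systemctl restart chrony     # the unit may be named "chronyd" on other distros
    chronyc makestep             # step the clock immediately instead of slewing

    # Optional stopgap while NTP settles: let the monitors tolerate more drift.
    ceph tell mon.* injectargs '--mon_clock_drift_allowed 0.1'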

    2019-04-29 17:01:31.898307 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 649 : cluster [DBG] pgmap v676: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail
    2019-04-29 17:01:33.927961 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 650 : cluster [DBG] pgmap v677: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 1.97KiB/s wr, 0op/s
    2019-04-29 17:01:35.956276 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 651 : cluster [DBG] pgmap v678: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 588B/s rd, 2.71KiB/s wr, 1op/s
    2019-04-29 17:01:37.981052 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 652 : cluster [DBG] pgmap v679: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 588B/s rd, 2.71KiB/s wr, 1op/s
    2019-04-29 17:01:40.014386 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 653 : cluster [DBG] pgmap v680: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 589B/s rd, 4.03KiB/s wr, 1op/s
    2019-04-29 17:01:42.042173 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 654 : cluster [DBG] pgmap v681: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 588B/s rd, 4.02KiB/s wr, 1op/s
    2019-04-29 17:01:44.072142 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 655 : cluster [DBG] pgmap v682: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 588B/s rd, 5.01KiB/s wr, 1op/s
    2019-04-29 17:01:46.100477 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 656 : cluster [DBG] pgmap v683: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 1.89KiB/s rd, 3.20KiB/s wr, 1op/s
    2019-04-29 17:01:48.129701 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 657 : cluster [DBG] pgmap v684: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 1.31KiB/s rd, 2.46KiB/s wr, 0op/s
    2019-04-29 17:01:50.161716 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 658 : cluster [DBG] pgmap v685: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 1.31KiB/s rd, 2.46KiB/s wr, 0op/s
    2019-04-29 17:01:52.190373 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 659 : cluster [DBG] pgmap v686: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 1.31KiB/s rd, 1.15KiB/s wr, 0op/s
    2019-04-29 17:01:54.220284 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 660 : cluster [DBG] pgmap v687: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 1.31KiB/s rd, 1.15KiB/s wr, 0op/s
    2019-04-29 17:01:56.248956 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 661 : cluster [DBG] pgmap v688: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail; 1.31KiB/s rd, 168B/s wr, 0op/s
    2019-04-29 17:01:58.273446 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 662 : cluster [DBG] pgmap v689: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail
    2019-04-29 17:02:00.305394 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 663 : cluster [DBG] pgmap v690: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail
    2019-04-29 17:02:02.334375 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 664 : cluster [DBG] pgmap v691: 1152 pgs: 1152 active+clean; 32.1GiB data, 121GiB used, 52.3TiB / 52.4TiB avail

    2019-04-30 00:22:14.177176 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 13697 : cluster [DBG] pgmap v13716: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 00:22:16.203475 mgr.cu-pve05 client.64099 192.168.7.205:0/2045992877 13698 : cluster [DBG] pgmap v13717: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 00:22:28.348815 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6578 : cluster [WRN] daemon mds.cu-pve04 is not responding, replacing it as rank 0 with standby daemon mds.cu-pve06
    2019-04-30 00:22:28.349010 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6579 : cluster [INF] Standby daemon mds.cu-pve05 is not responding, dropping it
    2019-04-30 00:22:28.353359 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6580 : cluster [WRN] Health check failed: 1 filesystem is degraded (FS_DEGRADED)
    2019-04-30 00:22:28.353476 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6581 : cluster [WRN] Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)
    2019-04-30 00:22:28.364180 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6582 : cluster [DBG] osdmap e195: 24 total, 24 up, 24 in
    2019-04-30 00:22:28.374585 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6583 : cluster [DBG] fsmap cephfs-1/1/1 up {0=cu-pve06=up:replay}
    2019-04-30 00:22:29.413750 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6584 : cluster [INF] Health check cleared: MDS_INSUFFICIENT_STANDBY (was: insufficient standby MDS daemons available)
    2019-04-30 00:22:29.425556 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6585 : cluster [DBG] mds.0 192.168.7.206:6800/3970858648 up:reconnect
    2019-04-30 00:22:29.425710 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6586 : cluster [DBG] mds.? 192.168.7.204:6800/2960873692 up:boot
    2019-04-30 00:22:29.425883 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6587 : cluster [DBG] fsmap cephfs-1/1/1 up {0=cu-pve06=up:reconnect}, 1 up:standby
    2019-04-30 00:22:30.435723 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6588 : cluster [DBG] mds.0 192.168.7.206:6800/3970858648 up:rejoin
    2019-04-30 00:22:30.435868 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6589 : cluster [DBG] fsmap cephfs-1/1/1 up {0=cu-pve06=up:rejoin}, 1 up:standby
    2019-04-30 00:22:30.449165 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6590 : cluster [INF] daemon mds.cu-pve06 is now active in filesystem cephfs as rank 0
    2019-04-30 00:22:30.015869 mds.cu-pve06 mds.0 192.168.7.206:6800/3970858648 1 : cluster [DBG] reconnect by client.54450 192.168.7.205:0/1578906464 after 0
    2019-04-30 00:22:30.019932 mds.cu-pve06 mds.0 192.168.7.206:6800/3970858648 2 : cluster [DBG] reconnect by client.64366 192.168.7.206:0/2722278656 after 0.00400001
    2019-04-30 00:22:30.054313 mds.cu-pve06 mds.0 192.168.7.206:6800/3970858648 3 : cluster [DBG] reconnect by client.54120 192.168.7.204:0/254060409 after 0.0400001
    2019-04-30 00:22:31.434592 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6591 : cluster [INF] Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
    2019-04-30 00:22:31.446526 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6592 : cluster [DBG] mds.0 192.168.7.206:6800/3970858648 up:active
    2019-04-30 00:22:31.446675 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6593 : cluster [DBG] fsmap cephfs-1/1/1 up {0=cu-pve06=up:active}, 1 up:standby
    2019-04-30 00:22:43.355044 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6595 : cluster [INF] Manager daemon cu-pve05 is unresponsive. No standby daemons available.
    2019-04-30 00:22:43.355235 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6596 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)
    2019-04-30 00:22:43.367182 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6597 : cluster [DBG] mgrmap e18: no daemons active
    2019-04-30 00:22:53.658070 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6601 : cluster [INF] Activating manager daemon cu-pve05
    2019-04-30 00:22:53.898363 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6602 : cluster [INF] Health check cleared: MGR_DOWN (was: no active mgr)
    2019-04-30 00:22:53.917204 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6603 : cluster [DBG] mgrmap e19: cu-pve05(active, starting)
    2019-04-30 00:22:53.979682 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6608 : cluster [INF] Manager daemon cu-pve05 is now available
    2019-04-30 00:22:54.928868 mon.cu-pve04 mon.0 192.168.7.204:6789/0 6609 : cluster [DBG] mgrmap e20: cu-pve05(active)
    2019-04-30 00:22:59.965578 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 1 : cluster [DBG] pgmap v2: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 00:23:00.677664 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 2 : cluster [DBG] pgmap v3: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 00:23:02.700917 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 3 : cluster [DBG] pgmap v4: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 00:23:04.707492 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 4 : cluster [DBG] pgmap v5: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 00:23:06.740218 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 5 : cluster [DBG] pgmap v6: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 00:23:08.746633 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 6 : cluster [DBG] pgmap v7: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 00:23:10.780395 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 7 : cluster [DBG] pgmap v8: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail

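    What happened at 00:22: the active MDS (mds.cu-pve04) and the standby mds.cu-pve05 stopped responding to the monitor, so the remaining standby mds.cu-pve06 was promoted to rank 0 and went through the normal takeover states (replay -> reconnect -> rejoin -> active) within a couple of seconds; the mgr on cu-pve05 was likewise declared unresponsive and then reactivated. Given the clock-skew warnings earlier, "not responding" here is more likely a timing problem on the nodes than daemons actually dying. To see the current filesystem and manager state, a short sketch:

    # Which MDS holds rank 0 and which are standby.
    ceph fs status
    ceph mds stat      # compact form, e.g. "cephfs-1/1/1 up {0=cu-pve06=up:active}, 1 up:standby"

    # Overall cluster state, including the active mgr.
    ceph -s
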
    2019-04-30 00:32:18.562962 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 278 : cluster [DBG] pgmap v279: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 00:32:18.465670 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7327 : cluster [INF] osd.16 marked down after no beacon for 901.455814 seconds
    2019-04-30 00:32:18.468437 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7328 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
    2019-04-30 00:32:18.483797 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7329 : cluster [DBG] osdmap e196: 24 total, 23 up, 24 in
    2019-04-30 00:32:19.495106 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7331 : cluster [DBG] osdmap e197: 24 total, 23 up, 24 in
    2019-04-30 00:32:21.501683 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7334 : cluster [WRN] Health check failed: Reduced data availability: 3 pgs inactive, 47 pgs peering (PG_AVAILABILITY)
    2019-04-30 00:32:21.501774 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7335 : cluster [WRN] Health check failed: Degraded data redundancy: 794/38643 objects degraded (2.055%), 50 pgs degraded (PG_DEGRADED)
    2019-04-30 00:32:20.596358 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 279 : cluster [DBG] pgmap v280: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 00:32:22.603039 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 280 : cluster [DBG] pgmap v281: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 00:32:24.628896 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 281 : cluster [DBG] pgmap v283: 1152 pgs: 41 stale+active+clean, 1111 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 00:32:26.642893 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 282 : cluster [DBG] pgmap v285: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
    2019-04-30 00:32:28.669528 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 283 : cluster [DBG] pgmap v286: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
    2019-04-30 00:32:30.683129 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 284 : cluster [DBG] pgmap v287: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
    2019-04-30 00:32:32.709629 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 285 : cluster [DBG] pgmap v288: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
    2019-04-30 00:32:34.717180 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 286 : cluster [DBG] pgmap v289: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
    2019-04-30 00:32:36.748749 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 287 : cluster [DBG] pgmap v290: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
    2019-04-30 00:32:38.756345 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 288 : cluster [DBG] pgmap v291: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
    2019-04-30 00:32:40.789378 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 289 : cluster [DBG] pgmap v292: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
    2019-04-30 00:32:42.796488 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 290 : cluster [DBG] pgmap v293: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
    2019-04-30 00:32:44.821576 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 291 : cluster [DBG] pgmap v294: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
    2019-04-30 00:32:46.835641 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 292 : cluster [DBG] pgmap v295: 1152 pgs: 25 active+undersized, 1030 active+clean, 47 peering, 50 active+undersized+degraded; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail; 794/38643 objects degraded (2.055%)
    2019-04-30 00:32:48.475079 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7371 : cluster [INF] osd.17 marked down after no beacon for 903.631937 seconds
    2019-04-30 00:32:48.475189 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7372 : cluster [INF] osd.20 marked down after no beacon for 901.611316 seconds
    2019-04-30 00:32:48.483726 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7373 : cluster [WRN] Health check update: 3 osds down (OSD_DOWN)
    2019-04-30 00:32:48.500282 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7374 : cluster [DBG] osdmap e198: 24 total, 21 up, 24 in
    2019-04-30 00:32:49.510909 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7376 : cluster [DBG] osdmap e199: 24 total, 21 up, 24 in

    2019-04-30 00:35:58.536182 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7645 : cluster [INF] osd.7 marked down after no beacon for 902.595536 seconds
    2019-04-30 00:35:58.538784 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7646 : cluster [WRN] Health check update: 5 osds down (OSD_DOWN)
    2019-04-30 00:35:58.554495 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7647 : cluster [DBG] osdmap e202: 24 total, 19 up, 24 in
    2019-04-30 00:35:59.565253 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7649 : cluster [DBG] osdmap e203: 24 total, 19 up, 24 in
    2019-04-30 00:36:01.657260 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7652 : cluster [WRN] Health check update: Reduced data availability: 202 pgs inactive, 206 pgs peering (PG_AVAILABILITY)
    2019-04-30 00:36:01.657353 mon.cu-pve04 mon.0 192.168.7.204:6789/0 7653 : cluster [WRN] Health check update: Degraded data redundancy: 4903/38643 objects degraded (12.688%), 247 pgs degraded, 285 pgs undersized (PG_DEGRADED)

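    Here the monitor starts marking OSDs down because it has not received a beacon from them for roughly 900 seconds -- that figure matches the mon_osd_report_timeout default of 900 s. The "still running / wrongly marked me down" messages near the end of this log show that, at least in the later wave, the OSD processes were alive the whole time. A sketch for checking the monitors' view and the daemons themselves (it assumes systemd-managed OSD units, as on Proxmox):

    # Monitors' view: how many OSDs are up/in, and which are down.
    ceph osd stat
    ceph osd tree | grep -w down

    # The timeout behind "no beacon for ~900 seconds"
    # (run on the monitor host, via its admin socket).
    ceph daemon mon.cu-pve04 config get mon_osd_report_timeout

    # On the affected node: are the OSD services actually running?
    systemctl status ceph-osd@16 ceph-osd@17 ceph-osd@20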

    --------------------------------------

    2019-04-30 05:31:46.580027 mon.cu-pve04 mon.0 192.168.7.204:6789/0 11871 : cluster [INF] Standby daemon mds.cu-pve05 is not responding, dropping it
    2019-04-30 05:31:46.591494 mon.cu-pve04 mon.0 192.168.7.204:6789/0 11872 : cluster [DBG] fsmap cephfs-1/1/1 up {0=cu-pve06=up:active}, 1 up:standby
    2019-04-30 05:31:50.842218 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 9143 : cluster [DBG] pgmap v9201: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 05:31:52.872419 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 9144 : cluster [DBG] pgmap v9202: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 05:31:54.899490 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 9145 : cluster [DBG] pgmap v9203: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 05:31:56.925830 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 9146 : cluster [DBG] pgmap v9204: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 05:31:58.957234 mgr.cu-pve05 client.84813 192.168.7.205:0/2429672320 9147 : cluster [DBG] pgmap v9205: 1152 pgs: 1152 active+clean; 50.1GiB data, 175GiB used, 52.2TiB / 52.4TiB avail
    2019-04-30 05:32:01.596600 mon.cu-pve04 mon.0 192.168.7.204:6789/0 11890 : cluster [DBG] mgrmap e22: cu-pve05(active)

    2019-04-30 05:43:16.717729 mon.cu-pve04 mon.0 192.168.7.204:6789/0 12763 : cluster [INF] osd.18 marked down after no beacon for 902.818940 seconds
    2019-04-30 05:43:16.717846 mon.cu-pve04 mon.0 192.168.7.204:6789/0 12764 : cluster [INF] osd.19 marked down after no beacon for 902.818731 seconds
    2019-04-30 05:43:16.717914 mon.cu-pve04 mon.0 192.168.7.204:6789/0 12765 : cluster [INF] osd.23 marked down after no beacon for 900.786850 seconds
    2019-04-30 05:43:16.726253 mon.cu-pve04 mon.0 192.168.7.204:6789/0 12766 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN)
    2019-04-30 05:43:16.742278 mon.cu-pve04 mon.0 192.168.7.204:6789/0 12767 : cluster [DBG] osdmap e253: 24 total, 21 up, 24 in
    2019-04-30 05:43:17.753181 mon.cu-pve04 mon.0 192.168.7.204:6789/0 12771 : cluster [DBG] osdmap e254: 24 total, 21 up, 24 in
    2019-04-30 05:43:19.209031 mon.cu-pve04 mon.0 192.168.7.204:6789/0 12774 : cluster [WRN] Health check failed: Reduced data availability: 51 pgs inactive, 293 pgs peering (PG_AVAILABILITY)

    -----------------------------------------
    2019-04-30 08:56:22.240506 mon.cu-pve04 mon.0 192.168.7.204:6789/0 19905 : cluster [DBG] Standby manager daemon cu-pve04 started
    2019-04-30 05:43:17.030698 osd.18 osd.18 192.168.7.204:6811/5641 3 : cluster [WRN] Monitor daemon marked osd.18 down, but it is still running
    2019-04-30 05:43:17.030714 osd.18 osd.18 192.168.7.204:6811/5641 4 : cluster [DBG] map e253 wrongly marked me down at e253
    2019-04-30 05:43:18.450669 osd.19 osd.19 192.168.7.204:6807/5309 3 : cluster [WRN] Monitor daemon marked osd.19 down, but it is still running
    2019-04-30 05:43:18.450689 osd.19 osd.19 192.168.7.204:6807/5309 4 : cluster [DBG] map e254 wrongly marked me down at e253
    2019-04-30 05:43:18.652645 osd.23 osd.23 192.168.7.204:6801/4516 3 : cluster [WRN] Monitor daemon marked osd.23 down, but it is still running

    2019-04-30 05:44:07.065692 osd.20 osd.20 192.168.7.204:6809/5441 4 : cluster [DBG] map e263 wrongly marked me down at e263
    2019-04-30 08:56:22.458718 mon.cu-pve04 mon.0 192.168.7.204:6789/0 19906 : cluster [INF] daemon mds.cu-pve05 restarted
    2019-04-30 08:56:26.088398 mon.cu-pve04 mon.0 192.168.7.204:6789/0 19910 : cluster [DBG] Standby manager daemon cu-pve06 started
    2019-04-30 08:56:26.495852 mon.cu-pve04 mon.0 192.168.7.204:6789/0 19911 : cluster [DBG] mgrmap e23: cu-pve05(active), standbys: cu-pve04

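    The OSD-side view confirms the diagnosis: the daemons log "Monitor daemon marked osd.N down, but it is still running" and "map eNNN wrongly marked me down", i.e. they never crashed -- their beacons simply did not reach the monitor in time, and a running OSD that sees itself wrongly marked down reports back to the monitor and is normally marked up again shortly afterwards. Combined with the clock-skew warnings at the top, this points at node-level time/connectivity trouble rather than failing disks. A quick way to verify an individual OSD after it rejoins (a sketch; osd.18 is just the example taken from the log above):

    # The cluster should converge back to 24 up / 24 in.
    ceph osd stat
    ceph health detail

    # Ask the daemon itself, via its admin socket on the node that hosts it.
    ceph daemon osd.18 status         # its state plus oldest/newest osdmap epochs and pg count
    journalctl -u ceph-osd@18 -n 200  # recent daemon log, including the "wrongly marked me down" lines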
  • Original source: https://www.cnblogs.com/createyuan/p/10815633.html