• ceph-pve English vocabulary


    adapted accordingly
    adjusted correspondingly

    silos
    n. silos; grain bins; storage towers (plural of silo)

    saturate
    vt. to soak, to drench; to saturate, to fill to capacity
    While one HDD might not saturate a 1 Gb link

    likelihood
    n. likelihood, probability

    aforementioned
    adj. mentioned above; previously mentioned

    fail-safe
    n. an automatic protection mechanism against failure

    colocated
    placed at the same location

    budgeted
    adj. allowed for in a budget

    devoted
    adj. dedicated; devoted

    In general, SSDs will provide more IOPS than spinning disks. This fact and the higher cost may make a class
    based (Section 4.2.9) separation of pools appealing. Another possibility to speed up OSDs is to use a faster
    disk as journal or DB/WAL device, see creating Ceph OSDs (Section 4.2.7). If a faster disk is used for multiple
    OSDs, a proper balance between OSD and WAL/DB (or journal) disk must be selected, otherwise the faster
    disk becomes the bottleneck for all linked OSDs.
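    The shared fast-device setup described above can be sketched with pveceph. The device paths here are hypothetical examples; adjust them to your hardware:

```shell
# Sketch (assumed device names): two HDD-backed OSDs sharing one NVMe as DB device.
# Each additional OSD claims another slice of the NVMe, so keep the OSD count
# balanced against the DB device size, or the fast disk becomes the bottleneck.
pveceph osd create /dev/sdb -db_dev /dev/nvme0n1
pveceph osd create /dev/sdc -db_dev /dev/nvme0n1
```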
    Aside from the disk type, Ceph performs best with an evenly sized and evenly distributed number of disks per node.
    For example, 4 x 500 GB disks in each node are better than a mixed setup with a single 1 TB and three
    250 GB disks.
    One also needs to balance OSD count and single-OSD capacity. More capacity allows higher storage
    density, but it also means that a single OSD failure forces Ceph to recover more data at once.

    OSDs can also be backed by a combination of devices, such as an HDD for most data and an SSD (or a partition of an SSD) for some metadata.

    BlueStore allows its internal journal (write-ahead log) to be written to a separate, high-speed device (such as an SSD, NVMe, or NVDIMM) to increase performance.

    However, the most common practice is to partition the journal drive (often an SSD),
    and mount it such that Ceph uses the entire partition for the journal.

    block.db should be sized as large as possible to avoid performance penalties.

    When using a mixed spinning and solid-state drive setup, it is important to make a large enough block.db logical volume for BlueStore.
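    A quick way to gauge "large enough": Ceph's BlueStore guidance commonly cites block.db at no less than roughly 4% of the data device. The figures below are example values for illustration, not a recommendation for specific hardware:

```shell
# Rough sizing sketch (example numbers): block.db >= ~4% of the block device
data_gb=4000                    # size of the spinning data device in GB (assumed)
db_gb=$(( data_gb * 4 / 100 ))  # 4% of the data device
echo "recommended block.db: at least ${db_gb} GB"
```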

    The Ceph objecter handles where to place the objects and the tiering agent determines when to flush objects from the cache to the backing storage tier.

    VMIDs < 100 are reserved for internal purposes, and VMIDs need to be unique cluster wide.
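    The VMID rule can be sketched as a small validation helper (the function name is hypothetical; uniqueness still has to be checked cluster-wide):

```shell
# Sketch: VMIDs below 100 are reserved for internal purposes in Proxmox VE
is_valid_vmid() {
    # reject empty or non-numeric input, then enforce the >= 100 rule
    case "$1" in ''|*[!0-9]*) return 1;; esac
    [ "$1" -ge 100 ]
}
is_valid_vmid 102 && echo "102 is usable"
is_valid_vmid 99 || echo "99 is reserved"
```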

    block is the primary device
    block.db or block.wal
    A DB device (identified as block.db)
    A WAL device (identified as block.wal)


    Two common usage scenarios
    1. BLOCK (DATA) ONLY
    It makes sense to just deploy with block only and not try to separate block.db or block.wal.

    2. BLOCK AND BLOCK.DB
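    The two scenarios map onto ceph-volume like this (a sketch; the device paths are assumptions):

```shell
# 1. BLOCK (DATA) ONLY -- everything on a single device
ceph-volume lvm create --bluestore --data /dev/sdb
# 2. BLOCK AND BLOCK.DB -- data on the HDD, DB on a fast SSD/NVMe partition
ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p1
```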


    -----------------------------------------------
    Bibliography

    n. bibliography; a list of references

    The Proxmox VE management tool (pvesh) allows you to invoke API functions directly, without using the REST/HTTPS server.
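    For example, cluster information can be read through the API with pvesh (run on a PVE node; the paths below are standard API endpoints):

```shell
# Query the API locally, bypassing the REST/HTTPS server
pvesh get /cluster/status
pvesh get /nodes
```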

    # single time output
    pve# ceph -s
    # continuously output status changes (press CTRL+C to stop)
    pve# ceph -w

    -------------------------------------
    A volume is identified by the <STORAGE_ID>, followed by a storage-type dependent volume name, separated by a colon. A valid <VOLUME_ID> looks like:

    local:230/example-image.raw

    pvesm path <VOLUME_ID>

    root@cu-pve04:/var/lib/vz# pvesm path kycfs:iso/CentOS-7-x86_64-Minimal-1810.iso
    /mnt/pve/kycfs/template/iso/CentOS-7-x86_64-Minimal-1810.iso

    root@cu-pve04:/var/lib/vz# pvesm path kycrbd:vm-102-disk-0
    rbd:kycrbd/vm-102-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/kycrbd.keyring
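    Splitting a <VOLUME_ID> at the first colon can be sketched with plain shell parameter expansion (using the kycrbd example above):

```shell
# Split <STORAGE_ID>:<volume name> on the first colon
volid="kycrbd:vm-102-disk-0"
storage="${volid%%:*}"   # everything before the first colon -> storage ID
volname="${volid#*:}"    # everything after the first colon  -> volume name
echo "$storage / $volname"
```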

  • Original source: https://www.cnblogs.com/createyuan/p/10862225.html