Ceph Object Storage
Unlike a disk backing a filesystem, object storage cannot be accessed directly by the operating system; it is accessed only through an application-level API. Ceph is a distributed object storage system that exposes an object storage interface through the Ceph Object Gateway, also known as the RADOS Gateway (RGW), which is built on top of the Ceph RADOS layer. RGW uses librgw (the RADOS Gateway library) and librados to let applications connect to Ceph object storage. It provides applications with a RESTful, S3/Swift-compatible interface for storing data as objects in the Ceph cluster. Ceph also supports multi-tenant object storage accessible through the RESTful API, and RGW additionally supports the Ceph Admin API, so the storage cluster can be managed with native API calls.
The librados library is very flexible and allows user applications to access the Ceph storage cluster directly through C, C++, Java, Python, and PHP bindings. Ceph object storage also offers multi-site capability, which provides a disaster recovery solution.
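As a minimal sketch of the Python binding mentioned above (assuming python-rados is installed and that ceph.conf and the admin keyring are readable at the usual paths; adjust to your deployment), the following connects to the cluster through librados and prints basic usage information:

import rados

# Connect using the local ceph.conf and the admin keyring
# (both paths are assumptions; adjust to your deployment).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                      conf=dict(keyring='/etc/ceph/ceph.client.admin.keyring'))
cluster.connect()

# Print overall cluster usage and the pools visible to this client.
stats = cluster.get_cluster_stats()
print("used: %d kB, avail: %d kB" % (stats['kb_used'], stats['kb_avail']))
print("pools:", cluster.list_pools())

cluster.shutdown()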
Deploying object storage
Install ceph-radosgw
[ceph-admin@ceph-node1 my-cluster]$ sudo yum install ceph-radosgw
Deploy RGW
[ceph-admin@ceph-node1 my-cluster]$ ceph-deploy rgw create ceph-node1 ceph-node2 ceph-node3
[ceph-admin@ceph-node1 my-cluster]$ sudo netstat -tnlp |grep 7480
tcp        0      0 0.0.0.0:7480            0.0.0.0:*               LISTEN      15418/radosgw
To change the listening port to 80, edit the configuration file and restart the gateway service:
vim /etc/ceph/ceph.conf

[client.rgw.ceph-node1]
rgw_frontends = "civetweb port=80"

sudo systemctl restart ceph-radosgw@rgw.ceph-node1.service
Create the pools
[ceph-admin@ceph-node1 my-cluster]$ wget https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/rgw/pool
[ceph-admin@ceph-node1 my-cluster]$ wget https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/rgw/create_pool.sh
[ceph-admin@ceph-node1 my-cluster]$ cat create_pool.sh
#!/bin/bash

PG_NUM=30
PGP_NUM=30
SIZE=3

# Create each pool listed in the pool file, then set its replica size.
for i in `cat /home/ceph-admin/my-cluster/pool`
do
  ceph osd pool create $i $PG_NUM
  ceph osd pool set $i size $SIZE
done

# Set pgp_num in a second pass, after all pools exist.
for i in `cat /home/ceph-admin/my-cluster/pool`
do
  ceph osd pool set $i pgp_num $PGP_NUM
done
[ceph-admin@ceph-node1 my-cluster]$ chmod +x create_pool.sh
[ceph-admin@ceph-node1 my-cluster]$ ./create_pool.sh
Verify access to the Ceph cluster
[ceph-admin@ceph-node1 my-cluster]$ sudo ls -l /var/lib/ceph/
total 0
drwxr-x--- 2 ceph ceph  6 Jan 31 00:48 bootstrap-mds
drwxr-x--- 2 ceph ceph 26 Feb 14 13:30 bootstrap-mgr
drwxr-x--- 2 ceph ceph 26 Feb 14 13:21 bootstrap-osd
drwxr-x--- 2 ceph ceph  6 Jan 31 00:48 bootstrap-rbd
drwxr-x--- 2 ceph ceph 26 Feb 15 14:13 bootstrap-rgw
drwxr-x--- 2 ceph ceph  6 Jan 31 00:48 mds
drwxr-x--- 3 ceph ceph 29 Feb 14 13:30 mgr
drwxr-x--- 3 ceph ceph 29 Feb 14 12:01 mon
drwxr-x--- 5 ceph ceph 48 Feb 14 13:22 osd
drwxr-xr-x 3 root root 33 Feb 15 14:13 radosgw
[ceph-admin@ceph-node1 my-cluster]$ sudo cp /var/lib/ceph/radosgw/ceph-rgw.ceph-node1/keyring ./
[ceph-admin@ceph-node1 my-cluster]$ ceph -s -k keyring --name client.rgw.ceph-node1
  cluster:
    id:     cde2c9f7-009e-4bb4-a206-95afa4c43495
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3
    mgr: ceph-node1(active), standbys: ceph-node2, ceph-node3
    osd: 9 osds: 9 up, 9 in
    rgw: 3 daemons active

  data:
    pools:   18 pools, 550 pgs
    objects: 240 objects, 114MiB
    usage:   9.45GiB used, 171GiB / 180GiB avail
    pgs:     550 active+clean

  io:
    client:   0B/s rd, 0op/s rd, 0op/s wr
Accessing Ceph object storage with the S3 API
Create a radosgw user
[ceph-admin@ceph-node1 my-cluster]$ radosgw-admin user create --uid=radosgw --display-name="radosgw"
{
    "user_id": "radosgw",
    "display_name": "radosgw",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "radosgw",
            "access_key": "DKOORDOMS6YHR2OW5M23",
            "secret_key": "OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}
Install the s3cmd client
[root@localhost ~]# yum install -y s3cmd
[root@localhost ~]# s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: DKOORDOMS6YHR2OW5M23
Secret Key: OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4
Default Region [US]: ZH

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]:

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]:

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: no

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
  Access Key: DKOORDOMS6YHR2OW5M23
  Secret Key: OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4
  Default Region: ZH
  S3 Endpoint: s3.amazonaws.com
  DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.s3.amazonaws.com
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] n
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
Edit the S3 configuration file. The key changes are host_base and host_bucket, which must point at the RGW endpoint (ceph-node1:7480) instead of Amazon S3:
[root@localhost ~]# cat .s3cfg
[default]
access_key = DKOORDOMS6YHR2OW5M23
access_token =
add_encoding_exts =
add_headers =
bucket_location = US
ca_certs_file =
cache_file =
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
content_disposition =
content_type =
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase =
guess_mime_type = True
host_base = ceph-node1:7480
host_bucket = %(bucket)s.ceph-node1:7480
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key =
limit = -1
limitrate = 0
list_md5 = False
log_target_prefix =
long_listing = False
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
restore_priority = Standard
secret_key = OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
signurl_use_https = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
stats = False
stop_on_error = False
storage_class =
throttle_max = 100
upload_id =
urlencoding_mode = normal
use_http_expect = False
use_https = False
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html
Create a bucket and upload a file
[root@localhost ~]# s3cmd mb s3://first-bucket
Bucket 's3://first-bucket/' created
[root@localhost ~]# s3cmd ls
2019-02-15 07:45  s3://first-bucket
[root@localhost ~]# s3cmd put /etc/hosts s3://first-bucket
upload: '/etc/hosts' -> 's3://first-bucket/hosts'  [1 of 1]
 239 of 239   100% in    1s   175.80 B/s  done
[root@localhost ~]# s3cmd ls s3://first-bucket
2019-02-15 07:47       239   s3://first-bucket/hosts
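The same bucket can also be driven programmatically over the S3 API. Below is a minimal sketch using boto3 (one of several S3 SDKs; its use here is an assumption, not part of the original walkthrough), pointed at the RGW endpoint with the radosgw user's keys from above:

import boto3

# Point the S3 client at the RGW endpoint instead of Amazon S3;
# endpoint, keys, and bucket name are taken from the steps above.
s3 = boto3.client(
    's3',
    endpoint_url='http://ceph-node1:7480',
    aws_access_key_id='DKOORDOMS6YHR2OW5M23',
    aws_secret_access_key='OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4',
)

# List buckets, then the objects stored in first-bucket.
for bucket in s3.list_buckets()['Buckets']:
    print(bucket['Name'])
for obj in s3.list_objects_v2(Bucket='first-bucket').get('Contents', []):
    print(obj['Key'], obj['Size'])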
Accessing Ceph object storage with the Swift API
Create a Swift API subuser
[ceph-admin@ceph-node1 my-cluster]$ radosgw-admin subuser create --uid=radosgw --subuser=radosgw:swift --access=full
{
    "user_id": "radosgw",
    "display_name": "radosgw",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [
        {
            "id": "radosgw:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "radosgw",
            "access_key": "DKOORDOMS6YHR2OW5M23",
            "secret_key": "OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4"
        }
    ],
    "swift_keys": [
        {
            "user": "radosgw:swift",
            "secret_key": "bAL11KzCYE1GThPWY70tUo6dVIhvuIbSFEBP06yD"
        }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}
Install the Swift API client
[root@localhost ~]# yum install python-pip -y
[root@localhost ~]# pip install --upgrade python-swiftclient
Test
[root@localhost ~]# swift -A http://ceph-node1:7480/auth/1.0 -U radosgw:swift -K bAL11KzCYE1GThPWY70tUo6dVIhvuIbSFEBP06yD list
first-bucket
[root@localhost ~]# swift -A http://ceph-node1:7480/auth/1.0 -U radosgw:swift -K bAL11KzCYE1GThPWY70tUo6dVIhvuIbSFEBP06yD post second-bucket
[root@localhost ~]# swift -A http://ceph-node1:7480/auth/1.0 -U radosgw:swift -K bAL11KzCYE1GThPWY70tUo6dVIhvuIbSFEBP06yD list
first-bucket
second-bucket
[root@localhost ~]# s3cmd ls
2019-02-15 07:45  s3://first-bucket
2019-02-15 08:18  s3://second-bucket
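The Swift side can be exercised from Python as well. A minimal sketch with python-swiftclient (installed above), using the subuser and secret key created earlier:

from swiftclient.client import Connection

# Authenticate against RGW's Swift-compatible v1 auth endpoint;
# URL, user, and key come from the subuser created above.
conn = Connection(
    authurl='http://ceph-node1:7480/auth/1.0',
    user='radosgw:swift',
    key='bAL11KzCYE1GThPWY70tUo6dVIhvuIbSFEBP06yD',
)

# List containers, then the objects inside first-bucket.
for container in conn.get_account()[1]:
    print(container['name'])
for obj in conn.get_container('first-bucket')[1]:
    print(obj['name'], obj['bytes'])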