Ambari Single-Step Creation: Overview
- Single-step cluster creation means that each operation (installing and starting every service in the cluster, setting the host information for each component that makes up a service, and so on) is performed through its own call to the ambari-server REST API. This gives finer-grained, more flexible control over cluster operations.
- Because Blueprints only appeared after Ambari 1.5 and their support for non-HDP stack versions may be incomplete, the ambari-server REST API is invoked here in the single-step cluster-creation style instead.
- Note: before calling the REST API to add a service or perform similar operations, first make sure that the service is already defined in the stack (see the sketch below).
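- A hedged way to check, assuming the stack registered as "BAIDU-1.0" splits into stack name BAIDU and version 1.0 and that the Ambari build in use exposes the stacks endpoint, is to list the services the stack defines:
  # Hedged sketch: list the services defined in the stack before trying to add them to a cluster.
  curl -u admin:admin -H "X-Requested-By:ambari" http://<ambari-server>:8080/api/v1/stacks/BAIDU/versions/1.0/services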
Ambari Single-Step Creation: Detailed Steps (using the creation of an HDP stack version as the example)
- Call the cluster-creation API
  - The request body contains the stack version.
  - POST http://<ambari-server>:8080/api/v1/clusters/<cluster-name>
  - request body:
    {"Clusters": {"version": "BAIDU-1.0"}}
  - Actual command:
    curl -u admin:admin -i -X POST -d '{"Clusters":{"version":"BAIDU-1.0"}}' -H "X-Requested-By:ambari" http://<ambari-server>:8080/api/v1/clusters/<cluster-name>
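  - A quick, hedged way to confirm that the cluster was created is to read the new resource back:
    # Hedged check: GET the cluster resource that the POST above should have created.
    curl -u admin:admin -H "X-Requested-By:ambari" http://<ambari-server>:8080/api/v1/clusters/<cluster-name>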
- Add hosts to the created cluster
  - HTTP POST http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/hosts/<master/slave host>
  - Actual command:
    curl -u admin:admin -i -X POST -H "X-Requested-By:ambari" http://nj01-hadoop-rd03.nj01.baidu.com:8080/api/v1/clusters/abaciCluster/hosts/nj01-hadoop-rd03.nj01.baidu.com
  - where <ambari-server> = nj01-hadoop-rd03.nj01.baidu.com, <cluster-name> = abaciCluster, <master/slave host> = nj01-hadoop-rd03.nj01.baidu.com
  - Note: if the cluster has multiple hosts, run this command once per host, as in the loop sketched below.
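  - A minimal shell sketch of that per-host loop; the host names below are placeholders, not the real cluster hosts:
    # Hedged sketch: register each host with one POST per host.
    AMBARI=http://<ambari-server>:8080
    CLUSTER=abaciCluster
    for HOST in host1.example.com host2.example.com host3.example.com; do
      curl -u admin:admin -i -X POST -H "X-Requested-By:ambari" "$AMBARI/api/v1/clusters/$CLUSTER/hosts/$HOST"
    done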
- Add services to the created cluster (using the HDFS service as the example)
  - Add the HDFS service
    - HTTP POST http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/services
    - request body:
      {
        "ServiceInfo": {
          "service_name": "HDFS"
        }
      }
    - Actual command:
      curl -u admin:admin -i -X POST -d '{"ServiceInfo":{"service_name":"HDFS"}}' -H "X-Requested-By:ambari" http://nj01-hadoop-rd03.nj01.com:8080/api/v1/clusters/abaciCluster/services
    - where <ambari-server> = nj01-hadoop-rd03.nj01.com, <cluster-name> = abaciCluster, <service-name> = HDFS
  - Add the service components (here NAMENODE and DATANODE)
    - HTTP POST http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/services/<service-name>/components/<component-name>
    - Actual commands:
      curl -u admin:admin -i -X POST -H "X-Requested-By:ambari" http://nj01-hadoop-rd03.nj01.com:8080/api/v1/clusters/abaciCluster/services/HDFS/components/NAMENODE
      curl -u admin:admin -i -X POST -H "X-Requested-By:ambari" http://nj01-hadoop-rd03.nj01.com:8080/api/v1/clusters/abaciCluster/services/HDFS/components/DATANODE
    - where <ambari-server> = nj01-hadoop-rd03.nj01.com, <cluster-name> = abaciCluster, <service-name> = HDFS, <component-name> = NAMENODE/DATANODE
    - Note: the number of REST API calls is determined by how many component types the service has; one possible loop is sketched below.
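    - A hedged sketch of that loop for HDFS, reusing the AMBARI and CLUSTER shell variables from the host-registration sketch above; the component list is an assumption, since the exact component types depend on the stack definition:
      # Hedged sketch: create one component resource per component type of the HDFS service.
      for COMPONENT in NAMENODE SECONDARY_NAMENODE DATANODE HDFS_CLIENT; do
        curl -u admin:admin -i -X POST -H "X-Requested-By:ambari" "$AMBARI/api/v1/clusters/$CLUSTER/services/HDFS/components/$COMPONENT"
      done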
  - Add the user's configuration for HDFS
    - HTTP POST http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/configurations
    - request body, using hadoop-user-info.properties.xml as an example:
      {
        "type": "hadoop-user-info.properties",
        "tag": "1",
        "properties": {
          "root_ugi": "root,baidu",
          "user_ugi": "public,slave",
          "content": " # Format: username=password,group1,group2,group3 root=baidu,root public=slave,slave "
        }
      }
    - The key/value pairs above are stored in the command-json.
    - Actual command:
      curl -u admin:admin -i -X POST -d '{"type":"hadoop-user-info.properties","tag":"1","properties":{"root_ugi":"root,baidu","user_ugi":"public,slave","content":" # Format: username=password,group1,group2,group3 root=baidu,root public=slave,slave "}}' -H "X-Requested-By:ambari" http://<ambari-server>:8080/api/v1/clusters/abaciCluster/configurations
    - Note: there are many configuration types, so this REST API may need to be called several times, once per type; see the example below.
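    - For example, a hedged sketch of posting a second configuration type; the core-site type and the fs.defaultFS value are assumptions and depend on the stack being used:
      # Hedged sketch: create another configuration type (core-site) with its own tag.
      curl -u admin:admin -i -X POST -H "X-Requested-By:ambari" \
        -d '{"type":"core-site","tag":"1","properties":{"fs.defaultFS":"hdfs://nj01-hadoop-rd03.nj01.com:8020"}}' \
        http://<ambari-server>:8080/api/v1/clusters/abaciCluster/configurations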
  - Apply the configuration added by the user; after this update, the key/value pairs of the configuration are written into the command-json.
    - HTTP PUT http://<ambari-server>:8080/api/v1/clusters/<cluster-name>
    - request body:
      {
        "Clusters": {
          "desired_configs": {
            "type": "hadoop-user-info.properties",
            "tag": "1"
          }
        }
      }
    - Actual command:
      curl -u admin:admin -i -X PUT -d '{ "Clusters" : {"desired_configs": {"type": "hadoop-user-info.properties", "tag" : "1" }}}' -H "X-Requested-By:ambari" http://<ambari-server>:8080/api/v1/clusters/abaciCluster
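    - A hedged way to double-check which configurations are currently in effect is Ambari's partial-response query on the cluster resource:
      # Hedged check: list the cluster's current desired configurations.
      curl -u admin:admin -H "X-Requested-By:ambari" "http://<ambari-server>:8080/api/v1/clusters/abaciCluster?fields=Clusters/desired_configs"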
  - Assign the corresponding host(s) to each component (rd03 is the NAMENODE; rd02 and rd06 are DATANODEs)
    - HTTP POST http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/hosts?Hosts/host_name=<host-name>
    - request body:
      {
        "host_components": [
          {
            "HostRoles": {
              "component_name": "NAMENODE"
            }
          }
        ]
      }
    - Actual commands:
      curl -u admin:admin -i -X POST -d '{"host_components":[{"HostRoles":{"component_name":"NAMENODE"}}]}' -H "X-Requested-By:ambari" "http://<ambari-server>:8080/api/v1/clusters/abaciCluster/hosts?Hosts/host_name=nj01-hadoop-rd03.nj01.com"
      curl -u admin:admin -i -X POST -d '{"host_components":[{"HostRoles":{"component_name":"DATANODE"}}]}' -H "X-Requested-By:ambari" "http://<ambari-server>:8080/api/v1/clusters/abaciCluster/hosts?Hosts/host_name=nj01-hadoop-rd02.nj01.com"
      curl -u admin:admin -i -X POST -d '{"host_components":[{"HostRoles":{"component_name":"DATANODE"}}]}' -H "X-Requested-By:ambari" "http://<ambari-server>:8080/api/v1/clusters/abaciCluster/hosts?Hosts/host_name=nj01-hadoop-rd06.nj01.com"
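    - When several hosts carry the same component, the per-host POST can also be looped in the style of the earlier sketches; the DATANODE host list below is taken from this example and would need to match the real cluster:
      # Hedged sketch: attach the DATANODE component to each slave host in turn.
      for HOST in nj01-hadoop-rd02.nj01.com nj01-hadoop-rd06.nj01.com; do
        curl -u admin:admin -i -X POST -H "X-Requested-By:ambari" \
          -d '{"host_components":[{"HostRoles":{"component_name":"DATANODE"}}]}' \
          "$AMBARI/api/v1/clusters/$CLUSTER/hosts?Hosts/host_name=$HOST"
      done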
- After all services have been deployed, install, start, and stop the cluster services
  - Install a service
    - HTTP PUT http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/services/<service-name>
    - request body:
      {
        "ServiceInfo": {
          "state": "INSTALLED"
        }
      }
    - Actual command:
      curl -u admin:admin -i -X PUT -d '{"ServiceInfo": {"state" : "INSTALLED"}}' -H "X-Requested-By:ambari" http://<ambari-server>:8080/api/v1/clusters/abaciCluster/services/ZOOKEEPER
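    - These state changes are asynchronous: Ambari normally answers with 202 Accepted and a request resource that can be polled for progress. A hedged sketch, assuming the returned request id was 1 and the usual Requests fields are available:
      # Hedged check: poll the background request created by the state change.
      curl -u admin:admin -H "X-Requested-By:ambari" "http://<ambari-server>:8080/api/v1/clusters/abaciCluster/requests/1?fields=Requests/request_status,Requests/progress_percent"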
  - Start a service
    - HTTP PUT http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/services/<service-name>
    - request body:
      {
        "ServiceInfo": {
          "state": "STARTED"
        }
      }
    - Actual command:
      curl -u admin:admin -i -X PUT -d '{"ServiceInfo": {"state" : "STARTED"}}' -H "X-Requested-By:ambari" http://<ambari-server>:8080/api/v1/clusters/abaciCluster/services/ZOOKEEPER
  - Stop a service
    - The stop command is exactly the same as the install command: when the service's state in the cluster is STARTED, calling the Ambari API's install command makes Ambari shut down the running service processes through the stop function in the service's Python script; see the hedged example below.
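    - A hedged example of stopping a running ZOOKEEPER service by putting it back into the INSTALLED state:
      # Hedged sketch: a STARTED service is stopped by setting its desired state back to INSTALLED.
      curl -u admin:admin -i -X PUT -d '{"ServiceInfo": {"state" : "INSTALLED"}}' -H "X-Requested-By:ambari" http://<ambari-server>:8080/api/v1/clusters/abaciCluster/services/ZOOKEEPER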
This article has been edited. For detailed questions about Ambari, feel free to contact me so we can discuss them together.