• devstack: restarting services


    Each of the following sections gives the restart procedure for one service.
    Ceilometer
    1. Log in to a controller node CLI.

    2. Restart the Ceilometer services:

      # service ceilometer-agent-central restart
      # service ceilometer-api restart
      # service ceilometer-agent-notification restart
      # service ceilometer-collector restart
      
    3. Verify the status of the Ceilometer services. See Verify an OpenStack service status.

    4. Repeat steps 1-3 on all controller nodes.
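
    The restart-and-verify pattern above repeats for every service in this guide. As a sketch, it can be wrapped in a small bash loop; the `restart_and_check` helper and its dry-run `echo` runner are our own additions, not part of any OpenStack tooling:

```shell
#!/usr/bin/env bash
# restart_and_check RUNNER SERVICE...: restart each service with RUNNER,
# then query its status. RUNNER is normally "sudo"; pass "echo" to only
# print the commands (a dry run).
restart_and_check() {
  local runner="$1"; shift
  local svc
  for svc in "$@"; do
    "$runner" service "$svc" restart
    "$runner" service "$svc" status
  done
}

# Dry run over the Ceilometer services from step 2:
restart_and_check echo \
  ceilometer-agent-central \
  ceilometer-api \
  ceilometer-agent-notification \
  ceilometer-collector
```

    Replacing `echo` with `sudo` executes the same commands for real.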

    Cinder
    1. Log in to a controller node CLI.

    2. Restart the Cinder services:

      # service cinder-api restart
      # service cinder-scheduler restart
      
    3. Verify the status of the Cinder services. See Verify an OpenStack service status.

    4. Repeat steps 1-3 on all controller nodes.

    5. On every node with the Cinder role, run:

      # service cinder-volume restart
      # service cinder-backup restart
      
    6. Verify the status of the cinder-volume and cinder-backup services.

    Corosync/Pacemaker
    1. Log in to a controller node CLI.

    2. Restart the Corosync and Pacemaker services:

      # service corosync restart
      # service pacemaker restart
      
    3. Verify the status of the Corosync and Pacemaker services. See Verify an OpenStack service status.

    4. Repeat steps 1-3 on all controller nodes.

    Glance
    1. Log in to a controller node CLI.

    2. Restart the Glance services:

      # service glance-api restart
      # service glance-registry restart
      
    3. Verify the status of the Glance services. See Verify an OpenStack service status.

    4. Repeat steps 1-3 on all controller nodes.

    Horizon

    Since the Horizon service is available through the Apache server, you should restart the Apache service on all controller nodes:

    1. Log in to a controller node CLI.

    2. Restart the Apache server:

      # service apache2 restart
      
    3. Verify whether the Apache service is successfully running after restart:

      # service apache2 status
      
    4. Verify whether the Apache ports are opened and listening:

      # netstat -nltp | egrep apache2
      
    5. Repeat steps 1-4 on all controller nodes.

    Ironic
    1. Log in to a controller node CLI.

    2. Restart the Ironic services:

      # service ironic-api restart
      # service ironic-conductor restart
      
    3. Verify the status of the Ironic services. See Verify an OpenStack service status.

    4. Repeat steps 1-3 on all controller nodes.

    5. On any controller node, run the following command for the nova-compute service configured to work with Ironic:

      # crm resource restart p_nova_compute_ironic
      
    6. Verify the status of the p_nova_compute_ironic service.
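
    Verifying the p_nova_compute_ironic resource means checking that Pacemaker reports it as Started. A minimal sketch of that check, grepping the cluster status output; the `resource_started` helper and the sample status line are illustrative assumptions, and the exact `pcs status`/`crm status` format varies by version:

```shell
# resource_started NAME: read cluster status text on stdin and succeed
# if the line for NAME reports Started.
resource_started() {
  grep "$1" | grep -q "Started"
}

# Hypothetical, abbreviated status line for the Ironic compute resource:
echo " p_nova_compute_ironic (ocf::fuel:NovaCompute): Started node-1" \
  | resource_started p_nova_compute_ironic && echo "resource is up"
```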

    Keystone

    Since the Keystone service is available through the Apache server, complete the following steps on all controller nodes:

    1. Log in to a controller node CLI.

    2. Restart the Apache server:

      # service apache2 restart
      
    3. Verify whether the Apache service is successfully running after restart:

      # service apache2 status
      
    4. Verify whether the Apache ports are opened and listening:

      # netstat -nltp | egrep apache2
      
    5. Repeat steps 1-4 on all controller nodes.

    MySQL
    1. Log in to any controller node CLI.

    2. Run the following command:

      # pcs status | grep -A1 mysql
      

      In the output, the resource clone_p_mysqld should be in the Started status.

    3. Disable the clone_p_mysqld resource:

      # pcs resource disable clone_p_mysqld
      
    4. Verify that the resource clone_p_mysqld is in the Stopped status:

      # pcs status | grep -A2 mysql
      

      It may take some time for this resource to be stopped on all controller nodes.

    5. Enable the clone_p_mysqld resource:

      # pcs resource enable clone_p_mysqld
      
    6. Verify that the resource clone_p_mysqld is in the Started status again on all controller nodes:

      # pcs status | grep -A2 mysql
      

    Warning

    Use the pcs commands instead of crm to restart the service. The pcs tool stops the service according to the quorum policy, which prevents MySQL failures.
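
    As step 4 notes, stopping the resource on all controllers can take some time. Instead of re-running pcs status by hand, the wait can be scripted; `wait_for_state` below is a helper of our own, not a pcs command:

```shell
# wait_for_state CMD PATTERN [ATTEMPTS]: re-run CMD (a shell command
# string) once per second until its output matches PATTERN, or give up
# after ATTEMPTS tries (default 30).
wait_for_state() {
  local cmd="$1" pattern="$2" attempts="${3:-30}" i
  for ((i = 0; i < attempts; i++)); do
    if eval "$cmd" | grep -q "$pattern"; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# After disabling the resource (step 3), wait until it reports Stopped:
# wait_for_state "pcs status | grep -A2 mysql" "Stopped" 60
```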

    Neutron

    Use the following restart steps for the DHCP Neutron agent as an example for all Neutron agents.

    1. Log in to any controller node CLI.

    2. Verify the DHCP agent status:

      # pcs resource show | grep -A1 neutron-dhcp-agent
      

      The output should contain the list of all controllers in the Started status.

    3. Stop the DHCP agent:

      # pcs resource disable clone_neutron-dhcp-agent
      
    4. Verify the Corosync status of the DHCP agent:

      # pcs resource show | grep -A1 neutron-dhcp-agent
      

      The output should contain the list of all controllers in the Stopped status.

    5. Verify the neutron-dhcp-agent status on the OpenStack side:

      # neutron agent-list
      

      The output table should contain the DHCP agents for every controller node with xxx in the alive column.

    6. Start the DHCP agent on every controller node:

      # pcs resource enable clone_neutron-dhcp-agent
      
    7. Verify the DHCP agent status:

      # pcs resource show | grep -A1 neutron-dhcp-agent
      

      The output should contain the list of all controllers in the Started status.

    8. Verify the neutron-dhcp-agent status on the OpenStack side:

      # neutron agent-list
      

      The output table should contain the DHCP agents for every controller node with :-) in the alive column and True in the admin_state_up column.
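
    Steps 5 and 8 both read the alive column of the neutron agent-list table. A sketch of automating that check; the helper name and the two-row table excerpt are illustrative assumptions, not real command output:

```shell
# dead_dhcp_agents: read `neutron agent-list` table output on stdin and
# count the DHCP agent rows whose alive column shows xxx (dead).
dead_dhcp_agents() {
  grep "DHCP agent" | grep -c "xxx"
}

# Hypothetical two-row excerpt of the agent-list table:
sample='| DHCP agent | node-1 | xxx | True |
| DHCP agent | node-2 | :-) | True |'
printf '%s\n' "$sample" | dead_dhcp_agents   # prints 1
```

    A result of 0 after step 8 would mean every DHCP agent is alive again.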

    Nova
    1. Log in to a controller node CLI.

    2. Restart the Nova services:

      # service nova-api restart
      # service nova-cert restart
      # service nova-compute restart
      # service nova-conductor restart
      # service nova-consoleauth restart
      # service nova-novncproxy restart
      # service nova-scheduler restart
      # service nova-spicehtml5proxy restart
      # service nova-xenvncproxy restart
      
    3. Verify the status of the Nova services. See Verify an OpenStack service status.

    4. Repeat steps 1-3 on all controller nodes.

    5. On every compute node, run:

      # service nova-compute restart
      
    6. Verify the status of the nova-compute service.
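
    Not every service in the step 2 list exists on every node (for example, nova-spicehtml5proxy or nova-xenvncproxy may not be installed). A sketch that restarts only the services actually present; the helper name, the INIT_DIR override, and the sysvinit script layout implied by the `service` commands above are all assumptions of this sketch:

```shell
# restart_if_present RUNNER SERVICE...: restart each service with RUNNER,
# but only if its init script exists (sysvinit layout assumed; INIT_DIR
# is overridable for testing).
restart_if_present() {
  local runner="$1"; shift
  local svc
  for svc in "$@"; do
    if [ -e "${INIT_DIR:-/etc/init.d}/$svc" ]; then
      "$runner" service "$svc" restart
    fi
  done
}

# Dry run (echo instead of sudo) over the full list from step 2:
restart_if_present echo \
  nova-api nova-cert nova-compute nova-conductor nova-consoleauth \
  nova-novncproxy nova-scheduler nova-spicehtml5proxy nova-xenvncproxy
```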

    RabbitMQ
    1. Log in to any controller node CLI.

    2. Disable the RabbitMQ service:

      # pcs resource disable master_p_rabbitmq-server
      
    3. Verify whether the service is stopped:

      # pcs status | grep -A2 rabbitmq
      
    4. Enable the service:

      # pcs resource enable master_p_rabbitmq-server
      

      During the startup process, the output of the pcs status command can show all existing RabbitMQ services in the Slaves mode.

    5. Verify the service status:

      # rabbitmqctl cluster_status
      

      In the output, the running_nodes field should contain all controllers’ host names in the rabbit@<HOSTNAME> format. The partitions field should be empty.
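
    The partitions check can be scripted by grepping that field. A sketch, assuming the classic Erlang-term output format of `rabbitmqctl cluster_status`; the sample line is a hypothetical excerpt:

```shell
# no_partitions: read `rabbitmqctl cluster_status` output on stdin and
# succeed if the partitions field is empty.
no_partitions() {
  grep -q '{partitions,\[\]}'
}

# Hypothetical excerpt of the cluster_status output:
echo ' {partitions,[]}]' | no_partitions && echo "no partitions"
```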

    Swift
    1. Log in to a controller node CLI.

    2. Restart the Swift services:

      # service swift-account-auditor restart
      # service swift-account restart
      # service swift-account-reaper restart
      # service swift-account-replicator restart
      # service swift-container-auditor restart
      # service swift-container restart
      # service swift-container-reconciler restart
      # service swift-container-replicator restart
      # service swift-container-sync restart
      # service swift-container-updater restart
      # service swift-object-auditor restart
      # service swift-object restart
      # service swift-object-reconstructor restart
      # service swift-object-replicator restart
      # service swift-object-updater restart
      # service swift-proxy restart
      
    3. Verify the status of the Swift services. See Verify an OpenStack service status.

    4. Repeat steps 1-3 on all controller nodes.

    Operating on more than one unit at a time

    Systemd supports wildcards for unit operations. To restart every service in DevStack, you can do the following:

    sudo systemctl restart devstack@*
    

    Or to see the status of all Nova processes you can do:

    sudo systemctl status devstack@n-*
    

    We’ll eventually make the unit names a bit more meaningful so that it’s easier to understand what you are restarting.
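
    One caveat: a unit pattern like devstack@n-* is also a shell glob. If a file in the current directory happens to match it, the shell expands the pattern before systemctl ever sees it, so quoting the pattern (for example, sudo systemctl status 'devstack@n-*') is safer. The snippet below demonstrates the underlying globbing behavior in plain bash:

```shell
# In a fresh directory, create a file whose name matches the pattern.
tmp=$(mktemp -d) && cd "$tmp"
touch 'devstack@n-api'

echo devstack@n-*      # expanded by the shell: prints devstack@n-api
echo 'devstack@n-*'    # quoted: prints devstack@n-* literally
```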


  • Original article: https://www.cnblogs.com/gaizhongfeng/p/10667438.html