OpenStack Neutron L2 Population


    Why do we need it, whatever it is?

    VM unicast, multicast and broadcast traffic flow is detailed in my previous post:

    Tunnels in Openstack Neutron

    TL;DR: Agent OVS flow tables implement learning. That is, any unknown unicast destination (i.e., MAC addresses the virtual switch is not familiar with), multicast or broadcast traffic is flooded out tunnels to all other compute nodes. Incoming traffic is used for learning: its source MAC address is added to a learning table, so future traffic to that MAC address is not flooded but sent directly to the hosting node. There are several inefficiencies here (a toy sketch of this flood-and-learn behavior follows the list below):

    1. The MAC addresses aren’t initially known by the agents, but the Neutron service has full knowledge of the topology
    2. There’s still a lot of broadcasts going around in the form of ARP requests. Maybe we can optimize those away?
    3. More about broadcasts: What if a node isn’t hosting any ports in a specific network? Should this node receive broadcast traffic designated to that network?
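
    To make the flood-and-learn behavior above concrete, here is a toy Python sketch (not Neutron code; every name in it is made up for illustration):

    # Toy flood-and-learn switch; illustrative only, not Neutron code.
    class LearningSwitch(object):
        def __init__(self, tunnels):
            self.tunnels = tunnels   # {"node-2": port, "node-3": port, ...}
            self.fdb = {}            # learned: source MAC -> tunnel port

        def receive(self, src_mac, in_port):
            # Learning: remember which tunnel this MAC was last seen on.
            self.fdb[src_mac] = in_port

        def send(self, dst_mac, frame):
            port = self.fdb.get(dst_mac)
            if port is not None:
                port.transmit(frame)  # known unicast: one tunnel only
            else:
                # Unknown unicast, multicast or broadcast: flood out every
                # tunnel, even to nodes hosting no ports on this network.
                for port in self.tunnels.values():
                    port.transmit(frame)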

    A great visual explanation of the third point can be found in the official OpenStack documentation (linked at the end of this post).

    Overview

    When using the ML2 plugin with tunnels and a new port goes up, ML2 sends an update_port_postcommit notification, which is picked up and processed by the l2pop mechanism driver. l2pop then gathers the IP and MAC of the port, as well as the host that the port was scheduled on; it then sends an RPC notification to all layer 2 agents. The agents use the notification to solve the three issues detailed above.
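
    To give a feel for that notification, this is roughly the shape of the fdb entries payload carried by the cast (the exact layout differs between releases, and every value below is invented):

    # Rough shape of the fdb entries dict fanned out to agents (values invented).
    fdb_entries = {
        "net-uuid-1": {
            "network_type": "vxlan",
            "segment_id": 1001,                        # tunnel key / VNI
            "ports": {
                "192.168.0.10": [                      # tunnel IP of the hosting agent
                    ["00:00:00:00:00:00", "0.0.0.0"],  # flooding entry
                    ["fa:16:3e:aa:bb:cc", "10.0.0.5"], # the new port's MAC and IP
                ],
            },
        },
    }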

    Configuration

    ml2_conf.ini:
    [ml2]
    mechanism_drivers = ..., l2population, ...
    [agent]
    l2_population = True

    Deep-Dive & Code

    plugins/ml2/drivers/l2pop/mech_driver.py:update_port_postcommit calls _update_port_up. In _update_port_up we send the new port’s IP and MAC address to all agents via an ‘add_fdb_entries’ RPC fanout cast. Additionally, if this new port is the first port in a network on the scheduled agent, then we send all IP and MAC addresses on the network to that agent.
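
    A condensed paraphrase of that logic (the helper names below are invented; the real driver is more involved):

    # Paraphrased _update_port_up logic; helper names are invented.
    def cast_add_fdb_entries(entries, host=None, fanout=False):
        """Stand-in for the l2pop RPC cast; just report what would be sent."""
        target = "all agents (fanout)" if fanout else "agent on %s" % host
        print("add_fdb_entries -> %s: %s" % (target, entries))

    def update_port_up(port, agent_ip, agent_host, ports_on_agent, network_fdb):
        # Entry describing just the new port, keyed by its network.
        port_entry = {
            port["network_id"]: {
                "network_type": "vxlan",
                "segment_id": port["segment_id"],
                "ports": {agent_ip: [[port["mac"], port["ip"]]]},
            }
        }
        if ports_on_agent == 1:
            # First port of this network on that agent: send it everything the
            # service already knows about the network, in a directed cast.
            cast_add_fdb_entries(network_fdb, host=agent_host)
        # Tell all agents about the new port with a fanout cast.
        cast_add_fdb_entries(port_entry, fanout=True)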

    ‘add_fdb_entries’ is picked up by agent/l2population_rpc.py:add_fdb_entries, which calls fdb_add if the RPC call was a fanout or was directed to the local host.
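
    Conceptually, the agent-side handler amounts to something like this (a hypothetical sketch; only the add_fdb_entries and fdb_add names come from the text above):

    # Hypothetical dispatch on the agent side; simplified, not the real mixin.
    def add_fdb_entries(fdb_entries, host=None, local_host="compute-1"):
        # Act only on fanout casts (no target host) or casts aimed at this host.
        if host is None or host == local_host:
            fdb_add(fdb_entries)

    def fdb_add(fdb_entries):
        print("programming flows for:", fdb_entries)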

    fdb_add is implemented by the OVS and LB agents: plugins/openvswitch/agent/ovs_neutron_agent.py and plugins/linuxbridge/agent/linuxbridge_neutron_agent.py.

    In the OVS agent, fdb_add accomplishes three main things (see the sketch after the list below):

    For each port received:

    1. Set up a tunnel to the remote agent if one does not already exist
    2. If it’s a flood entry, set up a flood flow to the remote network. Reminder: A flood entry is sent to all agents whenever a port goes up that happens to be the first port for an agent & network pair
    3. If it’s a unicast entry, add it to the unicast learning table
    4. A big fat TO-DO about ARP replies. Implemented in the Icehouse release with this patch: https://review.openstack.org/#/c/49227/
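
    Here is the sketch referred to above: an illustrative outline of the per-port handling (helper names are invented; the real agent programs OpenFlow flows rather than calling these stubs):

    # Illustrative outline of fdb_add in the OVS agent; helpers are stubs.
    FLOODING_MAC = "00:00:00:00:00:00"

    def setup_tunnel_port(remote_ip):
        print("creating a tunnel port to %s" % remote_ip)
        return hash(remote_ip) % 1000  # fake ofport number

    def add_to_flood_flow(net_id, ofport):
        print("net %s: adding ofport %s to the flood flow" % (net_id, ofport))

    def add_unicast_flow(net_id, mac, ofport):
        print("net %s: unicast %s -> ofport %s" % (net_id, mac, ofport))

    def fdb_add(fdb_entries, local_tunnels):
        for net_id, net in fdb_entries.items():
            for remote_ip, port_entries in net["ports"].items():
                # 1. Set up a tunnel to the remote agent if one doesn't exist yet.
                if remote_ip not in local_tunnels:
                    local_tunnels[remote_ip] = setup_tunnel_port(remote_ip)
                ofport = local_tunnels[remote_ip]
                for mac, ip in port_entries:
                    if mac == FLOODING_MAC:
                        # 2. Flood entry: include this tunnel in the flood flow.
                        add_to_flood_flow(net_id, ofport)
                    else:
                        # 3. Unicast entry: pre-populate the learning table so
                        #    frames for this MAC go straight out this tunnel.
                        add_unicast_flow(net_id, mac, ofport)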

    Finally, with l2_population = True, a bunch of code in the OVS agent is disabled: tunnel_update and tunnel_sync RPC messages are ignored, replaced by fdb_add and fdb_remove.

    Supported Topologies

    All of this is fully supported since the Havana release when using GRE and VXLAN tunneling with the ML2 plugin, apart from the ARP resolution optimization which is implemented only for the Linux bridge agent with the VXLAN driver. ARP resolution will be added to the OVS agent with GRE and VXLAN drivers in the Icehouse release.

    Links

    http://docs.openstack.org/admin-guide-cloud/content/ch_networking.html#ml2_l2pop_scenarios

    This article is reposted from http://assafmuller.com/2014/02/23/ml2-address-population/
