EVPN VPLS uses the EVPN E-LAN model to carry multipoint-to-multipoint (MP2MP) VPLS services. EVPN VPLS over SRv6 BE builds on this technology, using SRv6 BE paths over public networks to carry EVPN E-LAN services. The implementation of EVPN VPLS over SRv6 BE involves three parts: establishing SRv6 BE paths, advertising EVPN routes, and forwarding data.
Figure 1 shows how MAC addresses are learned in an EVPN VPLS over SRv6 BE scenario. Each PE is configured with EVI 1, RT 100:1, and BD 1, and connected to a CE through an AC interface that belongs to VLAN 10.
The process is described as follows (a control-plane sketch follows the steps):
1. PE3 learns MAC3 (the MAC address of CE3) from its AC interface in the data plane and records it in the MAC address table of BD 1.
2. PE3 advertises a BGP EVPN MAC/IP advertisement route that carries MAC3 and the SRv6 VPN SID (an End.DT2U SID, A3:1::B300 in this example).
3. After receiving the route, PE1 and PE2 match it against the locally configured RT 100:1 and install MAC3 in their MAC address tables, together with the SRv6 VPN SID and next hop.
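The following minimal Python sketch models this exchange. The MacRoute fields and the PE class are hypothetical simplifications of the BGP EVPN MAC/IP advertisement route and PE route processing, not a real BGP implementation; only the EVI, RT, MAC, and SID values come from the example.

```python
from dataclasses import dataclass

# Illustrative stand-in for a BGP EVPN MAC/IP advertisement route;
# the field names are hypothetical, not an actual BGP message layout.
@dataclass(frozen=True)
class MacRoute:
    evi: int        # EVPN instance (EVI 1 in this example)
    rt: str         # route target ("100:1")
    mac: str        # MAC address learned from the AC
    vpn_sid: str    # End.DT2U SID allocated by the advertising PE
    next_hop: str   # routable address of the advertising PE

class PE:
    def __init__(self, name: str, evi: int, rt: str) -> None:
        self.name, self.evi, self.rt = name, evi, rt
        self.mac_table: dict[str, tuple[str, str]] = {}  # MAC -> (SID, next hop)

    def learn_local(self, mac: str, vpn_sid: str) -> MacRoute:
        """Learn a MAC from a local AC in the data plane and advertise it."""
        self.mac_table[mac] = ("local AC", self.name)
        return MacRoute(self.evi, self.rt, mac, vpn_sid, self.name)

    def receive(self, route: MacRoute) -> None:
        """Install a remote MAC entry if the route's RT matches the local RT."""
        if route.rt == self.rt and route.next_hop != self.name:
            self.mac_table[route.mac] = (route.vpn_sid, route.next_hop)

pe1, pe3 = PE("PE1", 1, "100:1"), PE("PE3", 1, "100:1")
pe1.receive(pe3.learn_local("MAC3", "A3:1::B300"))
print(pe1.mac_table["MAC3"])  # ('A3:1::B300', 'PE3')
```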
Figure 2 shows how unicast data is forwarded in an EVPN VPLS over SRv6 BE scenario.
In the data forwarding phase (a condensed pipeline sketch follows the steps):
1. After receiving the packet through the BD interface, PE1 searches its MAC address table and finds the SRv6 VPN SID and the next hop associated with MAC3. PE1 then encapsulates the packet into an IPv6 packet using the SRv6 VPN SID A3:1::B300 as the destination address.
2. PE1 finds the route A3:1::/96 based on the longest match rule and forwards the packet to the P device over the shortest path.
3. Similarly, the P device finds the route A3:1::/96 based on the longest match rule and forwards the packet to PE3 over the shortest path.
4. PE3 searches its My Local SID Table for an End.DT2U SID that matches A3:1::B300. According to the instruction specified by the SID, PE3 removes the IPv6 packet header and finds the BD corresponding to the End.DT2U SID. PE3 then forwards the original Layer 2 packet to CE3 based on the destination MAC address MAC3.
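The four steps above can be condensed into a short Python sketch. The table contents mirror the example (MAC3 mapped to A3:1::B300, locator route A3:1::/96); the function names and the dictionary-based packet representation are hypothetical.

```python
import ipaddress

MAC_TABLE_PE1 = {"MAC3": "A3:1::B300"}          # installed via the EVPN route
IPV6_FIB = {"A3:1::/96": "next hop toward PE3"}  # PE3's locator route
LOCAL_SIDS_PE3 = {"A3:1::B300": ("End.DT2U", "BD 1")}

def ingress(frame: str, dst_mac: str) -> dict:
    """PE1: MAC lookup, then IPv6 encapsulation with the VPN SID as the DA."""
    return {"ipv6_dst": MAC_TABLE_PE1[dst_mac], "payload": frame}

def lpm(pkt: dict) -> str:
    """PE1/P: longest-prefix match of the IPv6 DA against the FIB."""
    dst, best = ipaddress.IPv6Address(pkt["ipv6_dst"]), None
    for prefix, nh in IPV6_FIB.items():
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, nh)
    return best[1]

def egress(pkt: dict) -> tuple:
    """PE3: match the DA in My Local SID Table, decapsulate, forward in the BD."""
    behavior, bd = LOCAL_SIDS_PE3[pkt["ipv6_dst"]]
    assert behavior == "End.DT2U"
    return bd, pkt["payload"]  # the L2 frame is then sent to CE3 by dst MAC

pkt = ingress("frame for CE3", "MAC3")
print(lpm(pkt))     # next hop toward PE3
print(egress(pkt))  # ('BD 1', 'frame for CE3')
```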
Figure 3 shows a multicast distribution tree (MDT) for BUM traffic in an EVPN VPLS over SRv6 BE scenario. In this example, PEs exchange inclusive multicast Ethernet tag routes carrying PMSI tunnel attributes, which in turn carry routable PE addresses and End.DT2M VPN SIDs. After receiving these routes, each PE establishes an MDT for each EVI.
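As a rough illustration, the sketch below builds the per-EVI leaf set from received inclusive multicast Ethernet tag routes. The route fields are hypothetical simplifications of the route and its PMSI tunnel attribute, and PE1's SID A1:1::B101 is an assumed value not given in the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InclusiveMulticastRoute:
    evi: int
    originator: str    # routable PE address carried in the PMSI tunnel attribute
    end_dt2m_sid: str  # End.DT2M VPN SID carried in the PMSI tunnel attribute

def build_mdt(local_pe: str, routes: list) -> dict:
    """Each PE builds one MDT leaf set per EVI from the routes it receives."""
    mdt: dict[int, list] = {}
    for r in routes:
        if r.originator != local_pe:  # the local PE is the root, not a leaf
            mdt.setdefault(r.evi, []).append((r.originator, r.end_dt2m_sid))
    return mdt

routes = [
    InclusiveMulticastRoute(1, "PE1", "A1:1::B101"),  # assumed SID for PE1
    InclusiveMulticastRoute(1, "PE2", "A2:1::B201"),
    InclusiveMulticastRoute(1, "PE3", "A3:1::B301"),
]
print(build_mdt("PE1", routes))
# {1: [('PE2', 'A2:1::B201'), ('PE3', 'A3:1::B301')]}
```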
Figure 4 shows how BUM traffic is forwarded in an EVPN VPLS over SRv6 BE scenario.
In the data forwarding phase (a replication sketch follows the steps):
1. After receiving the packet through the BD interface, PE1 searches its MDT for leaf node information. PE1 encapsulates the BUM traffic into IPv6 packets using the SRv6 VPN SIDs of the leaf nodes as the destination addresses and replicates the packets to all the leaf nodes in the MDT. The SRv6 VPN SID is A2:1::B201 for BUM traffic from CE1 to CE2 and A3:1::B301 for BUM traffic from CE1 to CE3.
2. PE1 finds the desired routes based on the longest match rule and forwards the packets to the P device over the shortest path. The route is A2:1::/96 for BUM traffic from CE1 to CE2 and A3:1::/96 for BUM traffic from CE1 to CE3.
3. The P device finds the desired routes based on the longest match rule and forwards the packets to PE2 and PE3 over the shortest paths.
4. PE2 and PE3 search their My Local SID Tables based on the SRv6 VPN SIDs and find End.DT2M SIDs. According to the instructions specified by the End.DT2M SIDs, PE2 and PE3 remove the IPv6 packet headers and find the BDs corresponding to the End.DT2M SIDs. PE2 and PE3 then flood the original Layer 2 packets through their AC interfaces to the receivers.
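Continuing the sketch above, ingress replication on PE1 and the End.DT2M flood on the leaves might look as follows; the table layouts and AC names are illustrative.

```python
MDT_PE1 = {1: [("PE2", "A2:1::B201"), ("PE3", "A3:1::B301")]}
LOCAL_SID_TABLES = {
    "PE2": {"A2:1::B201": ("End.DT2M", "BD 1", ["AC to CE2"])},
    "PE3": {"A3:1::B301": ("End.DT2M", "BD 1", ["AC to CE3"])},
}

def replicate_bum(evi: int, frame: str) -> list:
    """PE1: one IPv6 copy per leaf, with the leaf's End.DT2M SID as the DA."""
    return [{"leaf": pe, "ipv6_dst": sid, "payload": frame}
            for pe, sid in MDT_PE1[evi]]

def egress_flood(pkt: dict) -> list:
    """Leaf PE: match the End.DT2M SID, decapsulate, flood to the BD's ACs."""
    _behavior, _bd, acs = LOCAL_SID_TABLES[pkt["leaf"]][pkt["ipv6_dst"]]
    return [(ac, pkt["payload"]) for ac in acs]

for copy in replicate_bum(1, "BUM frame from CE1"):
    print(copy["ipv6_dst"], egress_flood(copy))
# A2:1::B201 [('AC to CE2', 'BUM frame from CE1')]
# A3:1::B301 [('AC to CE3', 'BUM frame from CE1')]
```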
In a CE multi-homing scenario, after an MDT is established, split horizon must be used to prune BUM traffic and prevent loops. The following dual-homing example describes the control and forwarding processes involved in split horizon. On the network shown in Figure 5, CE2 is dual-homed to PE2 and PE3, and CE3 is single-homed to PE3. CE2 sends a copy of BUM traffic to PE2.
The process of creating control entries is as follows (a sketch of the address construction follows the steps):
1. PE2 and PE3, to which CE2 is dual-homed, each allocate an Arg.FE2 for the AC interface bound to the shared ES, record the mapping between the Arg.FE2 and the AC interface in a local flood prune table, and advertise the Arg.FE2 to each other through Ethernet A-D per ES routes.
2. After receiving such a route, each PE associates the peer's Arg.FE2 with the peer's End.DT2M SID (advertised in the inclusive multicast Ethernet tag route). For BUM traffic received from the shared ES, the destination address toward the peer is the End.DT2M SID combined with the Arg.FE2.
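Assuming the argument occupies the last eight bits of the SID (the Arg.FE2 length given in the text), forming the combined destination address might look like the sketch below. The SID-with-zeroed-argument value and the Arg.FE2 value 0x01 are assumptions for illustration.

```python
import ipaddress

def combine_sid_and_arg(sid: str, arg_fe2: int, arg_bits: int = 8) -> str:
    """Write Arg.FE2 into the argument (the last arg_bits) of an End.DT2M SID."""
    mask = (1 << arg_bits) - 1
    base = int(ipaddress.IPv6Address(sid)) & ~mask  # clear the argument bits
    return str(ipaddress.IPv6Address(base | (arg_fe2 & mask)))

# Assumed values: PE3's SID with a zeroed argument part and Arg.FE2 = 0x01.
print(combine_sid_and_arg("A3:1::B301:0", 0x01))  # a3:1::b301:1
```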
In the data forwarding phase (a pruning sketch follows the steps):
1. After receiving the BUM traffic from CE2, PE2 replicates it to the leaf nodes in the MDT. For the copy sent to PE3, which connects to the same ES, PE2 sets the destination address to PE3's End.DT2M SID combined with the Arg.FE2.
2. After receiving the BUM traffic, PE3 searches the flood prune table based on the last eight bits (that is, the length of the configured Arg.FE2) of the destination address to determine the pruning interface. PE3 then replicates the BUM traffic to all AC interfaces except the AC interface bound to the Arg.FE2, preventing BUM traffic loops.
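The pruning lookup on PE3 can be sketched as follows. The eight-bit argument length comes from the text; the Arg.FE2 value 0x01, the combined destination address, and the AC names are assumptions carried over from the sketch above.

```python
import ipaddress

ARG_BITS = 8                            # configured Arg.FE2 length
FLOOD_PRUNE_PE3 = {0x01: "AC to CE2"}   # Arg.FE2 -> AC bound to the shared ES
ALL_ACS_PE3 = ["AC to CE2", "AC to CE3"]

def prune_and_flood(ipv6_dst: str) -> list:
    """Extract the last ARG_BITS of the DA, find the pruning interface,
    and flood to every other AC in the BD."""
    arg = int(ipaddress.IPv6Address(ipv6_dst)) & ((1 << ARG_BITS) - 1)
    pruned = FLOOD_PRUNE_PE3.get(arg)
    return [ac for ac in ALL_ACS_PE3 if ac != pruned]

# Assumed DA: PE3's End.DT2M SID with Arg.FE2 = 0x01 in its last eight bits.
print(prune_and_flood("A3:1::B301:1"))  # ['AC to CE3'] -- the AC to CE2 is pruned
```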