Feature Requirements |
Series |
Models |
---|---|---|
When configuring a segment list, do not use a binding SID as the last-hop SID. Otherwise, traffic cannot be forwarded. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In a scenario where an EVPN L3VPN over SRv6 interworks with a BGP L3VPN over MPLS, the DiffServ mode configured in the L3VPN instance on the stitching node is configured separately from the one configured in the L3VPN instance bound to the AC interface. For the mode configured on the stitching node, when traffic leaves the SRv6 tunnel and enters the MPLS tunnel: if the DiffServ mode is set to pipe, the EXP field in the MPLS label is encapsulated based on the priority in the original IPv6 packet; if the DiffServ mode is set to short-pipe, the EXP field in the MPLS label is encapsulated based on the priority in the inner packet. Conversely, when traffic leaves the MPLS tunnel and enters the SRv6 tunnel: if the DiffServ mode is set to pipe, the Traffic Class field in the IPv6 packet is encapsulated based on the EXP priority in the MPLS label; if the DiffServ mode is set to short-pipe, the Traffic Class field in the IPv6 packet is encapsulated based on the priority in the inner packet. A sketch of this mapping follows this entry. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
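A minimal Python sketch of the priority-selection rule in the entry above, assuming a simplified packet model: it only shows which priority value each DiffServ mode copies from, not the PHB/priority mapping tables a real device applies. Function and parameter names are illustrative.

```python
# Illustrative model of the DiffServ-mode-dependent priority selection on the
# stitching node (EVPN L3VPN over SRv6 interworking with BGP L3VPN over MPLS).
# Values are passed through unchanged for simplicity; a real device maps them
# through its PHB/priority tables.

def exp_for_mpls_encap(diffserv_mode: str, original_ipv6_priority: int, inner_priority: int) -> int:
    """Priority source for the MPLS EXP field when traffic leaves the SRv6 tunnel."""
    if diffserv_mode == "pipe":
        return original_ipv6_priority   # taken from the original IPv6 packet
    if diffserv_mode == "short-pipe":
        return inner_priority           # taken from the inner packet
    raise ValueError("mode not covered by this sketch")

def tc_for_srv6_encap(diffserv_mode: str, mpls_exp_priority: int, inner_priority: int) -> int:
    """Priority source for the IPv6 Traffic Class field when traffic leaves the MPLS tunnel."""
    if diffserv_mode == "pipe":
        return mpls_exp_priority        # taken from the EXP field in the MPLS label
    if diffserv_mode == "short-pipe":
        return inner_priority           # taken from the inner packet
    raise ValueError("mode not covered by this sketch")

if __name__ == "__main__":
    print(exp_for_mpls_encap("pipe", original_ipv6_priority=5, inner_priority=3))   # 5
    print(tc_for_srv6_encap("short-pipe", mpls_exp_priority=5, inner_priority=3))   # 3
```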
When L3VPN traffic is iterated to an SRv6 tunnel functioning as a public network tunnel and BFD for peer IP protection is supported, set the peer IP address to the network segment address of a VPN SID locator. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
When L3VPN traffic is iterated to an SRv6 tunnel functioning as a public network tunnel, the outbound interface cannot be set to a BDIF interface or an MPLS tunnel interface. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
When L3VPN traffic is iterated to an SRv6 tunnel functioning as a public network tunnel, the outbound interface cannot be set to a VLANIF interface, BDIF interface, or MPLS tunnel interface. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
When L3VPN traffic is iterated to an SRv6 tunnel functioning as a public network tunnel, packet information is sampled, but forwarding information, such as next-hop and outbound interface information, cannot be sampled. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
The SRv6 egress does not support deep load balancing. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
SRv6 allows a packet to carry multiple SRHs but parses only the first SRH. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
The SRv6 SRH does not support fragmentation within a tunnel. A maximum of 10 SIDs can be added to a packet header at a time. Properly plan service configurations, and set a small MTU on the inbound interface of a tunnel to prevent oversized packets from traveling through the tunnel. The sketch after this entry shows the resulting encapsulation overhead. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
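The following sketch turns the entry above into planning arithmetic: how much an SRv6 encapsulation grows a packet and whether the result still fits the tunnel's inbound-interface MTU. The 40-byte IPv6 header and the 8-byte-plus-16-bytes-per-SID SRH sizes follow the standard encodings (RFC 8200/RFC 8754); the 10-SID limit is the one stated in the entry. This is a sizing aid, not device code.

```python
IPV6_HEADER = 40          # bytes, fixed IPv6 header (RFC 8200)
SRH_FIXED = 8             # bytes, fixed part of the SRH (RFC 8754)
SID_LEN = 16              # bytes per 128-bit SID carried in the SRH
MAX_SIDS_PER_ENCAP = 10   # limit stated in this entry

def srv6_encap_length(payload_len: int, num_sids: int) -> int:
    """Length of the packet after the outer IPv6 header and SRH are added."""
    if num_sids > MAX_SIDS_PER_ENCAP:
        raise ValueError("at most 10 SIDs can be pushed in one encapsulation")
    return payload_len + IPV6_HEADER + SRH_FIXED + num_sids * SID_LEN

def fits_tunnel(payload_len: int, num_sids: int, inbound_mtu: int) -> bool:
    """The SRH is never fragmented inside the tunnel, so the whole packet must fit."""
    return srv6_encap_length(payload_len, num_sids) <= inbound_mtu

if __name__ == "__main__":
    # A 1400-byte packet with a 5-SID segment list needs 1400 + 40 + 8 + 80 = 1528 bytes.
    print(srv6_encap_length(1400, 5))   # 1528
    print(fits_tunnel(1400, 5, 1500))   # False -> lower the inbound MTU or the payload size
```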
SRv6 next hops do not support dynamic load balancing adjustment. Dynamic load balancing does not take effect on the next hops of the outbound interfaces on the SRv6 source and transit nodes. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
When strict URPF is configured, SRv6 packets may be discarded due to a check failure. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
Only an SRH whose Next Header field indicates an IPv6 header is supported. The SRH must be the first IPv6 extension header and cannot be placed after other extension headers. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In a scenario where SRv6 TI-LFA is deployed between the ingress and a transit node, when traffic is switched to the backup path due to a fault of the primary path, the SRH information about the primary path and the SRH of the backup path are encapsulated into SRv6 packets. In this case, the packet length may exceed the interface MTU. However, the packets are sent to the CPU without being fragmented. Packet loss occurs during the primary/backup switchover. Plan the SRv6 MTU properly. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In an SRv6 egress protection scenario, when traffic is switched to the backup path due to a fault of the primary path, the SRH information about the primary path and the SRH of the backup path are encapsulated into SRv6 packets. In this case, the packet length may exceed the interface MTU. However, the packets are sent to the CPU without being fragmented. Packet loss occurs during the primary/backup switchover. Plan the SRv6 MTU properly. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
Nodes that use SRv6 binding SIDs for forwarding do not support packet fragmentation. Plan the SRv6 MTU properly. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In an SRv6 BE/SRv6 TE Policy scenario with TI-LFA FRR configured, if the interface MTU of the standby link is smaller than the SRv6 path MTU and the packet length is greater than the interface MTU, the packets fail to be forwarded. Properly plan the interface MTU and SRv6 path MTU. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
If a packet is encapsulated with an SRH on an SRv6 ingress or transit node, the packet length may exceed the interface IPv6 MTU. In this case, if the first SID in a segment list to be configured on a local node is the local End SID or binding SID, the local node directly sends the packet to its CPU without fragmenting the packet, causing a packet forwarding failure. Properly plan the interface IPv6 MTU. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
After direct next hop is configured, if the explicit path label has only one outbound interface, SRv6 does not support TE FRR. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
End.DX4 and End.DX6 SIDs do not support VLANIF, low-speed, or VE interfaces. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
For End.DX4 and End.DX6 SIDs, if the outbound interface of services is a VBDIF interface, the services cannot access any VXLAN or VPLS network. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
The following four levels of load balancing may be performed for services over SRv6 TE Policies: 1. VPN ECMP 2. Segment list ECMP 3. End SID load balancing 4. Load balancing among trunk member interfaces when the outbound interface is a trunk interface. A device supports a maximum of three levels of load balancing; if four levels of load balancing exist, load balancing among trunk member interfaces does not take effect (see the sketch after this entry). |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
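The sketch referenced in the entry above illustrates the three-level limit: when all four candidate levels are present, the trunk-member level is simply not applied. The level names come from the entry; the hash and member names are hypothetical.

```python
# Hypothetical illustration of the "at most three load-balancing levels" rule for
# services over SRv6 TE Policies. Only the level ordering and the skipped fourth
# level come from the entry; the selection logic and names are invented.

def pick(flow_hash: int, members: list) -> str:
    return members[flow_hash % len(members)]

def forward(flow_hash: int, vpn_paths, segment_lists, end_sid_next_hops, trunk_members):
    levels = [
        ("VPN ECMP", vpn_paths),
        ("segment list ECMP", segment_lists),
        ("End SID load balancing", end_sid_next_hops),
        ("trunk member load balancing", trunk_members),
    ]
    # Keep only the levels that actually have more than one member to choose from.
    levels = [(name, m) for name, m in levels if m and len(m) > 1]
    applied, skipped = levels[:3], levels[3:]   # a device applies at most three levels
    choices = {name: pick(flow_hash, m) for name, m in applied}
    for name, _ in skipped:
        choices[name] = "not load-balanced (fourth level does not take effect)"
    return choices

if __name__ == "__main__":
    print(forward(0x5BD1,
                  vpn_paths=["pe1", "pe2"],
                  segment_lists=["sl-1", "sl-2"],
                  end_sid_next_hops=["nh-1", "nh-2"],
                  trunk_members=["eth-trunk1.member1", "eth-trunk1.member2"]))
```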
In a telco cloud scenario where BGP routes recurse to SRv6 remote cross routes, multiple VPN BGP peers are established for a PE, each peer address recurses to multiple SIDs for load balancing, and SIDs recurse to routes for trunk interface load balancing, load balancing may fail among trunk member interfaces. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
The ingress does not detect the reachability of the outermost SID. If multiple segment lists work in load balancing mode and the outermost SID of a segment list is unreachable, traffic forwarded through this segment list cannot be quickly switched to other segment lists. Traffic can be switched only when the controller detects a segment list fault, re-computes a path, and deletes the faulty segment list. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
The ingress does not detect the reachability of the outermost SID. If an SRv6 TE Policy consists of only one segment list and the outermost SID of the segment list is unreachable, the SRv6 TE Policy does not go down, and traffic cannot be quickly switched to the VPN FRR backup or best-effort path. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In a binding SID stitching scenario, the outermost SID of SRv6 TE Policy 1 is a binding SID that identifies SRv6 TE Policy 2, and the outermost SID of SRv6 TE Policy 2 is also a binding SID. In this case, traffic is always looped back on the ingress until bandwidth resources are exhausted. Ensure that the configuration is correct and binding SIDs are not nested infinitely. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
If SRv6 TE Policy traffic statistics collection is enabled and a board whose traffic statistics are collected fails, the faulty board cannot report historical statistics. As a result, SRv6 TE Policy traffic statistics decrease sharply. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In inter-board SRv6 VPN FRR switchback scenarios, the local device cannot guarantee the entry delivery sequence for upstream and downstream paths, which may cause inter-board packet loss. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In a scenario where public IPv4 traffic is redirected to an SRv6 TE Policy, if the redirection configuration or SRv6 TE Policy is deleted, or if the SRv6 TE Policy goes down, packet loss occurs when traffic is switched from the SRv6 TE Policy to an IP tunnel. Perform any of the following operations to prevent this problem: 1. Delete the policy bound to the involved interface. 2. Delete the classifier behavior command configuration in the traffic policy view. 3. Delete the configured traffic matching rule. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In the telecom cloud scenario, after the egress of an SR tunnel is configured to assign SIDs per next hop, the SID assigned to an indirectly connected next hop cannot be used to guide traffic forwarding by default. If the egress is connected to a peer end that is an indirectly connected next hop, configure a route-policy on the egress so that the route advertised to the peer end carries the gateway IP address attribute. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In an SRv6 BE/SRv6 TE Policy scenario, TI-LFA FRR/TE FRR and poison reverse cannot both be configured. Otherwise, traffic forwarding may be interrupted. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In an SRv6 SFC scenario, only SFFs and SFs can be directly connected. In non-direct connection scenarios, traffic may fail to be forwarded to SFs. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In an SRv6 SFC scenario where static proxy is configured, cache information indicates the end-to-end path that needs to be re-encapsulated after the original packet is returned from an SF to an SFF. Currently, the path supports a maximum of 11 SIDs, including one mandatory service SID and a maximum of 10 proxy SIDs. Binding SIDs are not supported. IPv6 address reachability cannot be checked and needs to be guaranteed through correct configuration. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In an SRv6 SFC scenario where static proxy is configured, interfaces must be exclusively used for SFC services. Packets received through an inbound interface enter the SFC regardless of whether the packets are returned from SFs. Interfaces used for SFC services do not support routing protocols. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In an SRv6 SFC scenario where static proxy is configured in Layer 3 forwarding mode, the inbound and outbound interfaces can be only VBDIF interfaces. If an EVC sub-interface added to a BD has services irrelevant to the SFC, the packets of the services may be broadcast to SFs. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In an SRv6 SFC scenario where static proxy is configured, only proxy SIDs, not their backup SIDs, support dual-homing protection. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In an SRv6 SFC scenario where static proxy and bypass protection are configured, SF1 and SF2 support only unidirectional protection rather than mutual protection. If mutual protection is configured, a loop may occur. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In SRv6 SFC Layer 3 mode, if ARP entries are dynamically learned, a few packets are lost during traffic switchback to the link between an SFF and an SF. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In SRv6 SFC Layer 2 mode, BFD cannot detect the link between an SFF and an SF. As a result, traffic cannot be quickly switched in the case of a fault. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In SRv6 SFC Layer 3 mode, BFD is used to detect the link between an SFF and an SF to implement fast traffic switching. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
If SRv6 TE FRR is enabled in a scenario where the link between an SF and an SFF fails, service traffic may bypass the SFF. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In an SRv6 SFC scenario where static proxy is configured, after receiving a packet returned from an SF, the SFF re-calculates the values of the Hop Limit, Traffic Class, and Flow Label fields in the IPv6 header to be re-encapsulated into the packet. In this case, the values may be different from those calculated on the ingress. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In an SRv6 SFC scenario where static proxy is configured, ping and trace operations cannot be performed between SFFs and SFs. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In SRv6 SFC Layer 2 mode, packets sent from an SFF to an SF cannot be fragmented. Plan the MTU properly. If fragmentation is required, fragment packets on the SC where the packets enter the SFC. Packet fragmentation cannot be performed on transit nodes. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
The following restrictions apply to the consistency of the hash results used to load-balance upstream and downstream traffic on an SRv6 SFC: 1. If different hardware is used, hash results may be inconsistent. 2. The random algorithm is not supported because it uses a random polynomial; if the random algorithm is used, hash results may be inconsistent. 3. Upstream and downstream traffic must have the same hash factors (5-tuple or 3-tuple) after the factors are sorted based on certain rules; otherwise, hash results may be inconsistent (see the sketch after this entry). 4. If a Huawei device is connected to a non-Huawei device, different load balancing algorithms may be used, causing hash results to be inconsistent. 5. If IP addresses are not used as hash factors, different hash factors are configured on the SF and SFF, or only the source or destination IP address is used as the hash factor, hash results may be inconsistent. 6. In fault protection scenarios, hash results may be inconsistent. 7. If the number of links used for load balancing of upstream traffic is different from that used for load balancing of downstream traffic, hash results may be inconsistent. 8. The scenario where value-added services are deployed on a centralized board is not supported. 9. Fragmented and non-fragmented packets of the same flow are transmitted through different paths. For fragmented packets, the first fragment and subsequent fragments are transmitted through the same path. 10. Only IPv4 packets can be forwarded. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
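For item 3 of the entry above, the sketch below shows why sorting the hash factors matters: hashing a direction-independent (sorted) 5-tuple gives the same result for both directions of a flow, while hashing the raw tuple generally does not. The hash function here is an arbitrary stand-in, not the device's algorithm.

```python
import hashlib

def sorted_5_tuple(src_ip, dst_ip, proto, src_port, dst_port):
    """Order the address/port pairs so both directions of a flow produce the same key."""
    a = (src_ip, src_port)
    b = (dst_ip, dst_port)
    lo, hi = sorted([a, b])
    return (lo[0], hi[0], proto, lo[1], hi[1])

def flow_hash(key_tuple) -> int:
    # Illustrative stand-in for a hardware hash; not the device polynomial.
    digest = hashlib.md5(repr(key_tuple).encode()).digest()
    return int.from_bytes(digest[:4], "big")

if __name__ == "__main__":
    up   = sorted_5_tuple("10.1.1.1", "10.2.2.2", 6, 12345, 443)
    down = sorted_5_tuple("10.2.2.2", "10.1.1.1", 6, 443, 12345)
    print(flow_hash(up) == flow_hash(down))   # True: both directions pick the same member
```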
In a scenario where traffic is load-balanced among multiple segment lists of an SRv6 TE Policy, if more than seven consecutive segment lists fail at the same time, fast traffic switching cannot be performed. Suggestions: 1. After detecting the fault, SRv6 temporarily isolates the faulty segment lists by default. 2. Deploy VPN FRR protection. If more than seven consecutive segment lists fail, VPN FRR switching can be triggered. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In a scenario where traffic is load-balanced among multiple segment lists of an SRv6 TE Policy, if SBFD for segment list is deployed, SBFD return packets are forwarded through IP routes, and the return packets of the segment lists share the same path. If this path fails, all SBFD for segment list sessions go down. This leads to a service switchover. It is recommended that VPN FRR protection be deployed so that VPN FRR switching can be triggered. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In an SRv6 TE Policy multi-level load balancing scenario (service load balancing + load balancing over SRv6 TE Policy group + load balancing over segment list + load balancing over segment list's first SID + load balancing over trunk), if more than three levels of load balancing are configured: If the first-level load balancing quantity and fourth-level load balancing quantity are both integers and the same hash factors are used, the fourth-level load balancing cannot be performed. This restriction also applies to second-level and fifth-level load balancing. Run the load-balance hash-arithmetic command to change the hash algorithm to the exclusive OR algorithm to add one level of load balancing. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
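The following sketch illustrates, in general terms, the correlation problem behind the multi-level load-balancing restriction in the preceding entry: if two levels reuse the same hash over member groups of equal size, every packet that reaches a given first-level member lands on a single member at the later level, so that level no longer spreads traffic; computing the later level with a different (for example exclusive-OR based) algorithm restores the spread. Member counts and both hash functions are illustrative assumptions, not the device's implementation.

```python
from collections import Counter
import zlib

N1 = N4 = 4   # equal member counts at the first and the fourth level

def hash_default(key: bytes) -> int:
    # Stand-in for the default hash algorithm (CRC-style).
    return zlib.crc32(key)

def hash_xor(key: bytes) -> int:
    # Stand-in for an exclusive-OR based algorithm: rotate and fold each byte.
    out = 0
    for b in key:
        out = ((out << 1) | (out >> 15)) & 0xFFFF
        out ^= b
    return out

def spread(level4_hash):
    """How many distinct fourth-level members the flows of each first-level member use."""
    per_member = {}
    for i in range(10000):
        key = f"10.1.{i // 256}.{i % 256}:40000>443".encode()
        l1 = hash_default(key) % N1
        l4 = level4_hash(key) % N4
        per_member.setdefault(l1, Counter())[l4] += 1
    return {m: len(c) for m, c in sorted(per_member.items())}

if __name__ == "__main__":
    print(spread(hash_default))  # {0: 1, 1: 1, 2: 1, 3: 1} - the fourth level collapses
    print(spread(hash_xor))      # each first-level member now spreads over all four members
```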
In an L3VPN/EVPN L3VPN load balancing over SRv6 TE Policy group scenario where SBFD for segment list is deployed, if the egress PE fails, traffic cannot be quickly switched to other paths. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In network slicing scenarios, the slice IDs on a main interface must be unique. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In network slicing scenarios, the slice IDs in the same FlexE group must be unique. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In network slicing scenarios, a slicing interface and the corresponding base interface cannot belong to different physical main interfaces, Eth-Trunk main interfaces, or FlexE groups. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In network slicing scenarios, a sub-interface can be bound to a slice ID only if the sub-interface is a dot1q sub-interface not assigned with any VLAN segment. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In network slicing scenarios, only one slice ID of the networking type can be configured for a slicing interface. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In network slicing scenarios, if an SRv6 TE Policy uses loose explicit paths that work in load-balancing mode, the same slice must be configured on each outbound interface. Configuring slices only for some links is not allowed. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In network slicing scenarios, the hash combination of default slices on base interfaces is used for load balancing. The hash algorithm is unaware of the slice bandwidth and may result in uneven hash distributions. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In a network slicing scenario where a base interface and a slicing interface belong to different FlexE main interfaces, if the base interface is faulty but the slicing interface is not, route convergence is required, and the fault recovery time may exceed 50 ms; if the slicing interface is faulty but the base interface is not, services are switched to the base interface after the slicing interface algorithm table is deleted by the software, and the fault recovery time may exceed 50 ms. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In network slicing scenarios, if the sum of the rate limits for service SQs (including QoS profile SQs and flow SQs) and P2P SQs exceeds the interface bandwidth (or the rate configured using the "port-shaping" command), the bandwidth and delay of P2P SQs cannot be ensured due to interface backpressure (see the sketch after this entry). |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
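The restriction in the entry above is essentially an admission check: the committed rates of all SQs sharing an interface must not exceed what the interface (or its port-shaping rate) can carry, otherwise backpressure breaks the P2P SQ guarantees. The sketch below is just that arithmetic, with illustrative names and units.

```python
def p2p_guarantee_ok(service_sq_cir_mbps, p2p_sq_cir_mbps, interface_bw_mbps, port_shaping_mbps=None):
    """Return True if the configured SQ rate limits fit within the usable interface rate."""
    usable = min(interface_bw_mbps, port_shaping_mbps) if port_shaping_mbps else interface_bw_mbps
    return sum(service_sq_cir_mbps) + sum(p2p_sq_cir_mbps) <= usable

if __name__ == "__main__":
    # 3 Gbit/s of service SQs + 8 Gbit/s of P2P SQs on a 10 Gbit/s port -> over-subscribed.
    print(p2p_guarantee_ok([1000, 2000], [8000], 10000))   # False: P2P guarantees at risk
```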
If traffic is steered to an SRv6 TE Policy based on the service class, behavior aggregate (BA) traffic and ACL complex traffic can be mapped to service class values, but CAR values cannot be mapped to service class values. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In SRv6 BE network slicing scenarios where VPN FRR or load balancing is configured, if the colors carried by routes are different, the color of the optimal route is selected and packets are sent to the corresponding slice based on the color. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In SRv6 over GRE scenarios, if the length of an encapsulated packet exceeds the SRv6 or GRE tunnel MTU, the packet will be discarded. You are advised to properly plan the path MTU to prevent fragmentation. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In multi-VS scenarios, if the Encap encapsulation mode is adopted for an SRv6 TE Policy, the source address field in the IPv6 header is encapsulated based on the configuration of the admin VS. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In a scenario where public network service traffic is redirected to an SRv6 TE Policy through multi-field classification or FlowSpec, if no SID is specified and the SRv6 TE Policy does not have the USD capability, the traffic is directly forwarded through IP instead of entering the SRv6 TE Policy. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
For DSCP-based traffic steering in L3VPN over SRv6 TE flow group scenarios, if native IP is configured in the SRv6 TE flow group, Flex-Algos are configured for the associated device, and the locator to which the specified service VPN SID belongs is added to any of the Flex-Algos, traffic may be interrupted. Avoid the preceding configuration. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In an SRv6 TE Policy group shortcut scenario where a policy group and a physical interface work in load balancing mode, if the physical interface has an FRR backup link, only the primary link instead of the backup link is used in the forwarding plane. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In P2P network slicing scenarios, lossless switching cannot be ensured when an interface is switched between channelized and non-channelized modes; traffic jitter may occur during the conversion. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
IS-IS IPv6 segment routing: End SIDs must be globally unique. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
If the Layer 3 link between two nodes passes through a Layer 2 switch and is configured with a static End.X SID and an SRv6 TE Policy passes through the interface bound to the SID, the local node cannot detect the fault of the link between the switch and the peer node, causing continuous packet loss in the SRv6 TE Policy. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In an SRv6 SFC scenario where static proxy is configured: 1. If an SF is dual-homed to SFFs, the peer SID configured on SFF1 must be an End SID configured on SFF2. An SFF only performs IPv6 address validity check rather than SID type check. Therefore, the SID type must be correctly configured. In addition, only global configuration is available in current scenarios, and there are requirements for controller adaptation. 2. If an SF is dual-homed to SFFs, the backup SID is the protection SID of a proxy SID and cannot be configured with any other backup SID. SFF1 and SFF2 that protect each other must be configured with the same backup SID. 3. If bypass protection is configured, SF1 and SF2 support only unidirectional protection rather than mutual protection. If mutual protection is configured, a loop may occur. 4. If SFs implement Layer 2 forwarding, the inbound and outbound interfaces must be EVC sub-interfaces. The outbound interface is an SFF interface connected to an SF, and the inbound interface is an SF interface connected to an SFF. 5. If SFs implement Layer 2 forwarding, EVC sub-interfaces do not support the encapsulation mode configured using the encapsulation default command. The encapsulation mode configured for the SFC must match that configured for the sub-interface used for SFC services. Otherwise, traffic interruption occurs. In addition, in this scenario, EVC sub-interfaces do not support flow rewrite actions. 6. Ping and trace operations cannot be performed between SFFs and SFs. If relevant configurations are not correctly planned, traffic may be interrupted. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
1. The fault detection function does not support path connectivity check. Instead, it can only be used to check whether specified SIDs exist. 2. The fault detection function cannot be implemented across ASs, processes, areas, or levels. 3. If the SID stack contains SIDs that do not support flooding, such as binding and BGP EPE SIDs, the SIDs will fail to be checked. 4. SRv6 must be enabled in the IS-IS process on the ingress for the fault detection function to take effect. Otherwise, topology information cannot be collected, causing verification failures. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In the source version, VPN traffic that enters an SRv6 BE path is limited by the interface MTU. In the target version, VPN traffic that enters an SRv6 BE tunnel is limited by the smaller of the interface MTU and the path MTU configured in the SRv6 view. If the interface MTU is set to be greater than 1500 bytes in the source version, VPN traffic that enters an SRv6 BE tunnel is limited to 1500 bytes after the upgrade because the default SRv6 path MTU is 1500 bytes in the target version. This ensures that the length of a packet encapsulated with the SRv6 BE IPv6 header does not exceed 1500 bytes. If the VPN traffic involves IPv4 packets and the length of a single IPv4 packet plus the SRv6 BE IPv6 header exceeds 1500 bytes, the IPv4 packet is fragmented first and then encapsulated with the SRv6 BE IPv6 header. If the VPN traffic involves IPv6 packets and the length of a single IPv6 packet plus the SRv6 BE IPv6 header exceeds 1500 bytes, the IPv6 packet is discarded and an ICMPv6 Packet Too Big message is returned. If the configured path MTU is decreased to a value less than the interface MTU, more packets will be fragmented in the case of IPv4 VPN packets, and more packets will be discarded in the case of IPv6 VPN packets. This decision logic is sketched after this entry. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
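The sketch referenced in the entry above restates the post-upgrade MTU decision logic as code so the planning impact is easier to see. The 40-byte outer IPv6 header and the default 1500-byte path MTU come from the entry; comparing the encapsulated length against the smaller of the two MTUs is a simplification of the entry's wording, and all names are illustrative.

```python
IPV6_ENCAP = 40            # outer IPv6 header added for SRv6 BE, in bytes
DEFAULT_PATH_MTU = 1500    # default SRv6 path MTU in the target version

def srv6_be_disposition(pkt_len: int, is_ipv4: bool, interface_mtu: int,
                        srv6_path_mtu: int = DEFAULT_PATH_MTU) -> str:
    """What happens to a VPN packet entering an SRv6 BE tunnel after the upgrade."""
    limit = min(interface_mtu, srv6_path_mtu)           # the smaller value applies
    if pkt_len + IPV6_ENCAP <= limit:
        return "encapsulated and forwarded"
    if is_ipv4:
        return "fragmented first, then encapsulated"    # IPv4 VPN traffic
    return "discarded; ICMPv6 Packet Too Big returned"  # IPv6 VPN traffic

if __name__ == "__main__":
    print(srv6_be_disposition(1400, is_ipv4=True,  interface_mtu=9000))  # encapsulated and forwarded
    print(srv6_be_disposition(1480, is_ipv4=True,  interface_mtu=9000))  # fragmented first, then encapsulated
    print(srv6_be_disposition(1480, is_ipv4=False, interface_mtu=9000))  # discarded; ICMPv6 Packet Too Big returned
```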
SRv6 BE supports only the following interface types: Layer 3 Ethernet main interfaces, Ethernet sub-interfaces, Layer 3 Eth-Trunk interfaces, Eth-Trunk sub-interfaces, IP-Trunk interfaces, and POS interfaces. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
When the segment routing IPv6 feature is deployed on a network, an IPv6 packet header consumes a lot of payload space. Therefore, properly plan MTUs and reserve some space for the SRH address stack. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
After the device is restarted, the SRv6 TE Policy status is not affected when the BFD session or neighbor is in Admin Down state. After the BFD session is renegotiated, if the BFD session goes Down, the SRv6 TE Policy goes Down. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
IS-IS TI-LFA FRR cross-level protection scenario: 1. Only cross-level protection within an IS-IS process is supported. Protection cannot be provided across processes or protocols. 2. This function can only be uniformly controlled through the "inter-level-protect level-1" command for IS-IS processes. 3. Only SRv6 scenarios are supported. 4. Level-2 protection paths can be provided for IS-IS Level-1 routes, but Level-1 protection paths cannot be provided for IS-IS Level-2 routes. 5. It is recommended to use cross-level protection only for open-loop access rings. If this function is forcibly used for closed-loop access rings, fast switching may fail. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
SR-MPLS Flex-Algo does not support multi-source TI-LFA or multi-source microloop avoidance. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
SRv6 Flex-Algo does not support inter-process route import: after routes are imported, they should be advertised as locator routes, but they are currently advertised as ordinary IPv6 prefixes. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
SRv6 Flex-Algo does not support inter-area route leaking: the standards define locator route leaking rather than prefix route leaking, but currently only prefix route leaking is supported. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
In a scenario where public network traffic enters an SRv6 TE Policy, if no public SID is available and SL[0] is compressed, a forwarding error occurs in TE FRR scenarios. Workaround: When compressed locators and TE FRR are deployed, public SIDs must be configured for public network traffic to enter an SRv6 TE Policy. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
After a non-compression locator is upgraded to a compression locator, dynamic SIDs may change, affecting traffic forwarding. Traffic forwarding is restored after dynamic SIDs are regenerated. |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |
SRv6 path MTU configurations do not take effect in the following scenarios: EVPN VPWS over SRv6 BE and EVPN VPLS over SRv6 BE |
NetEngine 8000 F |
NetEngine 8000 F2A/NetEngine 8000 F1A |