The multicast distribution tree (MDT) cannot be correctly established in a multicast VPN.
This fault is commonly caused by one of the following: the multicast loopback interface is not configured, unicast routes are unreachable, multicast routing is not enabled, the Share-Group addresses configured for the same VPN instance differ across PEs, the MTI address is incorrect, the PIM neighbor relationship is not established, BSR or RP information is inconsistent, a multicast boundary is configured on an RPF interface, or no IGMP group information exists on user-side interfaces. Check these causes in sequence using the following procedure.
Step 1: Run the display current-configuration command on the PE and P devices to check whether a multicast loopback interface is configured. A multicast loopback interface has been configured if the following information is displayed:
interface Eth-Trunk1
 service type multicast-tunnel
If no multicast loopback interface is configured, run the service type multicast-tunnel command in the Eth-Trunk view, as shown in the sketch after this step.
If a multicast loopback interface has been configured but the fault persists, go to step 2.
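For reference, a minimal sketch of the step 1 fix, assuming Eth-Trunk 1 is available to serve as the multicast loopback interface (the trunk number and device prompts are examples only):
[PE1] interface eth-trunk 1
[PE1-Eth-Trunk1] service type multicast-tunnel    //dedicate this Eth-Trunk as the multicast loopback interface
[PE1-Eth-Trunk1] quit
[PE1] display current-configuration | include multicast-tunnel    //verify the configuration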
Step 2: Run the display ip routing-table command on the PE and P devices to check reachability of unicast routes between them on the public network.
Run the display ip routing-table vpn-instance vpn-instance-name command on PE devices to check reachability of unicast routes from the VPN instance to devices at VPN sites.
Run the ping command on each CE to check reachability of VPN routes (example checks are shown after this step).
If unicast routes are unreachable, rectify the routing fault.
If unicast routes are reachable both on the public and private networks but the fault persists, go to step 3.
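A sketch of the step 2 reachability checks, assuming a VPN instance named vpna, a remote PE public address of 10.2.2.2, and a remote CE address of 192.168.2.1 (all values are examples):
[PE1] display ip routing-table 10.2.2.2    //public-network route to the remote PE
[PE1] display ip routing-table vpn-instance vpna 192.168.2.1    //VPN route toward the remote site
<CE1> ping 192.168.2.1    //VPN route check from the local CE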
Step 3: Run the display current-configuration command on each PE to check whether multicast routing is enabled.
To implement multicast VPN, PE devices must support multi-instance multicast. The configuration requirements are as follows:
Run the multicast routing-enable command in the system view to enable multicast routing on the public network.
Run the multicast routing-enable command in the VPN instance view to enable multicast routing in the VPN instance (see the sketch after this step).
If "multicast routing-enable" is not displayed in the command output of a PE, multicast routing is disabled in the VPN instance. Run the multicast routing-enable command in this VPN instance view.
If "multicast routing-enable" is displayed in the command output of each PE but the fault persists, go to step 4.
Step 4: Run the display multicast-domain vpn-instance vpn-instance-name share-group command on the PE devices to check whether the same VPN instance is configured with the same Share-Group address on all PEs.
If the Share-Group address is not configured for a VPN instance or is configured differently on the PE devices, run the multicast-domain share-group group-address binding mtunnel number command in the VPN instance view to configure the Share-Group address and bind it to a specified MTI (see the sketch after this step).
If the VPN instance is configured with the same Share-Group address on PE devices but the fault persists, go to step 5.
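A sketch of the step 4 configuration and check, assuming the VPN instance vpna, the Share-Group address 239.1.1.1, and MTunnel 0 (all examples); the same Share-Group address must be configured for this VPN instance on every PE:
[PE1] ip vpn-instance vpna
[PE1-vpn-instance-vpna] multicast-domain share-group 239.1.1.1 binding mtunnel 0
[PE1-vpn-instance-vpna] quit
[PE1] display multicast-domain vpn-instance vpna share-group    //all PEs should display the same address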
Step 5: Run the display current-configuration command to check whether the MTI address is the same as the IP address of the local interface used to establish the IBGP peer relationship.
If the MTI is not configured, or the MTI address is different from the IP address of the interface used to establish the IBGP peer relationship, multicast packets can reach the VPN instance from the MTI but cannot pass the RPF check. In this case, run the ip address ip-address { mask | mask-length } command in the MTI view to change the MTI address, or run the multicast-domain source-interface command in the VPN IPv4 address family view to configure the MTI to obtain an IP address dynamically (see the sketch after this step).
If the MTI is correctly configured on each PE but the fault persists, go to step 6.
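A sketch of the step 5 fix, assuming the IBGP peer relationship is established using LoopBack0 with address 10.1.1.1/32 and the MTI is MTunnel 0 (the interface numbers, address, and prompts are examples):
[PE1] interface mtunnel 0
[PE1-MTunnel0] ip address 10.1.1.1 255.255.255.255    //set the MTI address to the IBGP source address
Alternatively, in the VPN IPv4 address family view, configure the MTI to obtain its address from the IBGP source interface dynamically:
[PE1-vpn-instance-vpna-af-ipv4] multicast-domain source-interface loopback 0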
Step 6: Run the display pim [ vpn-instance vpn-instance-name ] neighbor command to check whether PIM neighbor relationships have been established between interfaces. Check whether the same PIM mode is enabled on the MTIs and whether the PIM neighbor relationship is established. If a neighbor address is displayed in the command output, the PIM neighbor relationship has been established.
Check that interfaces are Up.
Run the display interface interface-type interface-number command. If all interfaces, including the MTIs, are Up, enable PIM-SM or PIM-DM in the interface view (see the sketch after this step).
Run the display current-configuration command to check the PIM mode enabled on interfaces in the VPN instance. If different PIM modes are enabled on the interfaces, enter the corresponding interface view and configure the correct PIM mode.
Run the display current-configuration command on the public network to check the PIM mode on interfaces, including the public network interfaces on PE devices. If different PIM modes are enabled on the interfaces, enter the corresponding interface view and configure the correct PIM mode.
The PIM modes running on the public and private networks can be different.
Check that the interfaces that have established BGP peer relationships can send and receive Hello messages properly.
Run the display pim vpn-instance vpn-instance-name control-message counters interface interface-type interface-number message-type hello command. If the number of sent and received Hello messages increases in a period longer than a Hello interval, the interfaces are forwarding Hello messages properly.
If the PIM neighbor relationship has been established, go to step 7 (PIM-SM) or step 8 (PIM-DM).
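A sketch of the step 6 checks, assuming PIM-SM, the VPN instance vpna, and VLANIF 10 as a VPN-side interface (the mode, instance name, and interface are examples):
[PE1] display pim vpn-instance vpna neighbor    //a listed neighbor address indicates that the relationship is up
[PE1] interface vlanif 10
[PE1-Vlanif10] pim sm    //enable the same PIM mode on both ends of the link if it is missing
[PE1-Vlanif10] quit
[PE1] display pim vpn-instance vpna control-message counters interface vlanif 10 message-type hello    //sent and received counters should keep increasing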
Step 7 (PIM-SM only): Run the display pim vpn-instance vpn-instance-name bsr-info command on PE devices to check BSR information in the VPN instance. If the same Elected BSR Address value is displayed on the PE devices, the BSR configuration is correct.
Run the display pim vpn-instance vpn-instance-name rp-info group-address command to check RP information for a group. If the same BSR RP Address value is displayed on the PE devices, the RP configuration is correct.
Run the display pim bsr-info command on private network devices to check BSR information. If the same Elected BSR Address value is displayed on these devices, the BSR configuration is correct.
Run the display pim rp-info command on CE devices to check RP information. If the same BSR RP Address value is displayed on the CE devices, the RP configuration is correct.
Run the display pim bsr-info command on the PE and P devices to check BSR information on the public network. If the same Elected BSR Address value is displayed on these devices, the BSR configuration is correct.
Run the display pim rp-info command on the PE and P devices to check RP information. If the same RP Address value is displayed on these devices, the RP configuration is correct.
If the BSRs and RPs are correctly configured, go to step 8; a sketch of these check commands follows this step.
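A sketch of the step 7 checks, assuming the VPN instance vpna and the VPN multicast group address 225.1.1.1 (both examples); compare the output across the devices in each domain:
[PE1] display pim vpn-instance vpna bsr-info    //VPN instance: same Elected BSR Address on all PEs
[PE1] display pim vpn-instance vpna rp-info 225.1.1.1    //VPN instance: same BSR RP Address on all PEs
<CE1> display pim bsr-info    //private network: same Elected BSR Address on the private network devices
<CE1> display pim rp-info    //private network: same BSR RP Address on the CEs
[PE1] display pim bsr-info    //public network: same Elected BSR Address on the PEs and Ps
[PE1] display pim rp-info    //public network: same RP Address on the PEs and Ps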
Step 8: Check whether a multicast boundary that blocks the relevant group address (group-address) is configured on any interface along the forwarding path.
On the public network, if a Switch-Group is used, set group-address to the Switch-Group address; if a Share-Group is used, set group-address to the Share-Group address.
On the private network, set group-address to the multicast group address.
If the interface with the multicast boundary configured is the upstream RPF interface, run the undo multicast boundary { group-address { mask | mask-length } | all } command to delete or change the configuration (see the sketch after this step).
If no multicast boundary is configured, go to step 9.
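A sketch of the step 8 fix, assuming the boundary was configured on VLANIF 10 for the group range 225.1.1.0/24 (example values):
[PE1] interface vlanif 10
[PE1-Vlanif10] undo multicast boundary 225.1.1.0 24    //or run: undo multicast boundary all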
Step 9: Run the display igmp group interface interface-type interface-number command to check whether IGMP group information exists on user-side interfaces (an example check follows this step).
If no IGMP group information is displayed, troubleshoot the fault according to IGMP Entries Cannot Be Created.
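A sketch of the step 9 check on the device whose interface connects to the multicast receivers, assuming the user-side interface is VLANIF 20 (an example):
<CE1> display igmp group interface vlanif 20    //IGMP entries should be listed for the groups that receivers have joined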