4. MPLS Forwarding Policy

The MPLS forwarding policy provides an interface for adding user-defined label entries into the label FIB of the router and user-defined tunnel entries into the tunnel table.

The endpoint policy allows the user to forward unlabeled packets over a set of user-defined direct or indirect next hops with the option to push a label stack on each next hop. Routes are bound to an endpoint policy when their next hop matches the endpoint address of the policy.

The user defines an endpoint policy by configuring a set of next-hop groups, each consisting of a primary and a backup next hop, and binding an endpoint to it.

The label-binding policy provides the same capability for labeled packets. In this case, labeled packets matching the ILM of the policy binding label are forwarded over the set of next hops of the policy.

The user defines a label-binding policy by configuring a set of next-hop groups, each consisting of a primary and a backup next hop, and binding a label to it.

This feature is targeted for router programmability in SDN environments.

4.1. Introduction

This section provides information about configuring and operating an MPLS forwarding policy using the CLI.

There are two types of MPLS forwarding policy:

  1. endpoint policy
  2. label-binding policy

The endpoint policy allows the user to forward unlabeled packets over a set of user-defined direct or indirect next hops, with the option to push a label stack on each next hop. Routes are bound to an endpoint policy when their next hop matches the endpoint address of the policy.

The label-binding policy provides the same capability for labeled packets. In this case, labeled packets matching the ILM of the policy binding label are forwarded over the set of next hops of the policy.

The data model of a forwarding policy represents each pair of {primary next hop, backup next hop} as a group and models the ECMP set as the set of Next-Hop Groups (NHGs). Flows of prefixes can be switched on a per NHG basis from the primary next hop, when it fails, to the backup next hop without disturbing the flows forwarded over the other NHGs of the policy. The same can be performed when reverting back from a backup next hop to the restored primary next hop of the same NHG.

The following is the overall CLI hierarchy introduced in Release 16.0.

CLI Syntax:
config>router>mpls
    [no] forwarding-policies
        [no] forwarding-policy name
            binding-label label-number
            no binding-label
            endpoint ip-address
            no endpoint
            [no] ingress-statistics
                [no] shutdown
            metric metric
            no metric
            next-hop-group index [resolution-type {direct | indirect}]
            no next-hop-group index
                [no] backup-next-hop
                    next-hop ip-address
                    no next-hop
                    pushed-labels label [label]
                    no pushed-labels
                load-balancing-weight weight
                no load-balancing-weight
                [no] primary-next-hop
                    next-hop ip-address
                    no next-hop
                    pushed-labels label [label]
                    no pushed-labels
                [no] shutdown
            preference preference-value
            no preference
            revert-timer seconds
            no revert-timer
            [no] shutdown
            tunnel-table-pref preference-value
            no tunnel-table-pref
        reserved-label-block name
        no reserved-label-block
        [no] shutdown

                                                    ...

CLI Syntax:
next-hop-group index <1..32> [resolution-type {direct | indirect}]
no next-hop-group index <1..32>
    [no] shutdown

                                                    ...
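
The following is a hypothetical configuration of an endpoint policy with two NHGs of resolution type direct; the policy name, endpoint address, next-hop addresses, and label values are illustrative only. Routes whose next hop is 192.0.2.10 resolve to this policy, and packets are sprayed over NHG 1 and NHG 2.

Example:
config>router>mpls
    forwarding-policies
        forwarding-policy "ep-fwd-pol-1"
            endpoint 192.0.2.10
            revert-timer 30
            next-hop-group 1 resolution-type direct
                primary-next-hop
                    next-hop 10.10.1.2
                    pushed-labels 20001
                exit
                backup-next-hop
                    next-hop 10.10.2.2
                    pushed-labels 20002
                exit
                no shutdown
            exit
            next-hop-group 2 resolution-type direct
                primary-next-hop
                    next-hop 10.10.3.2
                    pushed-labels 20003
                exit
                no shutdown
            exit
            no shutdown
        exit
        no shutdown
    exit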

4.2. Feature Validation and Operation Procedures

The MPLS forwarding policy follows a number of configuration and operation rules which are enforced for the lifetime of the policy.

There are two levels of validation:

  1. The first level validation is performed at provisioning time. The user can bring up a policy (no shutdown command) once these validation rules are met. Afterwards, the policy is stored in the forwarding policy database.
  2. The second level validation is performed when the database resolves the policy.

4.2.1. Policy Parameters and Validation Procedure Rules

The following policy parameters and validation rules apply to the MPLS forwarding policy and are enforced at configuration time:

  1. A policy must have either the endpoint or the binding-label command configured to be valid; otherwise, the no shutdown command is not allowed. These commands are mutually exclusive within a policy.
  2. The endpoint command specifies that this policy is used for resolving the next hop of IPv4 or IPv6 packets, of BGP prefixes in GRT, of static routes in GRT, of VPRN IPv4 or IPv6 prefixes, or of service packets of EVPN prefixes. It is also used to resolve the next hop of BGP-LU routes.
    The resolution of prefixes in these contexts matches the IPv4 or IPv6 next-hop address of the prefix against the address of the endpoint. The family of the primary and backup next hops of the NHGs within the policy are not relevant to the resolution of prefixes using the policy.
    See Tunnel Table Handling of MPLS Forwarding Policy for information about CLI commands for binding these contexts to an endpoint policy.
  3. The binding-label command allows the user to specify the label for binding to the policy such that labeled packets matching the ILM of the binding label can be forwarded over the NHG of the policy.
    The ILM entry is created only when a label is configured. Only a provisioned binding label from a reserved label block is supported. The name of the reserved label block must be specified using the reserved-label-block command.
    The payload of the packet forwarded using the ILM (payload underneath the swapped label) can be IPv4, IPv6, or MPLS. The family of the primary and backup next hops of the NHG within the policy are not relevant to the type of payload of the forwarded packets.
  4. Changes to the values of the endpoint and binding-label parameters require a shutdown of the specific forwarding policy context.
  5. A change to the name of the reserved-label-block requires a shutdown of the forwarding-policies context. The shutdown is not required if the user extends or shrinks the range of the reserved-label-block.
  6. The preference parameter allows the user to configure multiple endpoint forwarding policies with the same endpoint address value, or multiple label-binding policies with the same binding label, providing the capability to achieve a 1:N backup strategy for the forwarding policy. Only the most preferred policy (lowest numerical preference value) is activated in the data path, as explained in Policy Resolution and Operational Procedures. See the example following this list.
  7. Changes to the value of the preference parameter require a shutdown of the specific forwarding-policy context.
  8. A maximum of eight label-binding policies, with different preference values, are allowed for each unique value of the binding label.
    Label-binding policies with exactly the same value of the tuple {binding label, preference} are duplicates and their configuration is not allowed.
    The user cannot perform no shutdown on the duplicate policy.
  9. A maximum of eight endpoint policies, with different preference values, are allowed for each unique value of the tuple {endpoint}.
    Endpoint policies with exactly the same value of the tuple {endpoint, preference} are duplicates and their configuration is not allowed.
    The user cannot perform no shutdown on the duplicate policy.
  10. The metric parameter is supported with the endpoint policy only and is inherited by the routes which resolve their next hop to this policy.
  11. The revert-timer command configures the time to wait before switching back the resolution from the backup next hop to the restored primary next hop within a given NHG. By default, this timer is disabled meaning that the NHG will immediately revert to the primary next hop when it is restored.
    The revert timer is restarted each time the primary next hop flaps and comes back up again while the previous timer is still running. If the revert timer value is changed while the timer is running, it is restarted with the new value.
  12. The MPLS forwarding policy feature allows for a maximum of 32 NHGs, each consisting of, at most, one primary next hop and one backup next hop.
  13. The next-hop command allows the user to specify a direct next-hop address or an indirect next-hop address.
  14. A maximum of ten labels can be specified for a primary or backup direct next hop using the pushed-labels command. The label stack is programmed using a super-NHLFE directly on the outgoing interface of the direct primary or backup next hop.
    Note:

    This policy differs from the SR-TE LSP or SR policy implementation, which can push a total of 11 labels because it uses a hierarchical NHLFE (a super-NHLFE with a maximum of 10 labels pointing to the top SID NHLFE).

  15. The resolution-type {direct | indirect} command allows a limited validation at configuration time of the NHGs within a policy. The no shutdown command fails if any of these rules are not satisfied. The following are the rules of this validation:
    1. NHGs within the same policy must be of the same resolution type.
    2. A forwarding policy can have a single NHG of resolution type indirect with a primary next hop only or with both primary and backup next hops. An NHG with a backup next hop only is not allowed.
    3. A forwarding policy can have one or more NHGs of resolution type direct with a primary next hop only or with both primary and backup next hops. An NHG with a backup next hop only is not allowed.
    4. A check is performed to make sure the address values of the primary and backup next hops within the same NHG are not duplicates. No check is performed for duplicate primary or backup next-hop addresses across NHGs.
    5. A maximum of 64,000 forwarding policies of any combination of label binding and endpoint types can be configured on the system.
  16. The IP address family of an endpoint policy is determined by the family of the endpoint parameter. It is populated in the TTMv4 or TTMv6 table accordingly. A label-binding policy does not have an IP address family associated with it and is programmed into the label (ILM) table.
    The following are the IP type combinations for the primary and backup next hops of the NHGs of a policy:
    1. A primary or a backup indirect next hop with no pushed labels (label-binding policy) can be IPv4 or IPv6. A mix of both IP types is allowed within the same NHG.
    2. A primary or backup direct next hop with no pushed labels (label-binding policy) can be IP types IPv4 or IPv6. A mix of both families is allowed within the same NHG.
    3. A primary or a backup direct next hop with pushed labels (both endpoint and label binding policies) can be IP types IPv4 or IPv6. A mix of both families is allowed within the same NHG.
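
The following hypothetical example illustrates rules 3, 6, and 8: two label-binding policies share the same binding label with different preference values, providing a 1:N backup strategy. The policy names, next-hop addresses, and label values are illustrative, and the reserved label block "static-label-range-1" is assumed to be defined elsewhere on the system with a range that includes label 20000.

Example:
config>router>mpls
    forwarding-policies
        reserved-label-block "static-label-range-1"
        forwarding-policy "lbl-bind-pol-primary"
            binding-label 20000
            preference 50
            next-hop-group 1 resolution-type direct
                primary-next-hop
                    next-hop 10.20.1.2
                    pushed-labels 30001
                exit
                no shutdown
            exit
            no shutdown
        exit
        forwarding-policy "lbl-bind-pol-backup"
            binding-label 20000
            preference 150
            next-hop-group 1 resolution-type direct
                primary-next-hop
                    next-hop 10.20.2.2
                    pushed-labels 30002
                exit
                no shutdown
            exit
            no shutdown
        exit
    exit

Only the most preferred policy, "lbl-bind-pol-primary" with preference 50, is activated and programmed for binding label 20000; if it goes down, "lbl-bind-pol-backup" is activated.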

4.2.2. Policy Resolution and Operational Procedures

This section describes the validation of parameters performed at resolution time, as well as the details of the resolution and operational procedures.

  1. The following parameter validation is performed by the forwarding policy database at resolution time; meaning each time the policy is re-evaluated:
    1. If the NHG primary or backup next hop resolves to a route whose type does not match the configured value in resolution-type, that next hop is made operationally “down”.
      A DOWN reason code shows in the state of the next hop.
    2. The primary and backup next hops of an NHG are looked up in the routing table. The lookups can match a direct next hop in the case of the direct resolution type, in which case the next hop can be part of a primary or secondary subnet of the outgoing interface. They can also match a static, IGP, or BGP route for the indirect resolution type, but only the set of IP next hops of the route is selected. Tunnel next hops are not selected and, if they are the only next hops for the route, the NHG is put in the operationally “down” state.
    3. The first 32, out of a maximum of 64, resolved IP next hops are selected for resolving the primary or backup next hop of an NHG of resolution-type indirect.
    4. If the primary next hop is operationally “down”, the NHG will use the backup next hop if it is UP. If both are operationally DOWN, the NHG is DOWN. See Data Path Support for details of the active path determination and the failover behavior.
    5. If the binding label is not available, meaning it is either outside the range of the configured reserved-label-block, or is used by another MPLS forwarding policy or by another application, the label-binding policy is put operationally “down” and a retry mechanism will check the label availability in the background.
      A policy level DOWN reason code is added to alert users who may then choose to modify the binding label value.
    6. No validation is performed for the pushed label stack of a primary or backup next hop within an NHG or across NHGs. Users are responsible for validating their configuration.
  2. The forwarding policy database activates the best endpoint policy, among the named policies sharing the same value of the tuple {endpoint}, by selecting the lowest preference value policy. This policy is then programmed into the TTM and into the tunnel table in the data path.
    If this policy goes DOWN, the forwarding policy database performs a re-evaluation and activates the named policy with the next lowest preference value for the same tuple {endpoint}.
    If a more preferred policy comes back up, the forwarding policy database reverts to the more preferred policy and activates it.
  3. The forwarding policy database similarly activates the best label-binding policy, among the named policies sharing the same binding label, by selecting the lowest preference value policy. This policy is then programmed into the label FIB table in the data path as detailed in Data Path Support.
    If this policy goes DOWN, the forwarding policy database performs a re-evaluation and activates the named policy with the next lowest preference value for the same binding label value.
    If a more preferred policy comes back up, the forwarding policy database reverts to the more preferred policy and activates it.
  4. The active policy performs ECMP, weighted ECMP, or CBF over the active (primary or backup) next hops of the NHG entries.
  5. When used in the PCEP application, each LSP in a label-binding policy is reported separately by PCEP using the same binding label. The forwarding behavior on the node is the same whether the binding label of the policy is advertised in PCEP or not.
  6. A policy is considered UP when it is the best policy activated by the forwarding policy database and when at least one of its NHGs is operationally UP. An NHG of an active policy is considered UP when at least one of its primary or backup next hops is operationally UP.
  7. When the config>router>mpls or config>router>mpls>forwarding-policies context is set to shutdown, all forwarding policies are set to DOWN in the forwarding policy database and deprogrammed from the IOM and data path.
    Prefixes which were being forwarded using the endpoint policies revert to the next preferred resolution type configured in the specific context (GRT, VPRN, or EVPN).
  8. When an NHG is set to shutdown, it is deprogrammed from the IOM and data path. Flows of prefixes which were being forwarded to this NHG are re-allocated to other NHGs based on the ECMP, Weighted ECMP, or CBF rules.
  9. When a policy is set to shutdown, it is deleted in the forwarding policy database and deprogrammed from the IOM and data path. Prefixes which were being forwarded using this policy will revert to the next preferred resolution type configured in the specific context (GRT, VPRN, or EVPN).
  10. The no forwarding-policies command deletes all policies from the forwarding policy database provided none of them are bound to any forwarding context (GRT, VPRN, or EVPN). Otherwise, the command fails.

4.3. Tunnel Table Handling of MPLS Forwarding Policy

An endpoint forwarding policy, once validated as the most preferred policy for a given endpoint address, is added to the TTMv4 or TTMv6 table according to the address family of the endpoint parameter. A new owner of 'mpls-fwd-policy' is used. A tunnel ID is allocated to each policy and is added to the TTM entry for the policy.

The TTM preference value of a forwarding policy is configurable using the parameter tunnel-table-pref. The default value of this parameter is 255.

Each individual endpoint forwarding policy can also be assigned a preference value using the preference command with a default value of 255. When the forwarding policy database compares multiple forwarding policies with the same endpoint address, the policy with the lowest numerical preference value is activated and programmed into TTM. The TTM preference assigned to the policy is its own configured value in the tunnel-table-pref parameter.
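
The following hypothetical fragment shows the two preference levels together; the policy name, endpoint address, and values are illustrative only.

Example:
config>router>mpls>forwarding-policies
    forwarding-policy "ep-fwd-pol-1"
        endpoint 192.0.2.10
        preference 100
        tunnel-table-pref 20

The preference value of 100 is used only to select among endpoint policies that share the endpoint address 192.0.2.10, while the tunnel-table-pref value of 20 is the TTM preference given to the activated policy when routes and services compare it against other tunnel types to the same destination.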

If an active forwarding policy preference has the same value as another tunnel type for the same destination in TTM, then routes and services which are bound to both types of tunnels use the default TTM preference for the two tunnel types to select the tunnel to bind to as shown in Table 39.

Table 39:  Route Preferences

Route Preference             Value  Release Introduced
ROUTE_PREF_RIB_API           3      new in 16.0.R4 for RIB API IPv4 and IPv6 tunnel table entry
ROUTE_PREF_MPLS_FWD_POLICY   4      new in 16.0.R4 for MPLS forwarding policy of endpoint type
ROUTE_PREF_RSVP              7
ROUTE_PREF_SR_TE             8      new in 14.0
ROUTE_PREF_LDP               9
ROUTE_PREF_OSPF_TTM          10     new in 13.0.R1
ROUTE_PREF_ISIS_TTM          11     new in 13.0.R1
ROUTE_PREF_BGP_TTM           12     modified in 13.0.R1 (pref was 10 in R12)
ROUTE_PREF_UDP               254    introduced with 15.0 MPLS-over-UDP tunnels
ROUTE_PREF_GRE               255

An active endpoint forwarding policy populates the highest pushed label stack size among all its NHGs in the TTM. Each service and shortcut application on the router will use that value and perform a check of the resulting net label stack by counting all the additional labels required for forwarding the packet in that context.

This check is similar to the one performed for SR-TE LSP and SR policy features. If the check succeeds, the service is bound or the prefix is resolved to the forwarding policy. If the check fails, the service will not bind to this forwarding policy. Instead, it will bind to a tunnel of a different type if the user configured the use of other tunnel types. Otherwise, the service will go down. Similarly, the prefix will not get resolved to the forwarding policy and will either be resolved to another tunnel type or will become unresolved.

The following are the CLI commands for resolving the next hop of prefixes in GRT, VPRN, and EVPN MPLS into an endpoint forwarding policy. Also, BGP-LU routes can have their next hop resolved to an endpoint forwarding policy.

Example:
configure>router>static-route-entry>
    indirect ip-address
        tunnel-next-hop
            [no] disallow-igp
            resolution {any | disabled | filter}
            resolution-filter
                [no] ldp
                [no] mpls-fwd-policy
                [no] rsvp-te
                    [no] lsp name1
                    [no] lsp name2
                    .
                    .
                    [no] lsp nameN
                [no] sr-isis
                [no] sr-ospf
                [no] sr-te
                    [no] lsp
            exit
        exit
    exit
exit
configure>router>bgp>next-hop-resolution
    shortcut-tunnel
        family {ipv4 | ipv6}
            [no] disallow-igp
            resolution {any | disabled | filter}
            resolution-filter
                [no] bgp
                [no] ldp
                [no] mpls-fwd-policy
                [no] rib-api
                [no] rsvp
                [no] sr-isis
                [no] sr-ospf
                [no] sr-policy
                [no] sr-te
            exit
        exit
    exit
exit
configure>router>bgp>next-hop-resolution>labeled-routes
    transport-tunnel
        [no] family {vpn | label-ipv4 | label-ipv6}
            resolution {any | disabled | filter}
            resolution-filter
                [no] bgp
                [no] ldp
                [no] mpls-fwd-policy
                [no] rib-api
                [no] rsvp
                [no] sr-isis
                [no] sr-ospf
                [no] sr-policy
                [no] sr-te
                [no] udp
            exit
        exit
    exit
exit
configure>service>vprn>
    auto-bind-tunnel
        resolution {any | disabled | filter}
        resolution-filter
            [no] bgp
            [no] gre
            [no] ldp
            [no] mpls-fwd-policy
            [no] rib-api
            [no] rsvp
            [no] sr-isis
            [no] sr-ospf
            [no] sr-policy
            [no] sr-te
            [no] udp
        exit
    exit
exit
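
As a hypothetical illustration of the commands above, the following excerpt enables resolution of BGP shortcut-tunnel next hops and of VPRN auto-bind-tunnel next hops to an MPLS forwarding policy; only the mpls-fwd-policy resolution filter entry is enabled, and all other values are illustrative.

Example:
config>router>bgp>next-hop-resolution
    shortcut-tunnel
        family ipv4
            resolution filter
            resolution-filter
                mpls-fwd-policy
            exit
        exit
    exit
config>service>vprn>auto-bind-tunnel
    resolution filter
    resolution-filter
        mpls-fwd-policy
    exit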

4.4. Data Path Support

Note:

The data path model for both the MPLS forwarding policy and the RIB API is the same. Unless explicitly stated, the selection of the active next hop within each NHG and the failover behavior within the same NHG or across NHGs is the same.

4.4.1. NHG of Resolution Type Indirect

Each NHG is modeled as a single NHLFE. The following are the specifics of the data path operation:

  1. Forwarding over the primary or backup next hop is modeled as a swap operation from the binding label to an implicit-null label over multiple outgoing interfaces (multiple NHLFEs) corresponding to the resolved next hops of the indirect route.
  2. Packets of flows are sprayed over the resolved next hops of an NHG with resolution of type indirect as a one-level ECMP spraying. See Spraying of Packets in a MPLS Forwarding Policy.
  3. An NHG of resolution type indirect uses a single NHLFE and does not support uniform failover. CPM programs only the active indirect next hop, either the primary or the backup, at any given point in time.
  4. Within a given NHG, the primary next hop is the preferred active path in the absence of any failure of the NHG of resolution type indirect.
  5. The forwarding database tracks the primary or backup next hop in the routing table. A route delete of the primary indirect next hop causes CPM to program the backup indirect next hop in the data path.
    A route modify of the indirect primary or backup next hop causes CPM to update its resolved next hops and to update the data path if it is the active indirect next hop.
  6. When the primary indirect next hop is restored and is added back into the routing table, CPM waits for an amount of time equal to the user programmed revert-timer before updating the data path. However, if the backup indirect next hop fails while the timer is running, CPM updates the data path immediately.
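
The following is a hypothetical label-binding policy with a single NHG of resolution type indirect; the policy name, binding label, and next-hop addresses are illustrative, the binding label is assumed to fall within the configured reserved label block, and the indirect next hops are assumed to be resolved by static, IGP, or BGP routes in the routing table.

Example:
config>router>mpls>forwarding-policies
    forwarding-policy "lbl-bind-indirect-1"
        binding-label 20005
        next-hop-group 1 resolution-type indirect
            primary-next-hop
                next-hop 10.50.1.1
            exit
            backup-next-hop
                next-hop 10.50.2.1
            exit
            no shutdown
        exit
        no shutdown

Packets received with binding label 20005 are swapped to the implicit-null label and sprayed over the IP next hops of the route that resolves the active indirect next hop, 10.50.1.1 in the absence of failure.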

4.4.2. NHG of Resolution Type Direct

The following rules are used for an NHG with a resolution type of direct:

  1. Each NHG is modeled as a pair of {primary, backup} NHLFEs. The following are the specifics of the label operation:
    1. For a label-binding policy, forwarding over the primary or backup next hop is modeled as a swap operation from the binding label to the configured label stack or to an implicit-null label (if the pushed-labels command is not configured) over a single outgoing interface to the next hop.
    2. For an endpoint policy, forwarding over the primary or backup next hop is modeled as a push operation from the binding label to the configured label stack or to an implicit-null label (if the pushed-labels command is not configured) over a single outgoing interface to the next hop.
    3. The labels, configured by the pushed-labels command, are not validated.
  2. By default, packets of flows are sprayed over the set of NHGs with resolution of type direct as a one-level ECMP spraying. See Spraying of Packets in a MPLS Forwarding Policy.
  3. The user can enable weighted ECMP forwarding over the NHGs by configuring a weight against all the NHGs of the policy. See Spraying of Packets in a MPLS Forwarding Policy.
  4. Within a given NHG, the primary next hop is the preferred active path in the absence of any failure of the NHG of resolution type direct.
    Note:

    The RIB API feature can change the active path away from the default. The gRPC client can issue a next-hop switch instruction to activate either the primary or the backup path at any time.

  5. The NHG supports uniform failover. The forwarding policy database assigns a Protect-Group ID (PG-ID) to each of the primary next hop and the backup next hop and programs both of them in the data path. A failure of the active path switches traffic to the other path following the uniform failover procedures as described in Active Path Determination and Failover in a NHG of Resolution Type Direct.
  6. The forwarding database tracks the primary or backup next hop in the routing table. A route delete of the primary or backup direct next hop causes CPM to send the corresponding PG-ID switch to the data path.
    A route modify of the direct primary or backup next hop causes CPM to update the MPLS forwarding database and to update the data path since both next hops are programmed.
  7. When the primary direct next hop is restored and is added back into the routing table, CPM waits for an amount of time equal to the user programmed revert-timer before activating it and updating the data path. However, if the backup direct next hop fails while the timer is running, CPM activates it and updates the data path immediately. The latter failover to the restored primary next hop is performed using the uniform failover procedures as described in Active Path Determination and Failover in a NHG of Resolution Type Direct.
    Note:

    RIB API does not support the revert timer. The gRPC client can issue a next-hop switch instruction to activate the restored primary next hop.

  8. CPM keeps track of, and updates the IOM with, the active or inactive state of the primary and backup next hops of each NHG following a failure event, a reversion to the primary next hop, or a successful next-hop switch request instruction (RIB API only).

4.4.2.1. Active Path Determination and Failover in a NHG of Resolution Type Direct

An NHG of resolution type direct supports uniform failover either within an NHG or across NHGs of the same policy. These uniform failover behaviors are mutually exclusive on a per-NHG basis, depending on whether the NHG has a single primary next hop or both a primary and a backup next hop.

When an NHG has both a primary and a backup next hop, the forwarding policy database assigns a Protect-Group ID (PG-ID) to each and programs both in data path. The primary next hop is the preferred active path in the absence of any failure of the NHG.

When a failure affects the active next hop, whether primary or backup, CPM signals the corresponding PG-ID switch to the data path, which immediately begins using the NHLFE of the other next hop for packets of flows mapped to NHGs of all forwarding policies that share the failed next hop.

An interface down event sent by CPM to the data path causes the data path to switch the PG-ID of all next hops associated with this interface and perform the uniform failover procedure for NHGs of all policies which share these PG-IDs.

Any subsequent network event causing a failure of the newly active next hop while the originally active next hop is still down blackholes the traffic of this NHG until CPM updates the policy to redirect the affected flows to the remaining NHGs of the forwarding policy.

When the NHG has only a primary next hop and it fails, CPM signals the corresponding PG-ID switch to the data path which then uses the uniform failover procedure to immediately re-assign the affected flows to the other NHGs of the policy.

A subsequent failure of the active next hop of an NHG to which the affected flow was re-assigned in the first failure event causes the data path to use the uniform failover procedure to immediately switch the flow to the other next hop within the same NHG.

Figure 53 illustrates the failover behavior for the flow packets assigned to an NHG with both a primary and backup next hop and to an NHG with a single primary next hop.

The notation NHGi{Pi,Bi} refers to NHG "i" which consists of a primary next hop (Pi) and a backup next hop (Bi). When an NHG does not have a backup next hop, it is referred to as NHGi{Pi,Bi=null}.

Figure 53:  NHG Failover Based on PG-ID Switch 

4.4.3. Spraying of Packets in a MPLS Forwarding Policy

When the node operates as an LER and forwards unlabeled packets over an endpoint policy, the spraying of packets over the multiple NHGs of type direct or over the resolved next hops of a single NHG of type indirect follows prior implementation. Refer to the 7450 ESS, 7750 SR, 7950 XRS, and VSR Interface Configuration Guide.

When the node operates as an LSR, it forwards labeled packets matching the ILM of the binding label over the label-binding policy. An MPLS packet, including a MPLS-over-GRE packet, received over any network IP interface with a binding label in the label stack, is forwarded over the primary or backup next hop of either the single NHG of type indirect or of a selected NHG among multiple NHGs of type direct.

The router performs the following procedures when spraying labeled packets over the resolved next hops of a NHG of resolution type indirect or over multiple NHGs of type direct.

  1. The router performs the GRE header processing as described in MPLS-over-GRE termination if the packet is MPLS-over-GRE encapsulated. Refer to the 7450 ESS, 7750 SR, 7950 XRS, and VSR Router Configuration Guide.
  2. The router then pops one or more labels and, if there is a match with the ILM of a binding label, the router swaps the label to the implicit-null label and forwards the packet to the outgoing interface. The outgoing interface is selected from the set of primary or backup next hops of the active policy based on the LSR hash on the headers of the received MPLS packet.
    1. The hash calculation follows the method configured with the lsr-load-balancing {lbl-only | lbl-ip | ip-only} command if the packet is MPLS-only encapsulated.
    2. The hash calculation follows the method described in LSR Hashing of MPLS-over-GRE Encapsulated Packet if the packet is MPLS-over-GRE encapsulated. Refer to the 7450 ESS, 7750 SR, 7950 XRS, and VSR Interface Configuration Guide.
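
As a hypothetical example, and assuming the per-interface form of the lsr-load-balancing command under the interface load-balancing context, the following sets the LSR hash to use both the label stack and the IP headers for MPLS-only encapsulated packets received on that interface; the interface name is illustrative.

Example:
config>router>interface "to-core-1"
    load-balancing
        lsr-load-balancing lbl-ip
    exit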

4.4.4. Outgoing Packet Ethertype Setting and TTL Handling in Label Binding Policy

The following rules determine how the router sets the Ethertype field value of the outgoing packet:

  1. If the swapped label is not the Bottom-of-Stack label, the Ethertype is set to the MPLS value.
  2. If the swapped label is the Bottom-of-Stack label and the outgoing label is not implicit-null, the Ethertype is set to the MPLS value.
  3. If the swapped label is the Bottom-of-Stack label and the outgoing label is implicit-null, the Ethertype is set to the IPv4 or IPv6 value when the first nibble of the exposed IP packet is 4 or 6 respectively.

The router sets the TTL of the outgoing packet as follows:

  1. The TTL of a forwarded IP packet is set to MIN(MPLS_TTL-1, IP_TTL), where MPLS_TTL refers to the TTL in the outermost label in the popped stack and IP_TTL refers to the TTL in the exposed IP header.
  2. The TTL of a forwarded MPLS packet is set to MIN(MPLS_TTL-1, INNER_MPLS_TTL), where MPLS_TTL refers to the TTL in the outermost label in the popped stack and INNER_MPLS_TTL refers to the TTL in the exposed label.
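
As a worked example with hypothetical values: a labeled IP packet received with MPLS_TTL of 200 and IP_TTL of 64 is forwarded with an IP TTL of MIN(200-1, 64) = 64, while a labeled MPLS packet received with MPLS_TTL of 10 and INNER_MPLS_TTL of 255 is forwarded with an exposed label TTL of MIN(10-1, 255) = 9.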

4.4.5. Ethertype Setting and TTL Handling in Endpoint Policy

The router sets the Ethertype field value of the outgoing packet to the MPLS value.

The router checks and decrements the TTL field of the received IPv4 or IPv6 header and sets the TTL of all labels of the label stack specified in the pushed-labels command according to the following rules:

  1. The router propagates the decremented TTL of the received IPv4 or IPv6 packet into all labels of the pushed label stack for a prefix in GRT.
  2. The router then follows the configuration of the TTL propagation in the case of an IPv4 or IPv6 prefix forwarded in a VPRN context:
Example:
config>router>ttl-propagate>vprn-local {none | vc-only | all}
config>router>ttl-propagate>vprn-transit {none | vc-only | all}
config>service>vprn>ttl-propagate>local {inherit | none | vc-only | all}
config>service>vprn>ttl-propagate>transit {inherit | none | vc-only | all}

When an IPv6 packet in GRT is forwarded using an endpoint policy with an IPv4 endpoint, the IPv6 explicit-null label is pushed first, before the label stack specified in the pushed-labels command.
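
As a hypothetical illustration, if pushed-labels 30001 is configured on the active next hop, the transmitted label stack for an IPv6 GRT packet, from top to bottom, is label 30001, the IPv6 explicit-null label (value 2), followed by the IPv6 packet.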

4.5. Weighted ECMP Enabling and Validation Rules

Weighted ECMP is supported within an endpoint or a label-binding policy when the NHGs are of resolution type direct. Weighted ECMP is not supported with an NHG of type indirect.

Weighted ECMP is performed on labeled or unlabeled packets forwarded over the set of NHGs in a forwarding policy when all NHG entries have a load-balancing-weight configured. If one or more NHGs have no load-balancing-weight configured, the spraying of packets over the set of NHGs reverts to plain ECMP.

Also, the weighted-ecmp command in GRT (configure>router>weighted-ecmp) or in a VPRN instance (configure>service>vprn>weighted-ecmp) is not required to enable weighted ECMP forwarding in an MPLS forwarding policy. These commands are used when forwarding over multiple tunnels or LSPs. Weighted ECMP forwarding over the NHGs of a forwarding policy is strictly governed by the explicit configuration of a weight against each NHG.

The weighted ECMP normalized weight calculated for an NHG index causes the data path to program this index as many times as the normalized weight dictates for the purpose of spraying the packets.
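
The following hypothetical fragment enables weighted ECMP over two NHGs of an endpoint policy; the weights and the policy name are illustrative, and the primary and backup next hops of each NHG are omitted for brevity.

Example:
config>router>mpls>forwarding-policies>forwarding-policy "ep-fwd-pol-1"
    next-hop-group 1 resolution-type direct
        load-balancing-weight 200
    exit
    next-hop-group 2 resolution-type direct
        load-balancing-weight 100
    exit

With these weights, the normalized result programs the index of NHG 1 twice as often as that of NHG 2, so approximately two thirds of the flows are sprayed over NHG 1.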

4.6. Statistics

4.6.1. Ingress Statistics

The user enables ingress statistics for an MPLS forwarding policy using the CLI commands provided in Introduction.

The ingress statistics feature is associated with the binding label, that is the ILM of the forwarding policy, and provides aggregate packet and octet counters for packets matching the binding label.

The per-ILM stat index for the MPLS forwarding policy feature is assigned at the time the first instance of the policy is programmed in the data path. All instances of the same policy, that is, policies with the same binding label regardless of the preference parameter value, share the same stat index.

The stat index remains assigned as long as the policy exists and the ingress-statistics context is not shutdown. If the last instance of the policy is removed from the forwarding policy database, the CPM frees the stat index and returns it to the pool.
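
The following hypothetical fragment enables ingress statistics on a label-binding policy using the ingress-statistics context shown in the CLI hierarchy in Introduction; the policy name is illustrative.

Example:
config>router>mpls>forwarding-policies>forwarding-policy "lbl-bind-pol-primary"
    ingress-statistics
        no shutdown
    exit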

If ingress statistics are not configured or are shut down in a specific instance of the forwarding policy, identified by a unique value of the pair {binding-label, preference}, an assigned stat index is not incremented when that instance of the policy is activated.

If a stat index is not available at allocation time, the allocation fails and a retry mechanism will check the stat index availability in the background.