This draft describes a new Metadata Path Attribute and some Sub-TLVs for egress routers to advertise the Metadata about the attached edge services (ES). The edge service Metadata can be used by the ingress routers in the 5G Local Data Network to make path selections not only based on the routing cost but also the running environment of the edge services. The goal is to improve latency and performance for 5G edge services.¶
The extension enables an edge service at one specific location to be more preferred than the others with the same IP address (ANYCAST) to receive data flow from a specific source, like a specific User Equipment (UE).¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 6 June 2025.¶
Copyright (c) 2024 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
This document describes a new Metadata Path Attribute added to a BGP UPDATE message [RFC4271] for egress routers to advertise the Metadata about 5G low latency edge services directly attached to the egress routers. 5G [TS.23.501-3GPP] is characterized by having edge services close to the Cell Towers and reachable via Local Data Networks (LDNs). From an IP network perspective, the 5G LDN is a limited domain [RFC8799] with edge services a few hops away from the ingress nodes. Only selected UE services are considered 5G low latency edge services.¶
Note: The proposed edge service Metadata Path Attribute is not intended for the best-effort services reachable via the public Internet. The information carried by the Metadata Path Attribute can be used by the ingress routers to make path selections for selected low latency services based not only on the network distance but also on the running environment of the edge cloud sites. The goal is to improve latency and performance for 5G ultra-low latency services.¶
This extension is targeted for a single domain with a BGP Route Reflector (RR) [RFC4456] controlling the propagation of the BGP UPDATEs. The edge service Metadata Path Attribute is only attached to the low latency services (routes) hosted in the 5G edge cloud sites. These routes are only a small subset of services initiated from UEs, not for UEs accessing many internet sites.¶
While the proposed Metadata Path Attribute is particularly beneficial for low latency services, the Metadata Path Attributes can be expanded to propagate information about GPU availability, power, or other resources necessary for compute-intensive services such as AI and machine learning. This flexibility makes it a valuable tool for a wide range of applications beyond just low latency services when used within a limited domain network.¶
The following conventions are used in this document.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
The goal of this edge service Metadata Path Attribute is for egress routers to propagate the metrics about the running environment for a subset of edge services to ingress routers so that the ingress routers can make path selections based not only on the routing cost but also on the running environment for those edge services. The BGP speakers that do not support the Metadata Path Attribute can ignore the Metadata Path Attribute in a BGP UPDATE Message. All intermediate nodes can forward the entire BGP UPDATE as it is. Multiple metrics can be attached to one Metadata Path Attribute. One Metadata Path Attribute can contain computing service capability information, computing service states, computing resource states of the corresponding edge site, or more. Computing service capability information can be used to record information of the computing power node or initialization deployment information for computing service initialization. Computing service states can include the number of service connections, service duration, and so on. Computing resource states can be detailed information on computing resources such as CPU/GPU. They can also be an abstract metric derived from these detailed parameters to indicate the resource status of the edge site. More metrics about the running environment could be attached to the Metadata Path Attribute; e.g., some of the metrics being discussed by the IETF CATS Working Group. This document illustrates a few examples of Sub-TLVs of the metrics under the edge service Metadata Path Attribute:¶
This section specifies how those Metadata impact the ingress node's path selections.¶
When an ingress router receives BGP UPDATEs for the same IP prefix from multiple egress routers, all these egress routers' loopback addresses are considered as the next hops for the IP prefix. For the selected low latency edge services, the ingress router BGP engine would call an edge service Management function that can select paths based on the edge service Metadata received. Section 5.1 has an exemplary algorithm to compute the weighted path cost based on the edge service Metadata carried by the Sub-TLV(s) specified in this document.¶
Section 5 has the detailed description of the edge service Metadata influenced optimal path selection.¶
When the ingress router receives a packet and does a lookup on the route in the FIB, it determines the destination prefix's entire path including the optimal egress node. The ingress router encapsulates the packet destined towards the optimal egress router. For routes that carry the Metadata Path Attribute but lack the Tunnel Encapsulation Path Attribute [RFC9012], it is recommended that the ingress router encapsulate the original packet using an IP-in-IP header. This encapsulation ensures that intermediate nodes not supporting the Metadata Path Attribute do not forward the packet to unintended destinations. The outer header should set the destination address to the optimal egress router and the source address to the ingress router.¶
For routes without the Metadata Path Attribute, no changes are required. Packets are forwarded according to existing behavior: encapsulation is applied when Tunnel Attributes are present, and packets are forwarded without encapsulation when they are not.¶
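The forwarding decision described in the two preceding paragraphs can be summarized by the following Python sketch. It is purely illustrative: the Route fields and the encapsulation helper functions are hypothetical names used for this example, not structures defined by this document.¶

   # Illustrative sketch of the ingress forwarding decision; names are hypothetical.
   from dataclasses import dataclass

   @dataclass
   class Route:
       prefix: str
       optimal_egress: str          # loopback of the selected egress router
       has_metadata_attr: bool      # Metadata Path Attribute present?
       has_tunnel_encap_attr: bool  # Tunnel Encapsulation Attribute [RFC9012] present?

   def encapsulate_ip_in_ip(packet: bytes, src: str, dst: str) -> bytes:
       # Placeholder: a real implementation prepends an outer IP header with
       # source = ingress router and destination = optimal egress router.
       return packet

   def encapsulate_per_tunnel_attr(packet: bytes, route: Route) -> bytes:
       return packet                # existing [RFC9012] behavior, unchanged

   def forward(packet: bytes, route: Route, ingress_loopback: str) -> bytes:
       if route.has_metadata_attr and not route.has_tunnel_encap_attr:
           # Recommended: IP-in-IP toward the optimal egress so that intermediate
           # nodes unaware of the Metadata Path Attribute still deliver the packet
           # to the intended egress router.
           return encapsulate_ip_in_ip(packet, ingress_loopback, route.optimal_egress)
       if route.has_tunnel_encap_attr:
           return encapsulate_per_tunnel_attr(packet, route)
       return packet                # no Metadata Path Attribute: unchanged behavior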
For subsequent packets belonging to the same flow, the ingress router needs to forward them to the same egress router unless the selected egress router is no longer reachable. Forwarding packets for a particular flow to the same egress router, also known as Flow Affinity, is supported by many commercial routers. Most registered EC services have relatively short-lived flows.¶
How Flow Affinity is implemented is out of the scope for this document.¶
When a UE moves to a new 5G gNB which is anchored to the same UPF, the packets from the UE traverse to the same ingress router. Path selection and forwarding behavior are the same as before.¶
If the UE maintains the same IP address when anchored to a new UPF, the directly connected ingress router might use the information passed from a neighboring router to derive the optimal BGP Next Hop for this route. The detailed algorithm is out of the scope of this document.¶
The Metadata Path Attribute is an optional non-transitive BGP Path attribute that carries metrics and Metadata about the edge services attached to the egress router. The Metadata Path Attribute (TBD1) consists of a set of Sub-TLVs, and each Sub-TLV contains information for specific metrics of the edge services.¶
BGP Peers that intend to exchange the Metadata Path Attribute should indicate this by signaling the Metadata Capability (TBD2) in the Open Capabilities field with the format described in Section 4.1.5. The web of BGP peers that exchange the Metadata Path Attributes forms a limited domain, either within a single AS or within a group of ASes under a single Administrative Authority.¶
The fields within the Metadata Path Attribute and its Sub-TLVs MUST use network byte order (big-endian), where the most significant byte is transmitted first.¶
Only a small subset of BGP UPDATE messages include the Metadata Path Attribute. The choice of which prefix to carry the Metadata Path Attribute is determined by local policies. The Metadata Path Attribute can be included in a BGP UPDATE message [RFC4271] together with other BGP Path Attributes [IANA-BGP-PARAMS], such as Communities [RFC4360], NEXT_HOP, Tunnel Encapsulation Path Attribute [RFC9012], and other BGP attributes.¶
The Metadata Path Attribute has the following characteristics:¶
A BGP speaker that advertises a BGP UPDATE message received from one of its neighbors SHOULD advertise the BGP Metadata Path Attribute received with the UPDATE message without modification only when forwarding to peers within the same domain. Otherwise, the Metadata Path Attribute should be removed. If the UPDATE message did not come with a BGP Metadata Path Attribute, the speaker MAY attach a BGP Metadata Path Attribute to the UPDATE message, if configured to do so, provided that the modification adheres to the domain's policies and security guidelines.¶
A BGP Peer receiving a BGP Metadata Path Attribute should ignore Sub-TLVs with unknown types and process the recognized Sub-TLVs. BGP Peers should not delete any Sub-TLV from the BGP Metadata Path Attribute.¶
To prevent forwarding loops and ensure consistent routing decisions, it is essential that all BGP peers within an Autonomous System (AS) adopt a unified approach to handling BGP Metadata Path Attributes. Specifically, BGP peers should consistently ignore Sub-TLVs with unknown types while processing the recognized Sub-TLVs. Additionally, BGP peers should refrain from deleting any Sub-TLV from the BGP Metadata Path attribute. This ensures that all peers have a common understanding of the routing information and reduces the risk of routing inconsistencies that could lead to forwarding loops.¶
By default, a BGP speaker does not report any unrecognized Sub-TLVs within a Metadata Path Attribute unless configured to send a notification to its management system. The ingress node should be configured with an algorithm to combine the recognized metrics carried by the Sub-TLVs within a Metadata Path Attribute of the received BGP UPDATE message.¶
To ensure consistent route selection, a deployment specific algorithm should be configured across all ingress nodes to factor in the Metadata's contribution alongside existing policies. This will help the ingress node make informed decisions about the optimal path to the next-hop, considering both traditional routing factors and the additional insights provided by the Metadata.¶
The Metadata Path Attribute MUST contain at least one Metadata Sub-TLV. Multiple Metadata Sub-TLVs can be included in a Metadata Path Attribute in one BGP UPDATE message. The content of the Sub-TLVs present in the BGP Metadata Path Attribute is determined by configuration. The domain ingress nodes should process the recognized Sub-TLVs carried by the Metadata Path Attribute and ignore the unrecognized Sub-TLVs. By default, a BGP speaker does not report any unrecognized Sub-TLVs within a Metadata Path Attribute unless configured to send a notification to its management system. The ingress router should be configured with an algorithm to consider the recognized metrics carried by the Sub-TLVs within a Metadata Path Attribute of the received BGP UPDATE message.¶
The "Capabilities Optional Parameter" [RFC5492] allows a BGP speaker to indicate its capabilities during the BGP OPEN message exchange. The Capabilities Optional Parameter is a triple that includes a one-octet Capability Code, a one-octet Capability length, and a variable-length Capability Value.¶
To enable support for the Metadata Path Attribute, a new Metadata Processing Capability code (TBD2) is defined. This capability allows a BGP speaker to communicate its ability to process the Metadata Path Attribute for specified AFI and SAFI pairs.¶
The Value Field of the Metadata Processing Capability:¶
Where:¶
If a BGP speaker does not include the Metadata Processing Capability in its BGP OPEN message for a specific BGP session, or if it does not receive the Metadata Processing Capability from its peer on that session, it MUST NOT send any BGP UPDATE message on that session that binds the Metadata Path Attribute to any prefix.¶
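This per-session rule can be expressed with the following minimal Python sketch, assuming the capability is tracked per (capability, AFI/SAFI) pair; the capability code is TBD2 and is therefore shown as a placeholder.¶

   # Minimal sketch of the per-session gating rule for the Metadata Path Attribute.
   METADATA_PROCESSING_CAPABILITY = "TBD2"   # placeholder; value to be assigned by IANA

   def may_send_metadata_attr(sent_caps: set, received_caps: set, afi_safi: tuple) -> bool:
       """The attribute may be bound to a prefix on a session only if the capability
       was both sent and received for the given AFI/SAFI pair on that session."""
       key = (METADATA_PROCESSING_CAPABILITY, afi_safi)
       return key in sent_caps and key in received_caps

   # Example: capability exchanged for AFI=1/SAFI=1 only.
   sent = {(METADATA_PROCESSING_CAPABILITY, (1, 1))}
   recv = {(METADATA_PROCESSING_CAPABILITY, (1, 1))}
   print(may_send_metadata_attr(sent, recv, (1, 1)))   # True: attribute may be sent
   print(may_send_metadata_attr(sent, recv, (2, 1)))   # False: MUST NOT attach it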
Different services might have different preference index values configured for the same site. For example, Service-A requires high computing power, Service-B requires high bandwidth among its microservices, and Service-C requires high volume storage capacity. For a DC with relatively low storage capacity but high bisectional bandwidth, its preference index value for Service-B is higher and lower for Service-C. Site Preference Index can also be used to achieve stickiness for some services.¶
It is out of the scope of this document how the preference index is determined or configured.¶
The Site Preference Index Sub-TLV has the following format:¶
The Site Physical Availability Index indicates the percentage of impact on a group of routes associated with a common physical characteristic, for example, a pod, a row of server racks, a floor, or an entire DC. The purpose is to use one UPDATE message to indicate a group of routes of different NLRIs impacted by a physical event. For example, a power outage to a pod can cause the Site Physical Availability Index to be 0% for all the routes in the pod. Partial fiber cut to a row of shelves can cause the Site Physical Availability Index to be 50% for all the routes in those shelves. The value is 0-100, with 100% indicating the site is fully functional, 0% indicating the site is entirely out of service, and 50% indicating the site is 50% degraded.¶
It is recommended to associate each route with only one Site-ID. When a route is associated with multiple Site-IDs, the latest BGP UPDATE overrides any previous associations. For example, one DC can use the POD number as the Site-ID, while another DC can use a Row of Shelves as the Site-ID.¶
Cloud Site/Pod failures and degradation include, but are not limited to, a site degradation or an entire site going down caused by a variety of reasons. Examples include fiber cuts impacting a site or among pods, cooling failures, insufficient backup power, cyber attacks, too many changes outside of the maintenance window, etc. Fiber cuts are not uncommon within a Cloud site or between sites.¶
If the access network attached to the egress router doesn't support BFD, it can be challenging for the egress router to directly notify ingress routers about failures within the access network. However, if BFD is implemented on the access network, concatenated path down mechanisms can be employed to propagate failure information more effectively.¶
When there is a failure occurring at an edge site (or a pod), many instances can be impacted. In addition, the routes (i.e., the IP addresses) in the site might not be aggregated nicely. Instead of many BGP UPDATE messages to the ingress routers for all the instances, i.e. routes, impacted, the egress router can send one single BGP UPDATE to indicate the capacity availability of the site. The ingress routers can switch all or a portion of the instances associated with the site depending on how much the site is degraded.¶
The BGP UPDATE for the individual instances (i.e., the routes) can include the Capacity Availability Index solely for ingress routers to associate the routes with the Site-ID. The actual Capacity Availability Index value, i.e., the percentage for all the routes associated with the Site-ID, is generated by the egress routers with the egress routers' loopback address as the NLRI.¶
The Site Physical Availability Index Sub-TLV has a fixed length of 8 octets, including the Type field. Therefore, a Length field is not needed.¶
An egress router sets itself as the next hop for a BGP peer before sending an UPDATE with the Metadata Path Attribute that includes the Site Physical Availability Index Sub-TLV. The Site Physical Availability Index Sub-TLV (with RouteFlag-I=1) is for ingress routers to associate the Site Identifier with the prefixes.¶
A BGP UPDATE that includes the Site Availability Index Sub-TLV without specifying attached routes in the NLRI, but instead using the egress router's loopback address in the NLRI, is referred to as a standalone Site Availability Index BGP UPDATE. When an ingress router receives such a BGP UPDATE containing the Metadata Path Attribute with the standalone Site Physical Availability Index Sub-TLV from Router-X or its RR with the Originator-ID equal to Router-X, the ingress router SHOULD use the site availability index to efficiently reduce or increase the preference for all BGP routes attached to Router-X.¶
The BGP UPDATE with a standalone Site Availability Index is NOT intended for resolving NextHop.¶
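The following Python sketch illustrates one way an ingress router might apply a standalone Site Physical Availability Index UPDATE, assuming a simple RIB in which each route already records the Site-ID learned from earlier per-route Sub-TLVs; the data structures and the scaling of preference are illustrative only.¶

   # Sketch: applying a standalone Site Physical Availability Index UPDATE.
   from dataclasses import dataclass

   @dataclass
   class RibEntry:
       prefix: str
       next_hop: str               # egress loopback (or Originator-ID via the RR)
       site_id: int                # learned from earlier per-route Sub-TLVs
       base_preference: float
       effective_preference: float = 0.0

   def apply_site_availability(rib, router_x: str, site_id: int, availability_pct: int):
       """availability_pct: 0 = site out of service, 100 = fully functional."""
       for entry in rib:
           if entry.next_hop == router_x and entry.site_id == site_id:
               # Scale the preference of every route attached to Router-X and the
               # affected site; at 0% the routes become the least preferred.
               entry.effective_preference = entry.base_preference * (availability_pct / 100.0)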
It is desirable for an ingress router to select a site with the shortest processing time for an ultra-low latency service. However, it is not easy to predict which site has "the fastest processing time" or "the shortest processing delay" for an incoming service request because:¶
Even though utilization measurements, like those below, are collected by most data centers, they cannot indicate which site has the shortest processing time. A service request might be processed faster on Site-A even if Site-A is overutilized.¶
The remaining available resource at a site is a more reasonable indication of process delay for future service requests.¶
The Service Delay Prediction Index is a value that predicts processing delays at the site for future service requests. The higher the value, the longer the predicted delay.¶
While out of scope, we assume there is an algorithm that can derive the Service Delay Prediction Index that can be assigned to the egress router. When the Service Delay Prediction value is updated, which can be triggered by changes in available resources, among other events, the egress router can attach the updated Service Delay Prediction value in a Sub-TLV under the Metadata Path Attribute of the BGP Route UPDATE message to the ingress routers.¶
When ingress routers have an embedded analytics tool relying on the raw measurements, it is useful for the egress router to send the raw measurements.¶
Raw Measurement Sub-TLV has the following format:¶
- Raw-Measurement Sub-Type (16 bits): 4 (specified in this document). Indicates raw measurement Metadata associated with the edge service address.¶
- Length (8 bits): specifies the total length, in octets, of the value field, excluding the Sub-Type and the Length fields. For the Raw-Measurement Sub-Type, the length is determined by the Value field, which carries one or more types of raw measurement.¶
- Reserved (8 bits): These bits are reserved for future use and MUST be set to zero. Future documents may specify different uses for these bits.¶
- Value: The value field can contain multiple types of raw measurements, each represented as a Sub-Sub-TLV.¶
One example of a raw measurement Metadata Sub-sub-TLV is defined below to convey the total number of packets or bytes transmitted over a specified period for a particular edge service address. When a DC gateway (GW) router cannot directly access the internal state of an edge service, the volume of incoming traffic can be a reliable indicator of its load. A sudden increase in packets or bytes can signal a surge in requests, potentially leading to performance issues or resource constraints on the service side.¶
To differentiate this measurement from others that may be defined in the future, this document assigns a Sub-sub-Type value of 1 to represent the total packets or bytes transmitted to an edge service address.¶
Future documents may define additional Sub-sub-types of raw measurement metadata. Each type of raw measurement will have a unique Sub-sub-type value assigned at the time of its specification.¶
- RawPacketsMeasure Sub-sub-Type (8 bits): 1 (specified in this document). Indicates raw measurements of packets or bytes transmitted to or from the edge service address.¶
- Length (8 bits): specifies the total length in octets of the value field, excluding the Sub-sub-Type and the Length fields. For the raw measurements of packets transmitted to or from the edge service address Sub-sub-Type, the length should be 22.¶
- B flag (1 bit): If set to 0, the raw measurement is the number of packets. If set to 1, the raw measurement is the number of bytes.¶
- Reserved (7 bits): These bits are reserved for future use and MUST be set to zero.¶
- Measurement Period: BGP Update period in Seconds or user-specified period.¶
- Total number of packets to the Edge Service (32 bits): This field specifies the total number of packets transmitted to the edge service address over the specified measurement period.¶
- Total number of packets from the Edge Service (32 bits): This field specifies the total number of packets from the edge service address over the specified measurement period.¶
The receiver nodes can compute the needed metrics, such as the Service Delay Prediction, for the service based on the raw measurements sent from the egress router and preconfigured algorithms.¶
The service-oriented capability Sub-TLV is for distributing information regarding the capabilities of a specific service in a deployment environment. Depending on the deployment, a deployment environment can be an edge site or other types of environments. This information provides ingress routers or controllers with the available resources for the specific service in each deployment environment. It enables them to make well-informed decisions for the optimal paths to the selected deployment environment.¶
Currently, the Sub-TLV only has an abstract value derived from various metrics, although the specifics of this derivation are beyond the scope of this document. Importantly, this value is significant only when comparing multiple data center sites for the same service. This value is not meaningful when comparing different services, meaning the capability value relevant to Service A cannot be directly compared with that for Service B. Future enhancements may expand this sub-TLV to include more types of metrics or even raw data that represents direct metrics. This information is important in 5G network environments where efficient resource utilization is crucial for enhancing performance and service quality.¶
Multiple Service-Oriented Capability Sub-TLVs with different metric types can be encoded in a Metadata Path Attribute, indicating that multiple metrics are carried. However, if more than one Service-Oriented Capability Sub-TLV with the same metric type is encoded in a Metadata Path Attribute, only the first one will be processed and the others will be ignored.¶
The "Service-Oriented Available Resource Sub-TLV" is for distributing a metric that measures the real-time avaiable resources allocated for processing specific services or applications at an edge site. This Sub-TLV complements the "Service-Oriented Capability Sub-TLV" described in Section 4.6, which addresses the static resource capability of a site for a service. While the Capability Abstract Value provides a baseline understanding of a site's potential to handle a service, the Available Resource metric offers a dynamic perspective by quantifying how much of this capacity is currently available. This distinction is crucial for managing resource efficiency and responsiveness in network operations, ensuring that capabilities are not only available but also optimally used to meet the actual service demands.¶
Multiple Service-Oriented Available Resource Sub-TLVs with different metric types can be encoded in a Metadata Path Attribute, indicating that multiple metrics are carried. However, if more than one Service-Oriented Available Resource Sub-TLV with the same metric type is encoded in a Metadata Path Attribute, only the first one will be processed and the others will be ignored.¶
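The keep-the-first rule for duplicate metric types, which applies to both the Service-Oriented Capability and Service-Oriented Available Resource Sub-TLVs, can be sketched as follows; the metric type codes in the example are purely illustrative.¶

   # Sketch: only the first Sub-TLV of a given metric type is processed;
   # later duplicates in the same Metadata Path Attribute are ignored.
   def select_by_metric_type(sub_tlvs):
       """sub_tlvs: iterable of (metric_type, value) tuples in received order."""
       selected = {}
       for metric_type, value in sub_tlvs:
           if metric_type not in selected:   # keep the first occurrence only
               selected[metric_type] = value
       return selected

   # Example: the second entry of metric type 1 is ignored.
   print(select_by_metric_type([(1, 80), (2, 55), (1, 10)]))   # {1: 80, 2: 55}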
The propagation scope of the Metadata Path Attribute needs careful consideration to ensure it does not inadvertently leak to other BGP domains. According to Section 3 of [ATTRIBUTE-ESCAPE], it is necessary for the Route Reflector (RR) to be upgraded to constrain the propagation scope when propagating the metadata path attributes. Therefore, the Metadata Path Attribute originator sets the attribute as Non-transitive when sending the BGP UPDATE message to its corresponding RR. Non-transitive attributes are only guaranteed to be dropped during BGP route propagation by implementations that do not recognize them, ensuring that the metadata path attributes do not propagate beyond the intended scope.¶
The RR can append the NO-ADVERTISE well-known community to the BGP UPDATE message with the Metadata Path Attribute when forwarding it to the ingress routers. This signals to the ingress nodes that the associated route's Metadata Path Attribute should not be further advertised beyond their scope. This precautionary measure ensures that the receiver of the BGP UPDATE message refrains from forwarding the received update to its peers, preventing the undesired propagation of the information carried by the Metadata Path Attribute.¶
To address the potential issue where the NO-ADVERTISE well-known community of the BGP UPDATE message can be dropped by some routers, a new AS-Scope Sub-TLV can be included in the Metadata Path Attribute to prevent the Metadata Path Attribute from being leaked to unintended Autonomous Systems (ASes). The AS-Scope Sub-TLV will enforce stricter control over the propagation of the metadata by associating it with specific AS numbers.¶
When a router receives a BGP UPDATE message containing the AS-Scope Sub-TLV, it must perform the following steps to process the AS-Scope value:¶
- AS Recognition: The router will check the AS value in the AS-Scope Sub-TLV.¶
- If the AS value matches the local AS or a recognized AS in its configuration, the router will process the update as usual. If the AS value does not match or is not recognized, the router SHOULD NOT process the Metadata Path Attribute values in the BGP UPDATE and SHOULD NOT propagate the received BGP UPDATE to other nodes. I.e., treat-as-withdraw behavior will be used.¶
Example Usage:¶
Consider a scenario where a router in AS 65001 advertises a BGP UPDATE message with the AS-Scope Sub-TLV set to AS 65001. When another router in AS 65002 receives this UPDATE, it will check the AS-Scope Sub-TLV value:¶
Since AS 65002 does not match the AS value 65001, the router in AS 65002 will drop the UPDATE, preventing the metadata from leaking into AS 65002.¶
This mechanism ensures that the metadata remains confined to the intended ASes, enhancing the security and control over the propagation of BGP metadata.¶
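A minimal sketch of the AS-Scope check, assuming the Sub-TLV carries a single AS number and the set of recognized ASes comes from local configuration:¶

   # Sketch of the AS-Scope processing rule described above.
   def handle_as_scope(as_scope_value: int, local_as: int, recognized_ases: set) -> str:
       if as_scope_value == local_as or as_scope_value in recognized_ases:
           return "process"        # process the Metadata Path Attribute as usual
       # Otherwise: do not use the Metadata values and do not propagate the UPDATE,
       # i.e., apply treat-as-withdraw behavior.
       return "treat-as-withdraw"

   # Example from the text: AS 65002 receives an UPDATE scoped to AS 65001.
   print(handle_as_scope(65001, local_as=65002, recognized_ases=set()))  # treat-as-withdraw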
This section describes how the information carried in the Metadata Path Attribute is integrated into the BGP route selection process. RR and Ingress nodes can incorporate metadata into their route selection, depending on the network deployment and local policy configuration. This flexibility ensures that service specific requirements are accounted for while maintaining network-wide consistency.¶
Deployment Specific Attribute Selection:¶
Each deployment, by the local policy, chooses the subset of available metadata attributes to use in setting the local preference for the route. This tailors the route selection process to the specific needs and policies of the network. Both RRs and Ingress nodes can selectively integrate metadata attributes into their computations based on these policies.¶
Influence on the BGP Decision Process:¶
- At the Route Reflector (RR):¶
In deployments where RRs are responsible for pre-selecting routes, the RR integrates metadata and traditional BGP attributes when determining the "best" route. The RR reflects only the selected route to its client routers (e.g., Ingress PEs). This ensures that the reflected route already aligns with service-specific requirements.¶
- At the Ingress Node:¶
When the RR reflects multiple routes (e.g., using Add Paths), the Ingress node receives all candidate routes. It then integrates metadata attributes with traditional BGP attributes to compute the preference for a route. This allows the Ingress node to make service specific routing decisions based on its local policy and visibility into metadata.¶
Policy Driven Combined Preference Evaluation:¶
The preference for a route is computed based on a weighted combination of metadata attributes and traditional BGP attributes. The weights are determined by local policy:¶
Handling Degraded Metrics:¶
When critical metadata metrics, such as the Capacity Availability Index or Service Delay Prediction Index, degrade beyond a configured threshold, local BGP policy may treat the affected route as ineligible for traffic steering. This behavior is equivalent to BGP local policy declaring the route not eligible for route selection. This ensures that traffic is not routed to service instances that are not capable of processing the services, preserving the quality of service for critical applications.¶
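The weighted combination and the degraded-metric handling described above can be sketched in Python as follows; the weights, thresholds, metric names, and scoring function are all local-policy placeholders rather than values defined by this document.¶

   # Sketch: weighted combination of metadata and traditional BGP preference,
   # with degraded routes treated as ineligible for traffic steering.
   def route_preference(bgp_pref: float, metadata: dict, policy: dict):
       if metadata.get("capacity_availability", 100) < policy["min_capacity_pct"]:
           return None              # ineligible: capacity degraded beyond threshold
       if metadata.get("service_delay_prediction", 0) > policy["max_delay_index"]:
           return None              # ineligible: predicted delay beyond threshold
       w = policy["metadata_weight"]             # 0.0 .. 1.0, set by local policy
       meta_score = policy["score"](metadata)    # deployment-specific scoring
       return w * meta_score + (1.0 - w) * bgp_pref

   policy = {"min_capacity_pct": 20, "max_delay_index": 80, "metadata_weight": 0.7,
             "score": lambda m: m.get("capacity_availability", 0)}
   print(route_preference(100.0, {"capacity_availability": 60,
                                  "service_delay_prediction": 30}, policy))   # 72.0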
Example Scenarios for Policy Based Route Selection:¶
A BGP peer applies local policy to the route (prefix plus attributes) to select the best route and then uses tie-breaking based on [RFC4271]. This section simply provides three examples of how local policy might weigh the Metadata metrics during that policy selection.¶
Scenario 1: Local Policy Prioritizes Metadata Metrics Over Traditional BGP Attributes.¶
The local policy assigns a higher weight to metadata metrics when computing the preference for routes. The selection process follows these steps:¶
Scenario 2: Local Policy Weighs Metadata Metrics and Traditional BGP Attributes Equally.¶
The local policy assigns equal weight to metadata and traditional BGP attributes during preference computation. The selection process is as follows:¶
Scenario 3: Local Policy Prioritizes Traditional BGP Attributes Over Metadata Metrics¶
The local policy assigns a higher weight to traditional BGP attributes. The selection process follows these steps:¶
Equal Cost Multi Path (ECMP) in BGP Route Selection:¶
When the BGP decision process identifies multiple paths with equal preference after considering both Metadata Path Attributes and traditional BGP attributes, BGP can pass these paths to the forwarding engine to enable ECMP.¶
This Policy Based Metadata Integration approach enables network operators to incorporate Metadata Path Attributes into BGP route selection based on their specific operational goals and requirements, while maintaining compatibility with traditional BGP operations.¶
Route Churn Considerations¶
While the mechanism detailed in this document aims to provide dynamic metrics like Capacity Availability Index, Site Delay Prediction Index, Service Delay Prediction Index, and Raw Measurement to optimize path selection, it is essential to consider the broader implications of metric-induced churn. Particularly, in the context of routes used for BGP nexthop resolution (e.g., labeled unicast), frequent changes in these metrics can lead to significant churn not only for the prefixes carrying the data but also for dependent routes.¶
This behavior is analogous to the impacts observed with RSVP auto-bandwidth, which can introduce considerable instability within a network. Such route churn can propagate through the network, causing a cascade of UPDATEs and potential route flaps, thereby affecting overall network stability and performance.¶
To mitigate these effects, network operators should carefully manage the advertisement intervals of these dynamic metrics, ensuring they are set to avoid unnecessary churn. The default minimum interval for metrics change advertisement, set at 30 seconds, is designed to balance responsiveness with stability. However, in scenarios with higher sensitivity to route stability, operators may consider increasing this interval further to reduce the frequency of UPDATEs.¶
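One way to enforce such a minimum advertisement interval at the egress router is sketched below; the class and its hook into the BGP UPDATE machinery are illustrative assumptions, with the 30-second default taken from the text above.¶

   # Sketch: rate-limiting metric-change advertisements to dampen route churn.
   import time

   class MetricAdvertiser:
       def __init__(self, min_interval_s: float = 30.0):
           self.min_interval_s = min_interval_s
           self.last_sent = {}     # prefix -> time of the last metric advertisement
           self.pending = {}       # prefix -> latest metric value not yet advertised

       def on_metric_change(self, prefix: str, value) -> None:
           now = time.monotonic()
           last = self.last_sent.get(prefix)
           if last is None or now - last >= self.min_interval_s:
               self.send_update(prefix, value)
               self.last_sent[prefix] = now
               self.pending.pop(prefix, None)
           else:
               self.pending[prefix] = value   # coalesce; advertise when interval expires

       def send_update(self, prefix: str, value) -> None:
           print(f"advertise Metadata for {prefix}: {value}")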
Significant load changes at EC data centers can be triggered by short-term gatherings of UEs, like conventions, lasting a few hours or days. Therefore, a high metrics change rate can persist for hours or days.¶
The Metadata Path Attribute is an optional non-transitive BGP Path attribute that carries metrics and metadata about the edge services attached to the egress router. The Metadata Path Attribute, to be assigned by IANA, consists of a set of Sub-TLVs, and each Sub-TLV contains information for specific metrics of the edge services.¶
When more than one sub-TLV is present in a Metadata Path Attribute, they are processed independently. Suppose a Metadata Path Attribute can be parsed correctly but contains a Sub-TLV whose type is not recognized by a particular BGP speaker; that BGP speaker MUST NOT consider the attribute malformed. Instead, it MUST interpret the attribute as if that Sub-TLV had not been present. Logging the error locally or to a management system is optional. If the route carrying the Metadata path attribute is propagated with the attribute, the unrecognized Sub-TLV remains in the attribute.¶
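The receiver-side handling described above can be sketched in Python as follows, operating on already-parsed Sub-TLVs; the handler functions are deployment-specific placeholders, and the Sub-Type codes follow the registry in the IANA Considerations section.¶

   # Sketch: process recognized Sub-TLVs, skip unknown ones (never treat the
   # attribute as malformed), and keep all Sub-TLVs intact for propagation.
   RECOGNIZED_SUB_TLVS = {
       1: "site_preference_index",
       2: "site_physical_availability_index",
       3: "service_delay_prediction_index",
       4: "raw_measurement",
   }

   def process_metadata_attribute(sub_tlvs, handlers, log=None):
       """sub_tlvs: list of (sub_type, value) pairs already parsed from the attribute."""
       for sub_type, value in sub_tlvs:
           name = RECOGNIZED_SUB_TLVS.get(sub_type)
           if name is None:
               if log is not None:          # reporting is optional and off by default
                   log(f"ignoring unrecognized Sub-TLV type {sub_type}")
               continue                     # interpret as if the Sub-TLV were absent
           handlers[name](value)
       return sub_tlvs                      # unrecognized Sub-TLVs remain in the attribute

   handlers = {n: (lambda v, n=n: print(n, "=", v)) for n in RECOGNIZED_SUB_TLVS.values()}
   process_metadata_attribute([(1, 50), (99, b"\x00")], handlers, log=print)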
The edge service Metadata described in this document are only intended for propagating between ingress and egress routers of one single BGP Administrative Domain [RFC1136]. A single BGP Administrative Domain can consist of one AS or multiple ASes.¶
Only selected services used by UEs are considered 5G edge services. The 5G LDN is usually managed by one operator, even though the routers can be from different vendors.¶
The proposed edge service Metadata are advertised within the trusted domain of 5G LDN's ingress and egress routers. The ingress routers should not propagate the edge service Metadata to any nodes that are not within the trusted domain.¶
To prevent the BGP UPDATE receivers (a.k.a. ingress routers in this document) from leaking the Metadata Path Attribute by accident to nodes outside the trusted domain [ATTRIBUTE-ESCAPE], the following practice should be enforced:¶
BGP Route Filtering or BGP Route Policies [RFC5291] can also be used to ensure that BGP UPDATE messages with Metadata Path Attribute attached do not get forwarded out of the administrative domain. BGP route filtering [RFC5291] allows network administrators to control the advertisements and acceptance of BGP routes, ensuring that specific routes do not leak outside the intended administrative domain. Here are the steps to achieve this:¶
IANA is requested to assign a new path attribute from the "BGP Path Attributes" registry. The symbolic name of the attribute is "Metadata", and the reference is [This Document].¶
+=======+======================================+=================+
| Value | Description                          | Reference       |
+=======+======================================+=================+
| TBD1  | Metadata Path Attribute              | [this document] |
+-------+--------------------------------------+-----------------+
| TBD2  | Metadata Capability in BGP OPEN      | [this document] |
+-------+--------------------------------------+-----------------+¶
IANA is requested to create a new sub-registry under the Metadata Path Attribute registry as follows:¶
+========+=============================+===================+
|Sub-Type| Description                 | Reference         |
+========+=============================+===================+
|   0    | reserved                    | [this document]   |
+--------+-----------------------------+-------------------+
|   1    | Site Preference Index       |[this document:4.3]|
+--------+-----------------------------+-------------------+
|   2    | Site Physical Avail Index   |[this document:4.4]|
+--------+-----------------------------+-------------------+
|   3    | Service Delay Prediction    |[this document:4.5]|
+--------+-----------------------------+-------------------+
|   4    | Raw Measurement             |[this document:4.6]|
+--------+-----------------------------+-------------------+
|   5    | Service-Oriented Capability |[this document:4.7]|
+--------+-----------------------------+-------------------+
|   6    | Service-Oriented Available  |                   |
|        | Resource                    |[this document:4.8]|
+--------+-----------------------------+-------------------+
|   7    | AS-Scope                    |[this document:5.1]|
+--------+-----------------------------+-------------------+
| 8-65534| unassigned                  |                   |
+--------+-----------------------------+-------------------+
| 65535  | reserved                    | [this document]   |
+--------+-----------------------------+-------------------+¶
Changwang Lin¶
New H3C Technologies¶
China¶
Email: linchangwang.04414@h3c.com¶
Acknowledgements to Jeff Haas, Tom Petch, Adrian Farrel, Alvaro Retana, Robert Raszuk, Sue Hares, Shunwan Zhuang, Donald Eastlake, Dhruv Dhody, Cheng Li, DongYu Yuan, and Vincent Shi for their suggestions and contributions.¶
When data centers' detailed running status is not exposed to the network operator, historic traffic patterns through the egress routers can be utilized to predict the load to a specific service. For example, when traffic volume to one service at one data center suddenly increases by a large percentage compared with the past 24-hour average, it is likely caused by a larger-than-normal demand for the service. When this happens, another data center with lower-than-average traffic volume for the same service might have a shorter processing time for the same service.¶
Here are some measurements that can be utilized to derive the Service Delay Prediction for a service ID:¶
The Service Delay Prediction Index can be derived from LoadIndex/24Hour-Average. A higher value means a longer delay prediction. The egress router can use the ServiceDelayPred sub-TLV to indicate to the ingress routers the delay prediction derived from the traffic pattern.¶
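A minimal sketch of this derivation, assuming LoadIndex is the recent traffic volume toward the service and the scaling to an integer index is a local choice not defined by this document:¶

   # Sketch: Service Delay Prediction Index derived from LoadIndex/24Hour-Average.
   def service_delay_prediction_index(load_index: float, avg_24h: float,
                                      scale: int = 100) -> int:
       if avg_24h <= 0:
           return 0
       ratio = load_index / avg_24h     # > 1.0 means busier than the 24-hour average
       return round(ratio * scale)      # higher value => longer predicted delay

   # Example: traffic to a service is 1.8x its 24-hour average.
   print(service_delay_prediction_index(load_index=1800.0, avg_24h=1000.0))   # 180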
Note: The proposed IP layer load measurement is only an estimate based on the amount of traffic through the egress router, which might not truly reflect the load of the servers attached to the egress routers. They are listed here only for some special deployments where those metrics are helpful to the ingress routers in selecting the optimal paths.¶
Multiple instances of the same service could be attached to one egress router. When all instances of the same service are grouped behind one application layer load balancer, they appear as one single route to the egress router, i.e., the application load balancer's prefix. Under this scenario, the compute metrics for all those instances behind one application layer load balancer are aggregated under the application load balancer's prefix. In this case, the compute metrics aggregated by the Load Balancer are visible to the egress router as associated with the Load Balancer's prefix. However, how the application layer Load Balancers distribute the traffic among different instances is out of the scope of this document. When multiple instances of the same service have different paths or links reachable from the egress router, multiple groups of metrics from respective paths could be exposed to the egress router. The egress router can have preconfigured policies on aggregating various metrics from different paths and the corresponding policies in selecting a path for forwarding the packets received from ingress routers. The aggregated metrics can be carried in the BGP UPDATE messages instead of detailed measurements to reduce the entries advertised by the control plane and dampen route updates in the forwarding plane. Upon receiving packets from ingress routers, the egress router can use its policies to choose an optimal path to one service instance. It is out of the scope of this document how the measurements are aggregated on egress routers and how ingress routers are configured with the algorithms to integrate the aggregated metrics with network layer metrics.¶
Many measurements could impact and correspondingly reflect service performance. In order to simplify an optimal selection process, egress routers can have preconfigured policies or algorithms to aggregate multiple metrics into one simple metric for ingress routers. Though out of the scope of this document, an egress router can also have an algorithm to convert multiple metrics to network metrics, such as an IGP cost for each instance, to pass to ingress nodes. This decision-making process integrates network metrics computed by traditional IGP/BGP and the service delay metrics from egress routers to achieve a well-informed and adaptive routing approach. This intelligent orchestration at the edge enhances the service's overall performance and optimizes resource utilization across the distributed infrastructure. When the egress router has merged the compute metrics from the local sites behind it, it can include one or more aggregated compute metrics in the Metadata Path Attribute in the BGP UPDATE to the Ingress. Also, an identifier or flag can be carried to indicate that the metrics are merged ones. After receiving the routes for the Service ID with the identifier, the ingress would do the route selection based on pre-configured algorithms (see Section 3 of this document).¶
As the service metrics and network delays are in different units, here is an exemplary algorithm for an ingress router to compare the cost to reach the service instances at Site-i or Site-j.¶
                    ServD-i * CP-j                Pref-j * NetD-i
   Cost-i = min(w *(----------------) + (1-w) *(------------------))
                    ServD-j * CP-i                Pref-i * NetD-j¶
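A Python rendering of this example computation is given below, assuming ServD is the Service Delay Prediction Index, CP the capability/capacity value, Pref the Site Preference Index, NetD the network cost, and w a local-policy weight between 0 and 1; the ingress router would prefer the site with the lower resulting cost.¶

   # Sketch of the example weighted cost comparison between Site-i and Site-j.
   def site_cost(i: dict, j: dict, w: float) -> float:
       service_term = (i["serv_d"] * j["cp"]) / (j["serv_d"] * i["cp"])
       network_term = (j["pref"] * i["net_d"]) / (i["pref"] * j["net_d"])
       return w * service_term + (1.0 - w) * network_term

   site_a = {"serv_d": 120, "cp": 80, "pref": 10, "net_d": 5}
   site_b = {"serv_d": 150, "cp": 60, "pref": 10, "net_d": 8}
   # Lower cost wins: here Site-A is preferred over Site-B.
   print(site_cost(site_a, site_b, w=0.6))   # ~0.61
   print(site_cost(site_b, site_a, w=0.6))   # ~1.64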
When a set of service Metadata is converted to a simple metric, a decision process is determined by the metric semantics and deployment situations. The goal is to integrate the conventional network decision process with the service Metadata into a unified decision-making process for path selection.¶
Not all metadata attributes specified in this document are intended for use in every deployment. Each deployment may choose to consider only a subset of the available metadata attributes based on its specific service requirements.¶
- Deployment-Specific Attribute Selection:¶
A deployment may prioritize only certain metadata attributes relevant to its operational needs. For example, one deployment might only use the Service Delay Prediction Index for latency-sensitive applications, while another might focus solely on the Capacity Availability Index to manage resource availability.¶
- Influence on BGP Decision Process:¶
The edge service Metadata influences next-hop selection differently from traditional BGP metrics (e.g., Local Preference, MED). Unlike a general next-hop metric that can affect many routes, edge service Metadata selectively impacts optimal next-hop selection for specific routes configured to consider these service-specific attributes. This targeted influence allows for optimized path selection without disrupting broader route decisions.¶
- Handling Degraded Metrics (Policy-Based):¶
If a service-specific metric degrades beyond a configured threshold (e.g., the Service Delay Prediction Index exceeds an acceptable delay threshold or the Capacity Availability Index drops below a required level), the ingress router will treat that route as ineligible for traffic steering. This is similar to a BGP route withdrawal, where the degraded route is deprioritized or ignored, even if traditional BGP attributes would otherwise favor it. This ensures that traffic is directed only to service instances that meet the defined performance criteria.¶
- Fallback to Non-Metadata Routes:¶
If no suitable routes with the required metadata are available, the BGP decision process defaults to traditional attribute evaluation [RFC4271], ensuring consistent routing even when metadata-specific paths are absent.¶
This approach provides flexibility and adaptability in routing decisions, allowing each deployment to apply relevant metadata attributes and enforce performance thresholds for improved service quality.¶