Peer-reviewed article

Recent trends in MPLS networks: technologies, applications and challenges

2019; Institution of Engineering and Technology; Volume: 14; Issue: 2; Language: English

10.1049/iet-com.2018.6129

ISSN

1751-8636

Authors

Mohammad Azmi Ridwan, Nurul Asyikin Mohamed Radzi, Wan Siti Halimatul Munirah Wan Ahmad, Fairuz Abdullah, Md. Zaini Jamaludin, Mohd Nasim Zakaria

Topic(s)

Network Traffic and Congestion Control

Abstract

IET Communications, Volume 14, Issue 2, pp. 177-185. Review Article. Free Access.

Recent trends in MPLS networks: technologies, applications and challenges

Mohammad Azmi Ridwan (corresponding author, m.azmiridwan@gmail.com), Institute of Informatics and Computing in Energy (IICE), Universiti Tenaga Nasional, 43000 Kajang, Malaysia

Nurul Asyikin Mohamed Radzi, IICE and Department of Electrical & Electronics Engineering, College of Engineering, Universiti Tenaga Nasional, 43000 Kajang, Malaysia

Wan Siti Halimatul Munirah Wan Ahmad (orcid.org/0000-0001-6364-6341), IICE, Universiti Tenaga Nasional, 43000 Kajang, Malaysia

Fairuz Abdullah (orcid.org/0000-0002-1030-7554), IICE and Department of Electrical & Electronics Engineering, College of Engineering, Universiti Tenaga Nasional, 43000 Kajang, Malaysia

Md. Zaini Jamaludin, IICE and Department of Electrical & Electronics Engineering, College of Engineering, Universiti Tenaga Nasional, 43000 Kajang, Malaysia

Mohd Nasim Zakaria, Architecture and Governance, TNB ICT, Kuala Lumpur, Malaysia

First published: 01 January 2020, https://doi.org/10.1049/iet-com.2018.6129. Citations: 3.
Abstract

Multiprotocol label switching (MPLS) networks are packet-based networks that offer considerable advantages, including improved network utilisation, reduced network latency, and the ability to meet the quality of service and strict service level agreement requirements of any incoming traffic. A vast number of applications are now migrating to packet-based operation, which puts increased pressure on network providers to upgrade their systems. Innovations and improvements to MPLS are ongoing to ensure that such networks can cater to the ever-increasing bandwidth demand whenever required. This study provides a review of MPLS networks and their promising technologies, such as traffic engineering, protection and restoration, differentiated services, and the MPLS-transport profile (MPLS-TP), together with their applications. This work also reviews recent issues in MPLS networks and discusses the implementation of MPLS-TP networks in the power grid. A review of recent literature shows that researchers should be careful in proposing new protocols or designs for MPLS to ensure that the network achieves the most efficient and optimal performance. Furthermore, it can be concluded that although MPLS is a promising technology for future networks, there are challenges to overcome with regard to security and network flexibility, especially as far as migration to MPLS-TP is concerned.

1 Introduction

Circuit-based networks can no longer withstand traffic demand due to perpetually increasing bandwidth and delay-sensitive applications. Packet-based applications, such as voice over Internet protocol (VoIP), long-term evolution, and on-demand video, are becoming increasingly popular. Hence, current communication networks must be improved so that they can manage this traffic and fulfil their service level agreements (SLAs) [1].
However, network designers face challenges in optimising network performance to achieve the highest efficiency at a reduced cost. Multiprotocol label switching (MPLS), which has become the de facto standard for core network infrastructure, can fully optimise network resources and provide quality of service (QoS) treatment to traffic. MPLS is scalable, connection-oriented, and independent of any packet-forwarding transport technology. It also reduces the Internet protocol (IP) address look-up at each router and minimises network latency. MPLS improves packet forwarding in a network and overcomes the disadvantages of IP forwarding [2]. An MPLS network can decide the best forwarding path, allocate multiple services in the same network, and treat each traffic flow based on its QoS requirements.

Given the considerable benefits of MPLS networks, research is being conducted to ensure that high-bandwidth demand can be addressed. Several related reviews on MPLS networks [3-7] are summarised in Table 1. However, each of these studies focused on only one MPLS technology, whereas our review addresses broad-ranging MPLS technologies, including traffic engineering (TE), differentiated services (DiffServ), protection and restoration, and the MPLS-transport profile (MPLS-TP), in addition to its applications and recent issues. To the best of our knowledge, the present work is the first to discuss recent issues and trends among MPLS technologies.

Table 1. Recent review papers on MPLS networks

Authors             | Year | Reviewed area
Haddaji et al. [3]  | 2018 | technical challenges encountered by service providers in migrating to MPLS-TP networks
Suhaimy et al. [4]  | 2018 | recent MPLS-TP applications
Eugen [5]           | 2017 | performance of MPLS DiffServ
Kurimoto et al. [6] | 2017 | MPLS software-defined, radio-oriented layer-2 on-demand virtual private network services and network function virtualisation
Adewale et al. [7]  | 2016 | comparative simulation study of MPLS for latency and packet loss reduction over a wide area network

The organisation of this paper is as follows. Section 2 discusses the architecture, terminology and advantages of an MPLS network. Section 3 elaborates on the MPLS technologies, including TE, DiffServ, protection and restoration, MPLS-TP and the implementation of MPLS-TP in the power grid. Finally, Section 4 concludes this study.

2 Architecture and advantages of MPLS networks

The Internet Engineering Task Force (IETF) introduced MPLS in 1997 and identified the following issues for MPLS development to address:

(i) Improving scalability for network layer routing by using labels to aggregate forwarding information.

(ii) Improving flexibility in delivering routing services by using MPLS labels to identify traffic requiring special QoS treatment.

(iii) Implementing the label-swapping paradigm to optimise networks and thus enhance performance.

(iv) Simplifying router integration with cell-switching-based technology using common addressing, routing and management control.

Fig. 1 shows the conceptual design of an MPLS network. Routers A and C are label edge routers (LERs). Depending on whether an LER is the source or the destination, it is known as an ingress LER or an egress LER, respectively. The ingress LER assigns labels to incoming packets and determines which forward equivalent class (FEC) these packets belong to. The ingress LER then decides the corresponding egress LER and computes the best path for the packets to route through the MPLS core network based on the FEC. Conversely, the egress LER removes the labels from the packets and forwards them using the normal IP forwarding procedure.

Fig. 1: General architecture of the MPLS network

The core network has label switch routers (LSRs) connected in either a ring or mesh topology.
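The ingress behaviour described above, classifying a packet into a FEC and then binding it to a label and a precomputed path, can be sketched as follows. This is an illustrative model only: the FEC names, label values and table contents are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of ingress-LER classification: map a packet to a
# forward equivalent class (FEC), then look up the label and LSP bound to it.

def classify_fec(dst_prefix: str, service: str) -> str:
    """Map (destination prefix, service type) to a FEC name (illustrative rules)."""
    if service == "voip":
        return "FEC-EF"          # delay-sensitive class
    if dst_prefix.startswith("10.1."):
        return "FEC-A"
    return "FEC-BE"              # best effort

# Each FEC is bound to an outgoing label and a precomputed LSP (list of routers).
FEC_TABLE = {
    "FEC-EF": {"label": 100, "lsp": ["A", "B", "E"]},
    "FEC-A":  {"label": 200, "lsp": ["A", "C", "D", "E"]},
    "FEC-BE": {"label": 300, "lsp": ["A", "B", "E"]},
}

def ingress_forward(dst_prefix: str, service: str) -> dict:
    """Return the forwarding decision the ingress LER would make."""
    fec = classify_fec(dst_prefix, service)
    entry = FEC_TABLE[fec]
    return {"fec": fec, "label": entry["label"], "lsp": entry["lsp"]}
```

The core routers never repeat this classification; they act only on the label, which is the point of the FEC abstraction.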
The LSR reads the label assigned by the ingress LER and then swaps it with another label that determines the next LSR to which the packet is forwarded. This process is repeated until the packet arrives at the destination, or egress LER. The forwarding decision is based on the fixed-format header. The path computed by the ingress LER from the source to the destination is called a label switch path (LSP). An LSR does not store any route as per the IP forwarding scheme, thereby improving the scalability of the network. Different types of traffic can share a single LSP, and given sufficient network resources, one LSP can accommodate all traffic regardless of type. However, for critical applications that require strict delay and bandwidth treatment, the network defines TE or traffic policies. This process is further explained in the next section.

Fig. 2 illustrates an example of an MPLS header. The header is divided into four segments. The first segment is the label, which has a size of 20 bits and is indexed into an MPLS forwarding table. The next segment holds the experimental (EXP) bits, where the class of service (CoS) is specified for each packet. With many types of applications tunnelling through the network, the EXP segment is crucial for determining the QoS treatment that will be provided to the traffic. The next segment is the bottom-of-stack (S) bit. This field is used when more than one label is assigned to a packet. Finally, the time-to-live (TTL) segment is utilised for path tracing; its value decreases at each hop until the packet reaches the destination. The packet is discarded when the TTL value becomes zero. In an MPLS network, the IP headers on the packets remain intact, but they are ignored by the LSRs. Instead, each LSR inspects only the incoming label, consults its labelling table, and immediately swaps the label with a new outgoing label.

Fig. 2: Segments in the MPLS header: Label, EXP, S, and TTL

At the forwarding plane, labels are assigned at each LER and LSR to find the best LSP. The control plane disseminates the information carried by the label. Label information can be distributed in an MPLS network through two approaches. The first uses the label distribution protocol (LDP), which was developed specifically to distribute labels. The second approach extends an existing protocol, the resource reservation protocol (RSVP). LDP offers ease of configuration, session maintenance and reliable transport. The extended RSVP allows the creation and maintenance of LSPs together with the associated bandwidth reservations. LDP and RSVP each have advantages and disadvantages: LDP is preferred in terms of initial configuration and scalability, whereas RSVP is better suited for migrating from a data link layer to an MPLS network. Precise planning is required to ensure the best performance of the MPLS network.

2.1 Advantages of MPLS networks

The MPLS mechanism can tunnel multiple types of traffic through the core network. The tunnel is the path along which traffic flows in the MPLS core network. Tunnelling is a powerful tool because only the ingress and egress routers need to know the content of the traffic carried through the tunnel; the details are hidden from the routers at the core. With MPLS tunnelling, traffic can be explicitly routed by following traffic policies. Tunnels also provide additional protection against data spoofing, given that packets can only be injected at the ingress routers. An MPLS network offers expenditure reduction by allowing network operators to control only a single network for all service types [8]. This feature is important given that emerging applications offered by local providers are becoming increasingly dense in terms of traffic and bandwidth consumption.
Finally, the MPLS encapsulation overhead, at only 4 bytes per MPLS header, is small and reduces the latency and workload in the core network [8].

3 MPLS technologies

In this section, MPLS technologies, including MPLS-TE, protection and restoration, DiffServ, MPLS-TP and the implementation of MPLS-TP in the power grid network, are discussed.

3.1 MPLS traffic engineering (MPLS-TE)

MPLS-TE is implemented in the network to avoid congestion and improve QoS. IETF RFC 2702, Section 2.0 states that MPLS-TE aims to minimise packet loss and delay, maximise throughput and support the enforcement of SLAs [8]. Minimising congestion is the primary objective of TE. Congestion typically occurs under two conditions, namely, insufficient network resources and inefficient mapping of traffic onto available resources. Network congestion can be addressed by expanding the network capacity or by classical congestion control, such as limiting the data rate, flow control, queuing management, and schedule-based control [8].

TE is useful in rerouting traffic to ensure that data are transported effectively [9]. In the case of link or node failures in an MPLS network, TE verifies that the affected traffic can still reach its destination [10]. TE determines that a specific path has the required characteristics in accordance with QoS. For instance, high-priority applications, especially protection systems, are routed via a path that is less congested than others to minimise delay. Without congestion, data loss and delay are reduced, and throughput is improved. Consequently, enhanced services can be provided to consumers. Local providers can take advantage of MPLS-TE to offer guaranteed bandwidth services that allow clients to have a certain amount of bandwidth available when required. However, the following requirements need to be considered by network providers when setting up TE.

(i) The traffic must be forwarded along a predefined path (explicit routing) [11].
(ii) The utilisation of bandwidth resources must be improved [12].

(iii) When resource contention occurs, control must be in place over the resource [13].

IETF RFC 3031 provides two options for LSP route selection: hop-by-hop routing and explicit routing [14]. Hop-by-hop routing allows each node to choose the next hop for each FEC's traffic independently. In explicit routing, the ingress or egress LSR specifies the entire LSP [8]. Explicit routing can be achieved by implementing RSVP in the network, which enables the LSP to establish the MPLS forwarding state along the path defined at the source.

MPLS-TE uses LSP priorities marked in the header to identify which LSPs are more important than others. Consequently, the network can preempt resources from low-priority LSPs and guarantee that high-priority LSPs will always be transported first. A high-priority LSP is established along the shortest and least congested path to ensure maximum throughput and minimum delay. Finally, when LSPs need to be rerouted due to link failure, high-priority LSPs have improved chances of finding alternative paths as quickly as possible.

An example based on the topology in Fig. 3 provides a more thorough understanding of the MPLS-TE process. Supposing Client 1 were to transmit to LSR E, two possible paths can be used: A-B-E and A-C-D-E. From the operator's point of view, path A-B-E is preferred because it has fewer hops. However, if link B-E is congested and has high latency, the traffic will be rerouted via path A-C-D-E. This scenario shows that MPLS does not simply select the path with the lowest cost, i.e. the smallest number of hops; instead, it chooses the best path for the traffic based on QoS requirements.

Fig. 3: Example of MPLS network topology for the MPLS-TE explanation

3.1.1 Issues and related works on MPLS-TE

Kumar et al. [2] proposed a path protection scheme using MPLS-TE for IEEE 30-bus system communication in smart grids to improve network resiliency, especially for sensitive protection data, such as SCADA, data scanning, and the system refresh rate for power utility networks. MPLS-TE is used, and prompt recovery from path failure is validated via OPNET. Their simulation results showed a reroute time of <10 ms with TE. SCADA data are crucial and delay-sensitive; even a microsecond of delay can cause network failure. The implementation of TE in the MPLS network can ensure that protection applications are also treated according to their QoS requirements.

In multilayer networks, MPLS usually sits on top of an optical transport network. However, each layer is usually operated by a different provider, and information exchange is limited, thereby leading to network degradation. An agreement on the type of information shared between the layers is crucial for significantly improving network performance. Therefore, [15] proposed dynamic multicast traffic grooming in MPLS over an optical multilayer network. A data-learning scheme was utilised on the IP/MPLS layer for logical link cost estimation, and a lightpath fragmentation-based method was used on the wavelength division multiplexing layer to improve resource sharing in the grooming process. Network performance can thus be greatly improved by managing traffic in the network.

Using TE to optimise bandwidth in the network is not sufficient when delay is not considered. Applications in the network may be delay-tolerant, but some have strict delay requirements. However, only a few related works on TE have focused on both bandwidth and delay requirements. Thus, Soorki and Rostami [16] presented a new bandwidth- and end-to-end delay-constrained routing algorithm that uses data of the ingress and egress node pair in the network.
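To make the idea of bandwidth- and delay-constrained path selection concrete, here is a generic exhaustive-search sketch. It is not the algorithm of [16]; the topology and link values are invented, loosely modelled on the Fig. 3 example where B-E is the short but congested link.

```python
def feasible_paths(graph, src, dst, min_bw, max_delay, path=None):
    """Enumerate simple paths meeting both constraints.
    graph[u][v] = (bandwidth, delay); exhaustive search, fine for small networks."""
    path = path or [src]
    node = path[-1]
    if node == dst:
        yield path
        return
    for nxt, (bw, delay) in graph[node].items():
        if nxt in path or bw < min_bw:
            continue                      # loop avoidance / bandwidth constraint
        # prune if the accumulated delay already exceeds the bound
        d = sum(graph[a][b][1] for a, b in zip(path, path[1:])) + delay
        if d <= max_delay:
            yield from feasible_paths(graph, src, dst, min_bw, max_delay, path + [nxt])

def best_path(graph, src, dst, min_bw, max_delay):
    """Among feasible paths, pick the one with the smallest total delay."""
    def total_delay(p):
        return sum(graph[a][b][1] for a, b in zip(p, p[1:]))
    paths = list(feasible_paths(graph, src, dst, min_bw, max_delay))
    return min(paths, key=total_delay) if paths else None

# Invented topology: link values are (bandwidth, delay); B-E is congested.
NET = {
    "A": {"B": (100, 2), "C": (100, 3)},
    "B": {"E": (100, 50)},
    "C": {"D": (100, 3)},
    "D": {"E": (100, 3)},
    "E": {},
}
```

With a 20-unit delay bound, the two-hop path A-B-E is rejected and A-C-D-E is chosen, mirroring the behaviour the example describes: the fewest hops is not necessarily the best path.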
3.1.2 Software-defined networking for MPLS-TE

Software-defined networks (SDNs) are emerging due to their flexibility and programmability. With the separation of the control plane from the data plane, the network can be managed and innovated easily through programming. This feature allows local providers to provision and monitor effectively and improves network agility [17]. A centralised single controller manages all the forwarding devices; multiple controllers can be implemented for networks that are too complex for one controller to handle [18]. The main goal of SDN is to give network providers the flexibility to control the flow of data through the network. This enables any traffic policy or TE to be executed with full flexibility rather than being limited to the fixed schemes provided by vendors [19].

Traffic in an MPLS network is normally dynamic and can become congested at any point in the network. TE optimisation over a single traffic matrix has limitations, especially when many applications are forwarded in the network, because a single traffic matrix can have large measurement errors and cannot adequately depict traffic fluctuations. Moreover, large-scale networks make it a major challenge for network management to provide QoS guarantees and perform optimisation. The integration of MPLS-TE with SDN can fully optimise the network. Guo et al. [20] presented a multiple-traffic-matrix approach to solve the large measurement errors that occur when a single traffic matrix is used. Bahnasse et al. [21] presented a solution using SDN to manage complex large-scale networks. The TE/SDN network architecture can effectively manage and provide QoS requirements for multiple service traffic. The reliability of the proposed model is reflected in its ability to achieve high-quality VoIP and video with acceptable delay for HTTP response pages. A related issue in MPLS-TE is the static bandwidth reservation mechanism of RSVP.
The RSVP mechanism in the control plane reserves the same bandwidth at each hop along the tunnel and ignores differences in the available bandwidth of other links. This rapidly results in bandwidth exhaustion at the congested link even while other links are underutilised. TE/SDN can solve this problem by providing non-uniform bandwidth reservation to further improve the load balancing and resource utilisation of the network [22]. By allowing dynamic bandwidth reservation, more MPLS tunnels can be computed than under uniform reservation.

An important feature of the MPLS network is that packets are forwarded by performing a label look-up in the labelling table at each LSR. Each application has different EXP bits, and the ingress LER decides on the best path based on the priority level. However, as bandwidth and complexity increase, a shortage of MPLS labels will occur. Huang et al. [23] claimed that label consumption is expanding rapidly, thereby leading to management complexity, increased operational and capital expenditure, increased table look-up latency, and reduced performance and scalability. The authors therefore proposed a method for label space reduction in the MPLS network using a hybrid MPLS/OpenFlow network scheme via LSP multiplexing. This goal was achieved using label stacking and the TTL bits to control packet switching between different LSPs. As a result, traffic with different sources and destinations may share the same LSP, thereby mitigating the label space problem with a transparent topology.

According to the abovementioned studies, congestion may still occur even when TE is implemented in a network. Therefore, additional research is needed to improve network traffic optimisation. Having only a hardware-based MPLS network restricts local providers to the fixed traffic policies provided by vendors.
However, SDN is fully programmable and offers full flexibility; consequently, researchers can vary traffic loads and other performance evaluation parameters such that innovation can be performed easily. Even with the benefits of separating the control and data planes, as implemented in SDN or network virtualisation, one shortcoming still exists: the lack of rapid and reliable implementation prevents the network from growing to its desired capabilities. In an effort to improve this, Mazhin et al. [24] integrated MPLS with network virtualisation. This architecture can expand Internet flexibility and pave the way for the development and commercialisation of network virtualisation and next-generation MPLS. Thus, the integration of MPLS with SDN can substantially impact the future of the telecommunications industry.

3.2 MPLS protection and restoration

All traffic in the MPLS network must be delivered with zero packet loss and at low latency [25]. This requirement is due to the bandwidth- and delay-sensitive content of important applications, especially protection data. The network must not suffer any data discrepancy between the source and destination. This creates a need for a protection and restoration mechanism, which is crucial in promptly handling any failure [26]. Traffic with strict SLA requirements, such as video and VoIP, has stringent tolerances on reliability and traffic loss. In an MPLS network, immediate recovery after a failure is essential, especially for multiservice networks with applications of different priority levels. MPLS fast reroute (FRR) was introduced by Cao et al. [25] to provide a guarantee for MPLS tunnels in the event of a failure; the concept is the same as that offered by synchronous optical networking (SONET) automatic protection switching (APS).
The primary difference between FRR and SONET APS is that FRR can consistently provide a relatively small recovery time because the recovery decision is made locally. The efficiency of the network's recovery depends on how quickly the failure is detected and the traffic is switched to an alternative path. This dependency shows that rapid failure detection is a vital component of MPLS protection. The first step in providing recovery is detecting the failure as soon as it occurs. This can be done either via hardware-based methods, such as using packet-over-SONET/synchronous digital hierarchy (SONET/SDH), or non-hardware-based techniques, such as the implementation of an algorithm at a higher layer in the network [27]. Effective fault detection and fault notification are crucial in providing reliable MPLS protection [27].

Protection comes in two forms: end-to-end protection and FRR. End-to-end protection, or path protection, is commonly used in network deployments. Fig. 4 shows that LSP protection is achieved using a primary and a backup LSP. The backup LSP takes over the traffic in case the primary LSP link fails. Upon receiving the RSVP error at the client's node, the traffic is switched from the primary LSP to the backup LSP [28]. One shortcoming of this protection scheme is that traffic continues to be transmitted over the failed primary LSP until the RSVP error reaches the head end, thereby causing more delay and data loss. Nonetheless, this option is promising because it can provide accurate information about where the traffic will flow following the failure. However, the backup LSP path may not avoid the primary router (Router 1). This limitation may not provide meaningful protection in the event that Router 1 fails and both backup and primary LSPs are compromised. Thus, path diversity is also a vital issue.

Fig. 4: End-to-end protection using the backup LSP

The next protection option is FRR, which aims to minimise delay in the event of traffic failure. Fig. 5 illustrates that the traffic at the failed link is rerouted, instead of protection being applied at the head end of the entire path. The advantage of FRR is that the network can choose which resources to protect. In the event of failure, protection can be applied promptly, and traffic is forwarded to the rerouted path, which is computed and signalled prior to the failure. Another advantage is that switching time can be improved using this mechanism. Local protection comes in four variants:

(i) Link protection
(ii) Node protection
(iii) One-to-one protection
(iv) Facility protection

Fig. 5: FRR protection using the detour/bypass LSP

Link protection refers to the ability to protect traffic forwarded along the LSP when a link on the LSP fails. To protect against link failure, a backup path is set up around the link for one-to-one protection. Link failure is the most common type of failure in a network. A link might fail when the link itself has a problem or when the node at the other link-end fails, thereby disconnecting the entire interconnected link. All four local protection variants have their own advantages and shortcomings; however, no protection scheme in the literature covers all four [8]. To ensure rapid protection, a backup path must be ready to forward traffic as soon as the failure is detected. To achieve this, all backup paths must be computed and signalled beforehand, and the forwarding state must be set up for switchover. The forwarding state must be placed at the head and tail ends of the backup tunnel, i.e. the point of local repair (PLR) and the merge point (MP), to enable the forwarding of traffic into the backup at the PLR and back to the main LSP at the MP [8].
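The value of precomputing and pre-signalling the backup state can be sketched as follows: because the bypass is installed before any failure, the switchover at the PLR is a single local table flip rather than a head-end recomputation. The class layout and router names here are hypothetical.

```python
# Hypothetical sketch of the pre-installed local-repair state at a
# point of local repair (PLR).

class PLR:
    def __init__(self, primary_next_hop, bypass_path):
        self.primary = primary_next_hop
        self.bypass = bypass_path      # computed and signalled before any failure
        self.link_up = True

    def next_hops(self):
        """Where traffic is forwarded right now."""
        if self.link_up:
            return [self.primary]
        return self.bypass             # immediate local switchover, no signalling

    def detect_failure(self):
        """E.g. triggered by a loss-of-signal alarm from SONET/SDH."""
        self.link_up = False

plr = PLR(primary_next_hop="R3", bypass_path=["R6", "R3"])
assert plr.next_hops() == ["R3"]       # normal operation
plr.detect_failure()
assert plr.next_hops() == ["R6", "R3"] # traffic rerouted via the detour
```

Nothing is computed at failure time; detection speed is therefore the dominant term in the recovery time, as the text argues.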
In the MPLS network, LSRs forward packets through label swapping, and rerouting decisions are mapped into MPLS labels for protection. Thus, the labels of traffic arriving over the backup tunnel must be the same as those that would have arrived over the failed link. For example, Fig. 5 shows that the label of data to be transmitted to Router 3 via Routers 2 and 6 will be the same. To ensure that traffic arrives at the MP (Router 3) with the correct label, the backup tunnel label must be pushed on top of the protected LSP label at the PLR (Router 2), and penultimate hop-popping (PHP) must be performed on the backup tunnel label before the MP.

Network scalability enables easy expansion and reduces the inconvenience for network providers in delivering a communication network to newly developed areas. In terms of local protection, scalability must be addressed: stronger protection means additional configuration effort that involves intensive labour and manual path computations.
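The label-stack manipulation described above, pushing the backup-tunnel label at the PLR and popping it via PHP before the MP, can be sketched as follows; the label values are invented for illustration.

```python
# Hypothetical sketch of FRR label-stack handling. The stack is modelled as a
# list with the top of stack first.

def push(stack, label):
    """At the PLR: push the backup-tunnel label on top of the protected label."""
    return [label] + stack

def pop(stack):
    """At the penultimate hop of the bypass: PHP removes the top label."""
    return stack[1:]

protected = [310]               # label the MP (Router 3 in Fig. 5) expects
at_plr = push(protected, 905)   # backup-tunnel label 905 pushed at the PLR
# ... packet traverses the bypass; penultimate hop performs PHP ...
at_mp = pop(at_plr)
assert at_mp == [310]           # MP sees exactly the label it would have seen
```

This is why the MP needs no special state for the failure case: after PHP, the packet is indistinguishable from one that arrived over the original link.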
