Open-access article, peer reviewed

Studying impacts of communication system performance on dynamic stability of networked microgrid

2020; Institution of Engineering and Technology; Volume: 3; Issue: 5; Pages: 667-676; First published: 30 June 2020; Language: English

10.1049/iet-stg.2019.0303

ISSN

2515-2947

Authors

Bishnu Bhattarai (corresponding author, bishnu.bhattarai@pnnl.gov), Laurentiu Marinovici, Md Touhiduzzaman, Francis K. Tuffner, Kevin P. Schneider, Jing Xie, Priya Thekkumparambath Mana, Wei Du, Andrew R. Fisher
Affiliation (all authors): Electricity Infrastructure and Building Division, Pacific Northwest National Laboratory, Washington, USA

Topic(s)

Smart Grid Energy Management

Abstract

The development of smart grid technologies has resulted in increased interdependence between power and communication systems. Many of the operations in the existing power system rely on a stable and secure communication system. For electrically weak systems and time-critical applications, this reliance can be even greater, and a small degradation in communication performance can degrade system stability. However, despite the inter-dependencies between power and communication systems, only a few studies have investigated the impacts of communication system performance on power system dynamics. This study investigates the dependence of power system dynamic operations on communication system performance. First, a detailed, dynamic networked microgrid model is developed in the GridLAB-D simulation environment, along with a representative multi-traffic, multi-channel, multi-protocol communication system model developed in the network simulator (ns-3). Second, a Hierarchical Engine for Large-Scale Infrastructure Co-Simulation (HELICS) based framework is developed to co-simulate the microgrid dynamics, its communication system, and a microgrid control system. The impact of communication system delays on the dynamic stability of networked microgrids is evaluated for a loss of generation using three use-cases. While the example use-cases examine microgrid applications and the impact on resiliency, the framework can be applied to all levels of power system operations.

1 Introduction

One method to facilitate moderate to high penetrations of solar photovoltaic (PV) generation is the coordinated operation of energy storage. This is one example of a growing array of technologies that depend on a communications infrastructure for routine operations. Additionally, some power system applications are beginning to require high performance from the communication system for basic functionality.
The impact of degraded communication system performance on the power system can be even more pronounced for weak grids, such as a low-inertia grid like an islanded microgrid [1]. For instance, a loss of solar PV in an islanded microgrid could require an immediate control action to compensate for the generation loss. Because both the monitoring of the PV system and the corresponding control, such as dispatch of a battery energy storage system (BESS) to compensate for the loss of the PV system, require a reliable communication system, degradation in communication system performance can even threaten the stability of the microgrid. As increasing resiliency is one of the key goals of the modern power system, and microgrids are one of the effective resources for improving resiliency, investigating the dependence of microgrid operations on the microgrid communication system is needed [2]. Microgrids improve system resiliency by operating in an islanded mode (i.e. disconnected from the bulk power system) to serve critical loads when the bulk power system is not available. However, the performance of the microgrid can depend on the communication system performance, thereby requiring a very reliable communication system. Therefore, even though the increased deployment of industrial control systems (ICSs) provides several advantages, such as improved observability and controllability of the power system, they may not be suitable for dynamic control operations. Determining whether an ICS architecture, such as supervisory control and data acquisition (SCADA), can be used for dynamic control requires a detailed investigation of the interactions between power and communication systems. Such integrated power and communication studies help utilities to: (i) better understand the impact of the communication system on power system operations, (ii) design and deploy the proper communication technologies, and (iii) prepare to minimise the impacts of communication system contingencies.

In recent years, the results of several investigations [1, 3-7] of power and communication system interactions have been published. However, most of those studies were either power or communication focused. Power system-focused studies usually oversimplify the communication system by assuming an ideal system with no communication delays, which is unrealistic. In some literature, the communication system is modelled with a slightly increased level of detail by including a predefined signal transport delay [8]. However, that approach ignores the dynamics of traffic in the communication system. Because power system applications may share communication channels with other users (e.g. Internet service providers), data traffic on such shared channels varies greatly and cannot be modelled accurately using predefined transport delays [4]. In some recent studies, the modelled smart grid communication system considered different communication technologies and different channels [4]; however, a single isolated control system was assumed. None of those studies modelled simultaneous data traffic from different applications. There are some power and communication system studies that modelled the smart grid communication in detail, considering multi-channel and multi-protocol communication systems [9-12]. However, the majority of those communication-intensive studies oversimplified the power system by assuming an ideal power system and often neglecting power flow, dynamics, and power quality.
There are some smart-grid-communication-focused studies that better model the power system by considering power flow [13], but those studies consider only the steady-state model of the power system. Because the operating time-scales of steady-state power system applications range from seconds to minutes, small variations in communication latency in the millisecond range may not impact steady-state performance. Very few studies have investigated the interaction between power system dynamics and communication systems [14, 15]. The authors in [14, 15] investigated the interaction between power system dynamics and the communication system in a cyber-physical testbed. However, the incorporation of different data traffic (e.g. AMI, SCADA), shared channels, and multiple protocols together with a dynamic power system model is missing. For an urban microgrid scenario, shared-channel traffic is very likely, due to economic drivers, and has the potential to impact the overall latency and reliability of the microgrid communication system. To the best of the authors' knowledge, previous publications have not focused on that level of detail for both the power and communication systems together, particularly for microgrid applications.

Even though there is a strong need for studying power system dynamics and communication systems together, none of the existing tools can simulate both the power and communication systems in the same application at a uniform level of detail. Usually, simulation tools are designed for a specific function, such as GridLAB-D for power flow and electromechanical dynamics, PSCAD for electromagnetic transients, and tools such as the network simulator (ns-3) and OMNeT++ for communication network simulation [16]. Even though a specific simulator can model and simulate the power system or the communication system in detail, the existing power and communication system simulators are non-interoperable. One of the key challenges to co-simulating the power and communication systems stems from their different operational time-scales: the communication system operates at nanosecond time scales, whereas the power system dynamics may be modelled at millisecond time scales. This adds complexity and scalability issues to custom-built co-simulation platforms as the system grows [4, 17]. Some recent efforts have focused on the development of platforms to enable co-simulation of multi-time-scale and multi-domain simulators [18-23]. The cyber-physical testbed presented in [18] provides a conceptual platform for co-simulating multi-time-scale simulators, and the integrated grid system modelling (IGSM) presented in [19] provides integrated transmission and distribution simulations. However, those co-simulation platforms were designed to co-simulate same-domain simulators. Modern co-simulation environments, such as the Framework for Network Co-Simulation (FNCS) [20] and the Hierarchical Engine for Large-Scale Infrastructure Co-Simulation (HELICS) [21, 22], are advanced co-simulation platforms capable of co-simulating multi-domain, multi-time-scale simulators.

This paper investigates the inter-dependencies between power system dynamics and communication systems, and the impact on the resiliency of a networked microgrid. The key contributions of this work include:

(i) Development of a multi-traffic, multi-channel, multi-protocol communication system for a networked microgrid, in ns-3.
(ii) Development of a HELICS-based co-simulation framework to co-simulate microgrid dynamics, the communication system, and a microgrid control system.

(iii) A detailed study, using the HELICS-based co-simulation framework, of the impact of varying communication delays on the dynamic stability of a networked microgrid system.

(iv) An example investigation of a deployed technology (distributed load control devices) using the detailed co-simulation platform.

The remainder of this paper is structured as follows. Section 2 presents the HELICS-based co-simulation framework. The power and communication system models are reported in Sections 3 and 4, respectively. Section 5 presents the simulation results for different use-cases, and Section 6 contains the concluding comments.

2 Co-Simulation framework

This section presents the proposed HELICS-based power and communication co-simulation framework, a brief overview of the different simulators (hereafter called federates), and their time coordination and data exchange management. Fig. 1 illustrates a high-level schematic of the proposed framework.

Fig. 1: High-level co-simulation architecture

2.1 Co-simulation federates

As shown in Fig. 1, the proposed co-simulation framework consists of three federates: GridLAB-D, Python, and ns-3. GridLAB-D is used for power system modelling, ns-3 is used for communication system modelling, and Python is used for control system modelling.

2.1.1 GridLAB-D

GridLAB-D is a simulation environment with a range of power distribution system simulation capabilities [24]. In addition to time-series and dynamic-simulation capabilities, it contains highly detailed end-use load models. In this study, GridLAB-D is used to develop dynamic models of the microgrid system, including models of the inverters and diesel generators (DGs).

2.1.2 ns-3

ns-3 is a discrete-event simulator that is used to model both wired and wireless communication systems. It has rich libraries of communication channels, technologies, and protocols, and is used here to model the multi-channel, multi-traffic, multi-protocol microgrid communication system [25].

2.1.3 Python

Python is an interpreted, high-level, general-purpose programming language. It uses an object-oriented approach that provides an easy platform for building agents [26]. In this study, Python is used for modelling control systems, including a microgrid controller (MC).

2.2 HELICS co-simulation platform

HELICS is an open-source co-simulation platform that can simulate and coordinate large numbers of off-the-shelf federates, including, but not limited to, electric transmission systems, electric distribution systems, communication systems, market models, and end-use loads [22]. It facilitates data exchanges and time coordination among a large number of multi-time-scale and multi-domain federates. Because HELICS provides a rich set of application programming interfaces (APIs) for other languages, including Python, C, Java, and MATLAB, it can easily co-simulate most federates that support those APIs. The HELICS co-simulation framework uses a layered architecture, as presented in [21], to optimise performance. It consists of five layers: platform, core, application, simulators, and user interface. From the user perspective, the core and simulator layers require the most interaction. The core layer provides time management and data exchanges for both time-series and discrete-event simulators. The simulator layer provides a mechanism to define standardised data exchange mechanisms (e.g. variable naming, types, timing, synchronisation) for different simulators. A generic description of the overall steps for HELICS co-simulation is presented in Algorithm 1 (see Fig. 2).

Fig. 2: Algorithm 1: Overall steps of HELICS co-simulation
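To make the federate workflow of Algorithm 1 concrete, the following is a minimal sketch of a Python value federate built with the HELICS Python bindings. The federation keys, core type, time step, and the placeholder control law are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch of a Python value federate in a HELICS federation, following
# the roles described in Section 2. Keys, time step, and the placeholder
# control law are assumptions for illustration only.
import helics as h

# Create and configure the federate
fedinfo = h.helicsCreateFederateInfo()
h.helicsFederateInfoSetCoreTypeFromString(fedinfo, "zmq")
fed = h.helicsCreateValueFederate("microgrid_controller", fedinfo)

# Subscribe to a DER measurement published by the GridLAB-D federate, and
# register a dispatch publication that is routed back through the ns-3 federate.
sub_pv = h.helicsFederateRegisterSubscription(fed, "gridlabd/pv1_output_power", "W")
pub_bess = h.helicsFederateRegisterGlobalPublication(
    fed, "controller/bess1_dispatch", h.helics_data_type_double, "W")

h.helicsFederateEnterExecutingMode(fed)

step, stop_time = 0.001, 10.0   # 1 ms controller step, 10 s study window
granted_time = 0.0
while granted_time < stop_time:
    # Ask HELICS for the next time grant; the broker coordinates this request
    # with the GridLAB-D and ns-3 federates (time-advance step of Algorithm 1).
    granted_time = h.helicsFederateRequestTime(fed, granted_time + step)
    pv_power = h.helicsInputGetDouble(sub_pv)             # upstream measurement
    dispatch = max(0.0, 50e3 - pv_power)                  # placeholder control law
    h.helicsPublicationPublishDouble(pub_bess, dispatch)  # downstream control signal

h.helicsFederateFinalize(fed)
h.helicsFederateFree(fed)
h.helicsCloseLibrary()
```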
2.3 Time coordination and data exchange management

The core layer of the HELICS co-simulation platform is used to coordinate time among GridLAB-D, Python, and ns-3. The overall time coordination among these federates is done according to the last step presented in Algorithm 1 (Fig. 2). To understand the data exchanges among the federates, the type of data that needs to be exchanged between the resources modelled in GridLAB-D and the Python-based MC must be known. The MC monitors complex power from all distributed energy resources (DERs) and then sends a control signal (active power) back to the dispatchable DERs. In both cases, the signal goes through the communication channel modelled in ns-3. Assuming Δt_G, Δt_P, and Δt_N are the simulation time steps of the GridLAB-D, Python, and ns-3 simulators, respectively, a message signal X originating at time t in GridLAB-D reaches the MC at the latest by t + τ_up. Here, τ_up is the delay experienced by the communication signal from the point of transmission (a DER in GridLAB-D) to the MC. Similarly, the control signal from the MC to the DERs is not instantaneous. A message signal L originating at time t′ in the MC reaches the DERs modelled in GridLAB-D at the latest by time t′ + τ_down. Here, τ_down is the time taken by the signal from the MC to the DER. The delays τ_up and τ_down are the upstream and downstream signal delays between the points of transmission and reception. Note that, depending on the communication architecture, τ_up and τ_down can be significantly different.

3 Power system model

This section presents the models of the networked microgrid and the control system. Operation of networked microgrids is considered, including dynamic models of inverters and DGs. In addition, Grid Friendly Appliance (GFA) devices [27] are deployed for an under-frequency load shedding (UFLS) scheme, as part of the primary frequency response [28, 29]. Furthermore, the decision-making algorithm of the MC is shown.

3.1 Grid model

A three-phase, unbalanced distribution system is modelled, including sections that represent multiple microgrids that can be networked and operated in parallel, and various types of DERs [30, 31].

3.1.1 Inverter model

The PV inverters are modelled as three-phase, controllable current sources operating as grid-following inverters. Because an entire simulation requires less than 10 s to study a transient in a microgrid, a constant, unity-power-factor injection is assumed at the inverters [32]. A typical grid-following control strategy is implemented, with fast control of active and reactive power [33]. The inverters are implemented as controllable current sources using a standard direct-quadrature-zero (dq0) axis representation and are assumed to produce a balanced output power. As shown in Fig. 3, the difference between the active power reference, P*, and the inverter active power output, P, is fed into a proportional-integral (PI) controller to obtain the d-axis current, i_d. Similarly, the difference between the reactive power reference, Q*, and the inverter reactive power output, Q, is fed into a PI controller to obtain the q-axis current, i_q. Because each inverter operates at unity power factor, the MC dispatches only the active power, P*.

Fig. 3: Control block diagram of a grid-following inverter
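As an illustration of the control loops in Fig. 3, the following is a minimal Python sketch of the two PI regulators that map the active- and reactive-power errors to the d- and q-axis current references. The gains, time step, and power values are illustrative assumptions; the study itself relies on GridLAB-D's internal inverter dynamics.

```python
# Minimal sketch of the grid-following control loops of Fig. 3: PI regulators
# drive the d- and q-axis current references from the active/reactive power
# errors. Gains and values are illustrative, not the values used in the paper.
from dataclasses import dataclass

@dataclass
class PIController:
    kp: float
    ki: float
    integral: float = 0.0

    def step(self, error: float, dt: float) -> float:
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

# One controller per axis: P error -> i_d reference, Q error -> i_q reference
pi_d = PIController(kp=0.05, ki=5.0)
pi_q = PIController(kp=0.05, ki=5.0)

def current_references(p_ref, p_meas, q_ref, q_meas, dt=1e-3):
    """Return (i_d, i_q) current references for the grid-following inverter."""
    i_d = pi_d.step(p_ref - p_meas, dt)
    i_q = pi_q.step(q_ref - q_meas, dt)
    return i_d, i_q

# With unity power factor assumed, the MC only updates p_ref (q_ref stays 0).
print(current_references(p_ref=50e3, p_meas=48e3, q_ref=0.0, q_meas=0.0))
```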
3.1.2 Load model

End-use loads are modelled as standard ZIP (impedance, current, power) loads. A typical ZIP load can be created with mixed portions of the three primary load types (constant impedance, constant current, and constant power), which respond differently to changes in load voltage [34]. For this simulation, it is useful to note that the load model has no frequency dependencies in the dynamic simulation. Furthermore, no model states change over the simulation period, so the ZIP composition of the load remains fixed.

Some loads in the power system can be selected for installation of a GFA device, as it may not be cost-effective to deploy GFAs on every load. As proposed in [28], the GFA controllers deployed at end-use loads are used to mitigate transients in the microgrids. As distributed devices, GFA controllers enable the isolation and reconnection of end-use loads at the low-voltage (or service) level of the power system. With GFA controllers, end-use loads can respond to frequency and/or voltage transients independently to support resilient operation of microgrids. For this particular scenario, the UFLS capabilities of the GFAs are utilised.
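The following Python sketch illustrates the autonomous UFLS behaviour described above: each GFA device trips its load when the locally measured frequency stays below a threshold and reconnects after recovery. The thresholds, hold time, and 60 Hz nominal frequency are illustrative assumptions rather than the deployed GFA settings.

```python
# Illustrative sketch of GFA under-frequency load shedding: a device sheds its
# load when frequency dips below a threshold for longer than a hold time, and
# reconnects once frequency recovers. All settings here are assumptions.

NOMINAL_HZ = 60.0

class GFADevice:
    def __init__(self, trip_hz=59.95, reconnect_hz=59.98, trip_delay=0.10):
        self.trip_hz = trip_hz            # shed load below this frequency
        self.reconnect_hz = reconnect_hz  # restore load above this frequency
        self.trip_delay = trip_delay      # seconds the dip must persist
        self.connected = True
        self._under_time = 0.0

    def update(self, frequency_hz: float, dt: float) -> bool:
        """Update device state from a local frequency sample; return connection status."""
        if frequency_hz < self.trip_hz:
            self._under_time += dt
            if self.connected and self._under_time >= self.trip_delay:
                self.connected = False    # under-frequency load shed
        else:
            self._under_time = 0.0
            if not self.connected and frequency_hz >= self.reconnect_hz:
                self.connected = True     # autonomous reconnection
        return self.connected

# Staggered trip settings avoid shedding every GFA-equipped load at once.
devices = [GFADevice(trip_hz=59.95 - 0.02 * n) for n in range(3)]
```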
3.1.3 DG model

The synchronous machine model proposed in [30] is used for the DGs. The unbalanced operation of the three-phase synchronous machines is modelled with a simplified fundamental-frequency model in phasor representation [35]. The model approximates both the main electrical torque and the braking torque associated with the negative-sequence current. The machine electrical dynamic equations are built on the standard d-q axis implementation, which begins with the relationship between the d-axis changes in the flux and the transient voltages.

3.2 Control system model

This section presents a control algorithm designed to mitigate variations in solar PV output with BESS devices. The control algorithm resides at the MC, which continuously monitors all resources and makes the optimal decision on the use of the BESS devices. The control is implemented using a centralised, communication-assisted control architecture. Even though a BESS deployed with solar PV can sense PV generation locally, it can be challenging to detect and respond to a PV outage, especially in a centralised BESS scenario or in cases where the BESS is not co-located with the PV. One can use the local frequency measurement to detect a PV outage. However, providing only local control for BESS devices may not optimally use the resources and could also lead to controller interactions. Therefore, this simulation utilises communication-assisted control to alleviate these issues. The MC is developed as a stand-alone agent coded in Python that can be an add-in functionality to an existing MC. The MC includes an optimisation module to compute dispatch signals for the BESSs when a power imbalance occurs in the system. Because the proposed MC is designed to survive resiliency events, the control algorithm prioritises maintaining system stability over economic operation.

Fig. 4 is a high-level flowchart depicting the decision-making process of the MC. The MC continuously monitors the solar PV output, BESSs, DGs, and data from advanced metering infrastructure (AMI) devices. Because the demand from loads and the generation output can vary continuously, the MC continuously monitors power imbalances in the system. For normal operations, these variations are usually small and slow. However, contingency conditions (e.g. loss of a generator, line, or loads) can create significantly bigger power imbalances in the system. Because system frequency deviations are associated with power imbalances, those outage conditions can lead to system instability. In this scenario, the MC should compute the new dispatch points in a timely manner and send them to the respective resources via the communication network to survive those transient events. At any time step t, the power imbalance in the system, ΔP(t), is computed as

ΔP(t) = Σ_{i=1..N_DG} P_DG,i + Σ_{j=1..N_PV} P_PV,j + Σ_{k=1..N_B} P_B,k − Σ_{l=1..N_L} P_L,l    (1)

where P_DG,i, P_PV,j, and P_B,k are the monitored active power generation from the ith DG, jth PV, and kth battery storage, respectively. Similarly, P_L,l is the load consumption of the lth load measured from the AMI device. Moreover, N_DG, N_PV, N_B, and N_L are the numbers of DGs, solar PVs, batteries, and loads with AMI devices in the system, respectively. Note that losses are not accounted for in the optimisation; any power associated with line losses is compensated by the DGs in the system. In the given system, the BESS is dispatched to attempt to maintain a specific operating point on the DGs, much like secondary or tertiary frequency control in the larger bulk grid. During normal operations (non-transient events), those resources operate at the dispatch signal from the MC. For the kth BESS, the available flexibility margin, M_B,k, is computed as

M_B,k = P^rated_B,k − P_B,k    (2)

where P^rated_B,k is the rated power of the kth battery. As long as the batteries can handle the power imbalance ΔP(t), the MC runs the optimisation in (3), subject to the constraints in (4), to compute a new dispatch adjustment, ΔP_B,k, for each battery to compensate for the power fluctuations. The allocation uses an optimisation coefficient, α, computed in (5) from the total power imbalance in the system and the available margins of the individual batteries. Note that M_B,k denotes the available flexibility margin, whereas ΔP_B,k is a decision variable. For instance, for a BESS rated at 100 kW and operating at 60 kW, M_B,k is 40 kW (100 − 60) for charging or −160 kW (−100 − 60) for discharging. However, ΔP_B,k is a control variable whose value can be anywhere between −160 and 40 kW. Because ΔP_B,k is computed proportionally to the available margin of the individual battery, the proposed approach tries to use the available resources fairly, without stressing a specific resource. The optimisation gives a dispatch adjustment for each battery. With the optimisation complete, the MC computes the new dispatch signal for each battery as

P^new_B,k = P_B,k + ΔP_B,k    (6)

Because ΔP_B,k can be positive or negative, the new dispatch signal, P^new_B,k, can be higher or lower than the previous dispatch signal, P_B,k. This new dispatch signal is sent from the MC to the corresponding dispatchable DER using the communication system modelled in ns-3.

Fig. 4: Overview of the MC decision-making process
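To make the dispatch logic of Fig. 4 concrete, the following is a simplified Python sketch of the imbalance calculation and margin-proportional BESS allocation described around (1)-(6). It replaces the optimisation of (3)-(5) with a closed-form proportional split clipped to each unit's limits, and all resource values are illustrative assumptions.

```python
# Simplified sketch of the MC dispatch logic of Section 3.2: compute the system
# power imbalance, then spread the correction across the batteries in proportion
# to their available flexibility margins. This is a closed-form stand-in for the
# optimisation of (3)-(5); the resource values below are illustrative only.

def mc_dispatch(dg_kw, pv_kw, bess_kw, load_kw, bess_rated_kw):
    """Return new BESS dispatch set-points (kW) after a power imbalance."""
    # Eq. (1): imbalance = total monitored generation minus total AMI-metered load
    imbalance = sum(dg_kw) + sum(pv_kw) + sum(bess_kw) - sum(load_kw)

    # Eq. (2): available flexibility margin of each battery
    margins = [rated - p for rated, p in zip(bess_rated_kw, bess_kw)]
    total_margin = sum(margins)
    if total_margin <= 0:
        return list(bess_kw)  # no headroom; leave the dispatch unchanged

    # Margin-proportional adjustment (stand-in for (3)-(5)), clipped to each
    # unit's limits as in the 100 kW / 60 kW example of Section 3.2.
    correction = -imbalance  # a generation deficit (negative imbalance) raises BESS output
    new_dispatch = []
    for rated, p, margin in zip(bess_rated_kw, bess_kw, margins):
        delta = correction * margin / total_margin
        delta = max(-(rated + p), min(margin, delta))
        new_dispatch.append(p + delta)  # Eq. (6): new set-point
    return new_dispatch

# Example: a 200 kW PV plant trips and two BESS units pick up as much as they can.
print(mc_dispatch(dg_kw=[300.0], pv_kw=[0.0], bess_kw=[60.0, 20.0],
                  load_kw=[580.0], bess_rated_kw=[100.0, 100.0]))
```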
4 Communication system model

This section presents the general requirements of the communication systems and proposes a communication architecture representative of a networked microgrid system. In addition, metrics for evaluating the communication system performance are defined.

4.1 Requirements for microgrid communication

A microgrid often requires a bidirectional communication architecture between the controller and the devices to be monitored or controlled. Usually, DERs (e.g. solar PVs, battery storage) send information to and receive information from the MC, whereas AMI provides one-way monitoring. A U.S. Department of Energy report identified communication requirements for DERs and AMI, and suggested a 9.6–56.0 kbps bandwidth for distribution system deployments of these technologies [36]. Both the required bandwidth and latency for microgrid communications depend on the response time, which is often dictated by a given power system application. Usually, communication technologies for a specific system or application are chosen based on the required response time, coverage area, and communication system attributes (e.g. latency, throughput, and bandwidth). Zigbee and Wi-Fi are common technologies for local area networks (LANs), whereas long-term evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX) are the common wireless technologies used for neighbourhood area networks (NANs) [37]. Similarly, dedicated wired channels, such as fibre optics, Ethernet, and asymmetric digital subscriber line (ADSL), are commonly used for back-haul communication [37]. Table 1 summarises common communication technologies and their attributes.

Table 1. Communication network requirements [37]
(Coverage / Bandwidth / Latency / Technologies)
LAN: 1000 / 1–10 / WiFi, Zigbee
NAN: 1–10 / 10–100 / WiMAX, LTE
WAN: 1000 / 500 / –10 / Ethernet, fibre optics
Back-haul: — / 10–100 / ADSL, fibre optics

4.2 Proposed communication architecture

Microgrid communication systems can involve DERs (e.g. solar PVs, BESSs, DGs), intelligent electronic devices (e.g. reclosers, protection devices), and AMI. Because data traffic for the monitoring and control of DERs and intelligent electronic devices is significantly different from AMI traffic, microgrid communication should model multi-traffic scenarios. Because AMI and DER data traffic often share the same channel, the microgrid communication system also should include different, simultaneous data traffic [36]. Furthermore, the communication protocols used for DERs are usually different from the protocols used for AMI. For instance, DNP3, Modbus, and IEC 61850 are protocols commonly used by electric utilities for DERs, whereas common AMI protocols include ANSI C12.22, IEEE 802.11, and IEEE 802.15.4 [36]. Given the use of different protocols by various devices in microgrids, a realistic microgrid communication network should model multiple protocols. Considering the reliability of wired technologies and the cost-effectiveness of the wireless counterparts, a multi-channel (i.e. a combination of wired and wireless) communication system architecture is proposed. In particular, the proposed microgrid co

Reference(s)