Illuminating the grid

While the way we consume electricity has changed dramatically, utilities have been slow to catch up. Here’s a look at the challenges facing the software powering our electrical grids—and some of the proposed solutions.
Part of Issue 12, February 2020: Software Architecture

Since Thomas Edison first lit up 400 street lamps from his power plant on the corner of Pearl and Fulton Streets in Manhattan in 1882, the way electrical utilities power the world has changed little. Today’s setup is basically the same: loops of networked premises (like houses or factories) linked by power lines, all of which connect back to a power plant. The mechanics of the system were complex, but the organization was simple. Energy moved in one direction only, with little room to customize.

Yes, electrical grids have become more complicated since Edison, and the infrastructure needed to control them has expanded in turn. Computers also help manage the various inputs and outputs more smoothly, while software automates the processes that help balance energy transfers and offtake. But utilities have been slow to adapt to newer innovations, in part because, until now, Edison’s original, unidirectional structure has remained the same.

For instance, consumers have started to generate their own energy, smartphones and app-controlled thermostats have become the norm, and the price of oil has yo-yoed. With thousands of new connections and devices added daily, networks must adapt to rising demand. (2018’s 2.3 percent increase in electricity usage was the largest in a decade, according to the International Energy Agency.) Energy utilities need well-architected software to provide nimble solutions.

Instead, they’ve fallen behind other legacy service industries, such as banking and cell phone networks, in developing software that can serve the needs of the present while preparing for the needs of the future. In fact, utilities rank among the lowest-performing industries in digital experiences, according to a 2019 J.D. Power and Associates report. The report also found that less than half of the utilities surveyed even offered a mobile app.

“The utilities themselves never paid attention to technology advancements,” says Gaurav Grigo, an IT consultant who has worked with multiple energy utilities across Europe and Asia over the last decade. Utilities operate reactively: building software systems that act as Band-Aid solutions to specific problems, rather than developing more resilient, forward-thinking systems.

Truly successful electrical-grid software “has to compare many parameters simultaneously with power generation, transmission, distribution, and utilization,” says Sanjeevikumar Padmanaban, a researcher at Aalborg University’s Center for Bioenergy and Green Engineering in Denmark.

While energy grids have historically been hardware-based networks, they’re increasingly being replaced with smart grids: computer-controlled grids that use real-time supply and demand data to manage energy flows and that are capable of meeting competing demands and handling ever more complexity. Padmanaban is convinced that the architecture of smart-grid software is essential to managing the challenges of a one-way system (producers put energy in, customers take energy out) that has become bidirectional.

Developments like these make the hardware and software infrastructure that underpins these networks, from energy meters and power plant turbines to the systems running smart grids, all the more important. You can’t run a 21st-century energy network on 20th-century software for long.

Designing software architecture for the grid

The software that controls a utility’s electrical grid is responsible for correctly managing the flow of energy and helping the network’s components communicate. The stakes are high: One misstep can result in millions of blown fuses and vast power outages. Most utilities source their software from a handful of options produced by open-source and commercial suppliers, such as SAP, Siemens, and GE. Others, such as U.S. utilities Duke Energy or Ameren, build at least some of their own software products in-house.

These software products are architected in much the same way. Most have three layers: the physical layer, which comprises every device with access to the grid; the communications or service layer, which handles how those devices, from switches to meters to monitors, interact; and the application layer, which implements software functionality, including demand-management forecasts.
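
One way to picture that separation of concerns is as three loosely coupled pieces, with the application layer talking only to the service layer rather than to individual devices. Here is a minimal Python sketch of the idea; the class and method names are illustrative, not taken from any vendor’s product:

```python
from abc import ABC, abstractmethod


class GridDevice(ABC):
    """Physical layer: any device with access to the grid (meter, switch, monitor)."""

    @abstractmethod
    def read(self) -> dict:
        """Return the device's current measurements."""


class ServiceBus:
    """Communications/service layer: handles how registered devices interact."""

    def __init__(self) -> None:
        self._devices: dict[str, GridDevice] = {}

    def register(self, device_id: str, device: GridDevice) -> None:
        self._devices[device_id] = device

    def poll(self) -> dict[str, dict]:
        return {device_id: dev.read() for device_id, dev in self._devices.items()}


class DemandForecaster:
    """Application layer: implements functionality such as demand-management forecasts."""

    def __init__(self, bus: ServiceBus) -> None:
        self.bus = bus

    def forecast_peak_kw(self) -> float:
        # Naive placeholder: assume the next peak resembles the current total draw.
        readings = self.bus.poll()
        return sum(r.get("kw", 0.0) for r in readings.values())
```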

It’s complicated. For instance, a diagram of the BEMOSS (Building Energy Management Open-Source Software) system’s software architecture is a maze of bidirectional arrows, databases, devices, and a fluffy cloud. The system comprises four layers: the user interface (UI) layer, the application and data-management layer, the operating system and framework layer, and the connectivity layer.

BEMOSS’s UI layer has two components: user interface and user management. Users accessing it through a web UI see a dashboard showing a graphic representation of the current settings of devices in a building, or an area of a building. Authenticated users can also control these devices through an on-site interface, depending on the level of access they’ve been granted: Building engineers have full authority to adjust set points and schedules of electric loads (the amount of electricity being drawn at a given time) in buildings, while tenants have limited access to view current status and historical load data, or control selected loads in specific zones.
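
Those access rules amount to role-based permission checks. A hypothetical sketch of what such checks might look like (the roles and permission names are assumptions, not BEMOSS’s actual implementation):

```python
from enum import Enum, auto


class Role(Enum):
    BUILDING_ENGINEER = auto()
    TENANT = auto()


# Hypothetical permission table: engineers may adjust set points and schedules,
# while tenants may only view data and control loads in their own zones.
PERMISSIONS = {
    Role.BUILDING_ENGINEER: {
        "view_status", "view_history", "adjust_setpoint",
        "edit_schedule", "control_any_load",
    },
    Role.TENANT: {"view_status", "view_history", "control_zone_load"},
}


def can(role: Role, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in PERMISSIONS[role]


assert can(Role.BUILDING_ENGINEER, "adjust_setpoint")
assert not can(Role.TENANT, "adjust_setpoint")
```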

The application and data-management layer of the BEMOSS system algorithmically monitors and controls hardware devices, allowing more rapid responses to demands on energy flows based on price and requirement. This layer uses Apache Cassandra and a relational database-management system (PostgreSQL) to store data.
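
A common way to split that storage load is to append high-volume device readings to Cassandra while keeping slower-changing device metadata relational in PostgreSQL. The sketch below shows that division of labor, assuming hypothetical keyspace, table, and column names and local instances of both databases:

```python
from datetime import datetime, timezone

import psycopg2                         # PostgreSQL driver
from cassandra.cluster import Cluster   # Cassandra driver

now = datetime.now(timezone.utc)

# Time-series side: append-only readings, keyed by device and timestamp.
session = Cluster(["127.0.0.1"]).connect("bemoss_ts")   # keyspace name is hypothetical
session.execute(
    "INSERT INTO readings (device_id, ts, kw) VALUES (%s, %s, %s)",
    ("thermostat-12", now, 1.4),
)

# Relational side: device metadata and configuration that change rarely.
pg = psycopg2.connect("dbname=bemoss user=bemoss")      # connection string is hypothetical
with pg, pg.cursor() as cur:
    cur.execute(
        "UPDATE devices SET last_seen = %s WHERE device_id = %s",
        (now, "thermostat-12"),
    )
```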

In BEMOSS’s operating system and framework layer, a distributed-agent software platform called Volttron, developed by the Pacific Northwest National Laboratory, encourages communication between agents on the system, whether that’s a thermostat or a platform monitor. Companies can also send users notifications via email or SMS.
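
Volttron-style platforms coordinate agents through publish/subscribe messaging. The sketch below illustrates the general pattern in plain Python rather than Volttron’s actual API; the bus class and topic names are invented for illustration:

```python
from collections import defaultdict
from typing import Callable


class MessageBus:
    """Toy publish/subscribe bus standing in for the platform's message broker."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)


bus = MessageBus()

# A monitoring agent listens for thermostat readings...
bus.subscribe("devices/thermostat/readings",
              lambda msg: print(f"monitor saw {msg['temp_c']} degrees C"))

# ...and a thermostat agent publishes them.
bus.publish("devices/thermostat/readings", {"temp_c": 21.5})
```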

BEMOSS’s connectivity layer facilitates communication between the operating system and framework layer and the physical hardware devices below. APIs translate between devices, sending control commands using simple function calls like getDeviceStatus and setDeviceStatus.
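
In practice, that translation layer often looks like one thin adapter per device protocol, each exposing the same two calls no matter what sits underneath. A minimal sketch, with the adapter class and its fake HTTP thermostat assumed for illustration:

```python
from abc import ABC, abstractmethod


class DeviceAdapter(ABC):
    """Connectivity layer: one adapter per device protocol, same two calls on top."""

    @abstractmethod
    def get_device_status(self, device_id: str) -> dict: ...

    @abstractmethod
    def set_device_status(self, device_id: str, settings: dict) -> None: ...


class HttpThermostatAdapter(DeviceAdapter):
    """Hypothetical adapter for a thermostat that speaks a simple HTTP/JSON protocol."""

    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def get_device_status(self, device_id: str) -> dict:
        # A real adapter would issue an HTTP GET against the device here.
        return {"device_id": device_id, "setpoint_c": 21.0, "online": True}

    def set_device_status(self, device_id: str, settings: dict) -> None:
        # A real adapter would POST the new settings to the device here.
        print(f"setting {device_id} to {settings}")
```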

Despite their similarities, each software product has a distinct selling point. BEMOSS touts its ability to monitor, sense, and control equipment to reduce consumption across a grid. The FREEDM (Future Renewable Electric Energy Delivery and Management) architecture specializes in seamlessly integrating renewable power-generation and storage elements. The software architecture promoted by Duke Energy emphasizes its “distributed intelligence” platform, which provides routing, bridging, and gateway capabilities to IP-based networks.

At a minimum, software should help utilities meet their regulatory, risk, and compliance needs, as well as manage the life cycles of key assets and devices on their networks—be they pipelines, power generators, or overhead lines. At the same time, software should help companies manage data security for customers as well as security of supply. Truly forward-looking software would also improve customer retention and inform future business decisions. But developing, and architecting, software that accomplishes all these goals is no small feat, and few, if any, utilities can claim to have succeeded at it.

Achieving interoperability with NIST

Key to a utility’s ability to manage a bidirectional flow of energy and a growing number of connected devices is software interoperability. As networks grow to accommodate new factories, new residential areas, and new, bigger electricity-drawing devices (such as electric vehicles), distributed software systems that can easily interact with each other are vital.

Passed by the U.S. Congress in 2007 to promote alternative energy sources, the Energy Independence and Security Act assigned the National Institute of Standards and Technology (NIST) the “primary responsibility to coordinate the development of a framework . . . to achieve interoperability of smart grid devices and systems.” The protocols and standards NIST would develop are meant to enable all electrical resources, including those on the demand side, such as energy efficiency and load-management programs, “to contribute to an efficient, reliable electricity network.”

The NIST program focuses specifically on interoperability, “the ability of two or more devices or systems to exchange actionable information in a timely, reliable, and secure manner,” says Avi Gopstein, smart-grid program manager at NIST. “We’re not focused on the software that runs the grid. There are lots of different software tools that utilities will use for optimizing their system. What we care about is how the equipment on the system, and the systems on the system, can interface and communicate with each other.”

When Gopstein joined NIST’s Smart Grid Program Office in 2016, he was tasked with helping design a new grid infrastructure that better met the requirements of the Energy Independence and Security Act. It wasn’t easy. NIST had already published three versions of its interoperability framework, in 2010, 2012, and 2014. The first version established between 20 and 25 priority action plans, ranging from developing standards for energy-usage information to setting guidelines for the use of IP protocol suites. Then, in 2018, the IEEE published a substantially revised IEEE 1547, a uniform standard for interconnecting distributed generation resources with the power grid. “It’s a fundamental and wholesale re-envisaging of what devices can do on the system,” says Gopstein.

Crucial to achieving interoperability is how the software running a smart grid is architected. NIST’s reference model is designed to be a blueprint for architecting flexible, uniform, and technology-neutral smart-grid software. The architecture model covers four areas—grid (generation, distribution, and transmission), smart metering (AMI), customer (smart appliances, electric vehicles, premises networks), and communication network.

Communication is what makes this architecture truly interoperable. “The NIST conceptual model revolves around the communication network and the information exchange within grid domains,” explain Ramesh Ananthavijayan and his coauthors in their 2019 article “Software Architectures for Smart Grid System—A Bibliographical Survey,” published in Energies.

Breaking out of the silo with the Common Information Model

Beyond system-wide architectural schemas, interoperability is necessary at an elemental, nuts-and-bolts level, too. Energy IT consultant Gaurav Grigo says he’s worked at companies where one department would call a key component in a network a “switch.” When he’d ask people in a different department how to fix or change the switch, they’d pepper him with questions: Is it a breaker? A fuse? Something else? “They have different terminology for the same thing,” he says.

Grigo has seen this phenomenon replicated in the software: Incompatible and antiquated file formats that don’t translate from one part of the system to another abound. Such complications can make it difficult to communicate across departments, never mind across different providers using the same network. “To mitigate this kind of problem, you need a universal language and a standard across departments,” he explains.

Recognizing the problem, many companies have changed the way they work, utilizing a long-standing concept called the Common Information Model, which provides a set of common standards and definitions for systems, networks, applications, and services, including terminology, basic file formats, and communication protocols. The need was first recognized in the 1980s, but it took until the 2000s for realizable standards to be developed. Big infrastructure providers in the energy sector, such as Siemens and Schneider Electric, are among those who have adopted the framework.
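
The switch-versus-breaker confusion from earlier is exactly what a shared model resolves: each concept gets one place in a common class hierarchy and one canonical identifier. A simplified sketch loosely inspired by CIM-style naming (the classes and attributes here are illustrative, not the standard’s full definitions):

```python
class IdentifiedObject:
    """Everything in the shared model carries one canonical identifier and name."""

    def __init__(self, mrid: str, name: str) -> None:
        self.mrid = mrid    # master resource identifier, unique across systems
        self.name = name


class Switch(IdentifiedObject):
    """Generic switching device: the parent concept every department agrees on."""

    def __init__(self, mrid: str, name: str, normal_open: bool) -> None:
        super().__init__(mrid, name)
        self.normal_open = normal_open


class Breaker(Switch):
    """A switch capable of interrupting fault current."""


class Fuse(Switch):
    """A sacrificial overcurrent device, replaced rather than reset."""


# Every department and every system now refers to the same object the same way.
device = Breaker(mrid="b-4711", name="Feeder 3 breaker", normal_open=False)
```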

“This bridges that gap. It provides a whole schema and conceptual diagrams for how you can connect the electricity in [the] physical world, as well as in a logical world in IT systems,” says Grigo.

Adhering to a common information framework is vital in order to future-proof further developments to the system, he says. The model has been designed to impose standardized protocols and best practices on energy companies so that when they develop new infrastructure, they aren’t forced to totally rearchitect their existing systems.

Standardizing file formats for databases and IoT devices

The Swedish telecom company Ericsson predicts that, by 2022, there will be 1.5 billion IoT devices connected to the electrical grid in some way—nearly four times as many as in 2016. Standardizing mechanisms and file formats for transferring data is vital as the number increases. And while APIs can help make data transfers easier, their formatting must remain consistent.

“I’ve seen IT service providers bring out their product and try [to] impose a different file format for the data exchange. Then SAP will have their own, and there will be ad hoc solutions that come out from different IT service providers or the utility companies themselves to integrate the systems,” Grigo says. “When I shoot a file out from my system at 10 p.m., every device connected to that system should get updated with the right information.”

Grigo highlights a hypothetical example based on real issues he’s observed. Energy utilities often rely on surveys of distribution boards (also known as breaker panels) to measure the health of their grid. Distribution boards parcel out electricity from a main feed into subsidiary circuits, while also providing a layer of protection via fuses or circuit breakers. Workers will check the fuses in the distribution boards of a particular area: Are they old, new, broken, in working order? That information can then be input into a system that keeps track of the grid’s infrastructure. But there are often other, parallel systems that draw information from these in-person visits: one that records when the distribution boards were last checked, another that tracks whether they’ve been fitted with the latest fuse models, and another that notes which customers are supplied by that connection point. At many companies, that means four separate databases must be manually updated. Developing software that automatically populates these four separate records would make a clear improvement.

“If I have to develop a one-by-one integration, I’m going to develop four different formats, probably,” says Grigo. To tackle this problem, he recommends using a common file format, such as XML, to minimize duplication. “Then these four receiving-end properties would develop their adapters or feeders so that they filter out only the data they need.”
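
To make that concrete, the sketch below publishes one hypothetical XML survey record and shows two receiving systems’ adapters each filtering out only the fields they need; the element and field names are invented for illustration:

```python
import xml.etree.ElementTree as ET

# One survey record, published once in a shared format.
record = """
<distribution_board id="db-204">
  <inspected_on>2019-11-02</inspected_on>
  <fuse model="NH2-250A" condition="worn"/>
  <supplies customer="acct-88321"/>
</distribution_board>
"""

root = ET.fromstring(record)

# Adapter for the asset-health system: it only needs the inspection date and fuse condition.
asset_health_update = {
    "board_id": root.get("id"),
    "inspected_on": root.findtext("inspected_on"),
    "fuse_condition": root.find("fuse").get("condition"),
}

# Adapter for the customer-records system: it only needs to know who the board feeds.
customer_update = {
    "board_id": root.get("id"),
    "customer": root.find("supplies").get("customer"),
}

print(asset_health_update)
print(customer_update)
```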

Rebalancing the emphasis on hardware

Another issue for utilities is their sole emphasis on hardware—an example of the energy sector’s old-fashioned way of working. Investing in hardware infrastructure without a corresponding development of software is a practice that can prove costly for utilities, leading to the mismanagement of existing resources and the misallocation of new ones.

“You can keep spending your money on hardware, new cables and so on,” says Grigo. “But maybe you already have the capacity in the network. How would you find [that] out? Do you have intelligent software that keeps a record of all the load on the systems, the solutions running out there, and all the data you collect? Do you harvest the data? Do you have a platform to do that?”

For many utilities, the answer is no. Grigo has seen companies spend millions laying down more power lines or cables without realizing that the load they’re putting on an existing transformer could blow it. He’s also seen companies build entirely new transformer stations, not realizing that their existing stations had enough spare capacity to meet the demand.

“[If] you can monitor the health of the network and your assets, [you can] save a lot of money before making new investments,” Grigo says. “You can also have a reduced mean time to repair if you have a fault.”
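
The kind of check Grigo describes can start as something as simple as comparing measured peak load against an asset’s rating before approving new hardware. A sketch under assumed numbers (the readings, rating, and margin below are made up for illustration, and a real planning study would account for power factor, seasonal peaks, and contingencies):

```python
def can_absorb_new_load(peak_readings_kw: list[float],
                        transformer_rating_kva: float,
                        planned_addition_kw: float,
                        safety_margin: float = 0.8) -> bool:
    """Return True if the existing transformer can carry the planned extra load.

    Treats kW and kVA as comparable for simplicity.
    """
    observed_peak = max(peak_readings_kw)
    return observed_peak + planned_addition_kw <= transformer_rating_kva * safety_margin


# Hypothetical numbers: a 1,000 kVA transformer currently peaking at 520 kW.
print(can_absorb_new_load([480.0, 505.0, 520.0],
                          transformer_rating_kva=1000.0,
                          planned_addition_kw=150.0))   # True: spare capacity exists
```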

Addressing security fears

The energy industry is also reluctant to adapt its systems for fear of opening the door to potentially catastrophic security incidents. As anyone who has experienced an electricity outage knows, the absence of power is keenly felt, and it impacts every aspect of life—from medical treatment to food supply, business operations to transportation. In a recent report, Finnish cybersecurity company F-Secure detailed nine specific cyberattack vectors that have been used to target the energy industry. Among the victims of one of the most pernicious cyber-threat groups, APT33, was Italian oil and gas company Saipem, which was attacked by a variant of a disk wiper in December 2018.

“The moment you bring IoT into industrial operations, people are scared because they think it’s not secure,” says Grigo. He’s faced many an uphill battle to convince an employer’s IT security department that he wasn’t installing just any old IoT device but industrial IoT devices, such as giant smart meters and switches, which have much higher security requirements than consumer products. He’s had similar squabbles about cloud-based storage systems. “People worry that if I bring the data about my assets into the cloud, somebody will hack into that, or it’s public, or, legislation-wise, it’s not allowed in a certain territory.”

But such fears are generally overhyped. F-Secure itself notes that “the same mitigations that work in the energy sector apply to any industry. Properly implemented, mature passive and active cybersecurity will block most of the attacks, quickly detect the ones which go through the defenses, and make it very hard for attackers.”

Meeting the demands of the future

While the current generation of energy-industry software isn’t always architected or designed to deliver a level of service most other sectors would consider their baseline (such as being able to track energy usage through a consumer-centric app), the next generation—currently being designed in classrooms and boardrooms across the globe—is much more ambitious. The BEMOSS and FREEDM platforms, for example, rely either on open-source principles or plug-and-play interfaces using open standards, such as TCP/IP and (for user interfaces) HTML.

“Obviously the ultimate aim is flexibility, to incorporate new challenges,” says Aalborg University’s Sanjeevikumar Padmanaban, who has tracked the evolution of energy-network architectures. “Until now, we [haven’t had] software in smart grids that can be self-energized to work on faults. They can detect faults [and] isolate the faults, but how to fix them is still a challenge.”

Software that can identify and troubleshoot its own faults is nice to have, but when an industry is still struggling to make two massive pieces of infrastructure—such as a power plant and an industrial meter—interact, it has its work cut out for it. Energy utilities must architect their systems to meet present demands. It’s been nearly 140 years since Thomas Edison implemented his pioneering electrical grid, and the industry has much to do to make up for lost time.

About the author

Chris Stokel-Walker is a UK-based features journalist for The Economist, Bloomberg, the BBC, and Wired UK. His first book, YouTubers, was published in 2019, and his second, TikTok Boom, was published in July 2021.

@stokel

Artwork by

Mercedes Bazan

behance.net/mercedesbazan
