Critical Infrastructure and Technology - Give Me a “T”!
The Essentials, Twenty-second Edition
At the risk of being contrarian (my husband just read this and laughed out loud), I’m going to admit to skepticism about technology used solely for technology’s sake. It’s like the “bright shiny object” syndrome we’ve all succumbed to in our lives at some point. We should refrain from deploying new technology just because it’s popular or cool, whether in our personal lives or for business uses; it should be used only after careful consideration of whether it uniquely meets a specific need. Especially in the case of the critical infrastructure (CI) sectors, the use of technology comes with the risk of cyber vulnerabilities, as I discussed in the last two editions of this newsletter, along with other serious and potentially make-or-break challenges. Given how important CI is to our modern way of life, such evaluation is even more important.
Context
The challenges and opportunities involving technology are important to evaluate, but what, exactly, do we mean by “technology” as it relates to our essential critical infrastructure sectors? The “buzzword phenomenon” is alive and well in CI, so much so that important topics like technology sometimes lose their meaning, much less their context. So, let’s dive into what I, as a layman with many years of experience working in a few of the CI sectors, interpret as technology (drum roll, please).
As I’ve discussed in several editions of this newsletter, modern technology – that which has been developed over the last 50 years or so, and as applied to CI – has several elements:
Use of telecommunications networks to transmit information (data), commands, or controls in digital format.
Use of digital devices (often called “sensors”) to gather/acquire data from “on the ground”/on site.
Storage of gathered data – sometimes in computer servers on premises, but now often in “the cloud,” which is shorthand for large, secure, offsite data centers.
Analysis of such acquired data – to create efficiency, improve operations and maintenance, predict failures, etc.
Use of digitally based devices/objects to control or manipulate a physical object from afar – such as closing a door or deploying a robotic “arm” on a manufacturing assembly line.
Increasingly, in the last few years, use of AI (perhaps accompanying a sensor) to acquire information from the physical world, analyze the data on site, and relay back to the operations center only the relevant/actionable information.
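To make the elements above concrete, here is a minimal, purely illustrative sketch of the last two in combination: a sensor reading is acquired on site, analyzed at the edge, and a digital message is relayed back only when something actionable occurs. All names, values, and thresholds here are hypothetical and not drawn from any real CI system.

```python
# Illustrative only: a hypothetical edge device that acquires a reading,
# analyzes it on site, and transmits a message only when action is needed.
import json

ALARM_THRESHOLD_C = 90.0  # hypothetical temperature limit for a piece of equipment

def read_sensor():
    """Stand-in for acquiring a measurement from a physical device."""
    return 93.5  # pretend the on-site sensor reports 93.5 degrees Celsius

def analyze_on_site(reading_c):
    """Edge analysis: produce a message only when the reading is actionable."""
    if reading_c > ALARM_THRESHOLD_C:
        return {"event": "overtemp", "value_c": reading_c}
    return None  # nothing worth relaying back to the operations center

message = analyze_on_site(read_sensor())
if message is not None:
    payload = json.dumps(message)  # digital encoding for the network link
    print(payload)
```

The point of the sketch is the filtering step: rather than streaming every raw reading over the network, the device sends data only when a condition is met, which is what “relay back only relevant/actionable information” means in practice.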
One commonality in enabling these devices and systems is the delivery of digital signals or data over communications networks (wired and wireless). Another major element, beginning over 50 years ago, is the use of transistors, circuits, and other power electronics central to computing, as I discussed in Edition 20 of this newsletter, “IT and Electricity, the Power Couple.”
Describing this makes me realize that another definition is in order – a reminder of what is meant by “digital.” Diffen.com does a good job of comparing and contrasting analog and digital, which is a good way of defining both. I’ve assembled the least technical parts of the explanation below:
“Analog and digital signals are used to transmit information, usually through electrical signals. …Analog technology is cheaper but there is a limitation to the size of data that can be transmitted at any given time. Digital technology has revolutionized the way most equipment works. Data is converted into binary code and then reassembled back into its original form at the reception point. Since these can be easily manipulated, it offers a wider range of options.”
For a slightly more technical definition of digital, Oxford Languages notes: “(of signals or data) expressed as a series of the digits 0 and 1, typically represented by values of a physical quantity such as voltage or magnetic polarization.”
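That definition can be shown in a few lines. The sketch below is purely illustrative – the voltage, measurement range, and 8-bit resolution are hypothetical – but it shows exactly what “converted into binary code” means: a continuous analog quantity is sampled, quantized into one of a fixed number of steps, and expressed as the 0s and 1s that actually travel over the network.

```python
# Illustrative sketch of "digital": an analog voltage is quantized into an
# integer code, then expressed as a series of the digits 0 and 1.
analog_voltage = 3.3          # a continuous physical quantity (volts)
full_scale = 5.0              # hypothetical measurement range of the converter
levels = 256                  # an 8-bit converter has 2**8 quantization levels

# Quantize: map the continuous value onto one of 256 discrete steps.
code = round(analog_voltage / full_scale * (levels - 1))

# Express the code as binary digits -- the 0s and 1s that get transmitted.
bits = format(code, "08b")
print(code, bits)  # -> 168 10101000
```

At the receiving end, the process runs in reverse: the binary code is reassembled into the original value, which is why digital data can be copied, stored, and manipulated without the gradual degradation analog signals suffer.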
CI operators still use analog communications where they have not fully integrated digital technology into their operations and, therefore, still need at least basic communications capability. Analog technology can also be more reliable for certain uses and may, therefore, remain in the mix for emergencies even when digital technology is fully deployed. For example, voice communications such as those over hand-held radios require only analog signals. This type of communication uses less bandwidth and cannot be hacked remotely the way digital technology can, so it may be less prone to disruption in emergencies.
Even with the cyber risk that digital technology brings, its benefits are such that most CI industries have embraced it. CI operators are deploying digital technology to improve operations, reduce operational risk through the greater visibility it provides, upgrade older infrastructure without having to replace it entirely, and minimize or eliminate the need to physically travel to certain places on their systems to operate, inspect, or maintain equipment. While the investment in such digital technology can be high – including ensuring that operators understand it and can use it properly – the goal is to reduce costs by optimizing complex systems.
Pros and Cons Deconstructed
Assessing the pros and cons of digital technology deployment for CI operators, therefore, is not at this point about whether it is a good or bad thing overall – that ship has sailed. Rather, companies, utilities, and regulators must now assess how to deploy such technologies, which ones to deploy, when to deploy them, and how to pay for them.
This evaluation is itself complex. CI operators cannot risk marrying digital technology with their physical systems if that technology is unreliable. They also may not have a workforce trained in the technical elements of integrating digital technology with their systems and/or with other digital platforms. Optimal technology in the marketplace may be with a startup company rather than an established one, but working with a new company may pose procurement challenges or undermine existing relationships. For these and other reasons, CI operators may choose to pilot technology deployments prior to full implementation. This can cause delays and frustrate those who want these industries to move faster.
Case in Point
For those of the CI industries that are regulated, regulators can at times send mixed signals (no pun – or offense – intended). For example, for investor-owned electric companies, most state regulations do not allow the software portion of digital technology to be recovered via electric rates, while “brick and mortar” investments such as transmission lines and power plants can be recovered. In the case of not-for-profit utilities, all investments, software or otherwise, must be recovered through rates because these utilities have no profit margin. Their regulators – utility boards and locally elected officials – may look askance at digital technology deployments that have not been proven or that raise rates in the short term, especially if the technology is complex and difficult to explain.
At the same time, policymakers and regulators expect the electric sector to meet clean energy goals when such goals are often dependent on digital technology deployments.
A “hot off the presses” example of regulators at the federal level encouraging digital technology deployments is the release of Federal Energy Regulatory Commission (FERC) Order 1920 on May 13. This final rule on electric transmission cost allocation and planning was approved by two of the three FERC commissioners. It encourages utilities to use grid-enhancing technologies (GETs), such as dynamic line ratings, advanced power flow control devices, advanced conductors, and transmission switching. This encouragement is paired with a requirement in the rule for transmission owners to use their existing facilities more optimally by increasing their transfer capabilities. While I am still reviewing the Order as of this writing, these provisions could serve to boost deployment of GETs by transmission-owning utilities and companies.
To Tech or Not to Tech
As I’ve mentioned, CI industries’ deployment of digital technology is inexorable. While the degree to which each sector needs to retain analog redundancies for reliability and resilience may vary, many of the challenges to deploying digital technology are similar from sector to sector, as are the technologies themselves. More collaboration between the sectors on these deployments could help enhance workforce knowledge, minimize the need for duplicative pilots (at least for some types of deployments), and potentially lower costs by providing cross-cutting use cases and applications.