Disaster Awaits if We Don’t Secure IoT Now
In 2015, Ukraine experienced a slew of unexpected power outages. Much of the country went dark. A U.S. investigation concluded that the outages were caused by a Russian state cyberattack on Ukrainian computers running critical infrastructure.
In the decade that followed, cyberattacks on critical infrastructure and near-misses continued. In 2017, a nuclear power plant in Kansas was the subject of a Russian cyberattack. In 2021, Chinese state actors reportedly gained access to parts of the New York City subway computer system. Later in 2021, a cyberattack temporarily closed down beef processing plants. In 2023, Microsoft reported a cyberattack on its IT systems, likely by Chinese-backed actors.
The risk is growing, particularly when it comes to internet of things (IoT) devices. Just below the veneer of popular fad gadgets (does anyone really want their refrigerator to automatically place orders for groceries?) is an increasing army of more prosaic Internet-connected devices that take care of keeping our world running. This is particularly true of a sub-class called Industrial Internet of Things (IIoT), devices that implement our communication networks, or control infrastructure such as power grids or chemical plants. IIoT devices can be small devices like valves or sensors, but also can include very substantial pieces of gear, such as an HVAC system, an MRI machine, a dual-use aerial drone, an elevator, a nuclear centrifuge, or a jet engine.
The number of IoT devices in operation is growing rapidly. In 2019, there were an estimated 10 billion IoT devices in operation. By the end of 2024, that number had almost doubled, to approximately 19 billion, and it is set to more than double again by 2030. Cyberattacks aimed at those devices, motivated by political or financial gain, can cause very real physical-world damage to entire communities, far beyond damage to the device itself.
Security for IoT devices is often an afterthought: they frequently have little need for a "human interface" (e.g., a valve in a chemical plant may only need commands to Open, Close, and Report), and they usually don't hold information anyone would consider sensitive (e.g., a thermostat stores no credit card numbers; a medical device holds no Social Security number). What could go wrong?
Of course, "what could go wrong" depends on the device, but carefully planned, at-scale attacks have already shown that a lot can. For example, armies of poorly secured, internet-connected security cameras have been conscripted into coordinated distributed denial-of-service attacks, in which each camera makes a few seemingly harmless requests of some victim service, causing the service to collapse under the combined load.
How to secure IoT devices
Measures to defend these devices generally fall into two categories: basic cybersecurity hygiene and defense in depth.
Cybersecurity hygiene boils down to a few rules: don't use default passwords on admin accounts; apply software updates regularly to remove newly discovered vulnerabilities; require cryptographic signatures to validate those updates; and understand your "software supply chain": where your software comes from, and where your supplier obtains the components it may simply be passing through from open-source projects.
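The update-validation rule above can be sketched in a few lines. This is a minimal, dependency-free illustration: real devices use asymmetric signatures (e.g., Ed25519 or RSA) so the signing key never leaves the vendor; an HMAC stands in here, and the key and firmware bytes are hypothetical.

```python
import hashlib
import hmac

def verify_update(update_bytes: bytes, tag: bytes, key: bytes) -> bool:
    """Accept a firmware update only if its authentication tag checks out.

    An HMAC stands in for the asymmetric signature a real device would
    verify; the principle (reject anything unsigned or altered) is the same.
    """
    expected = hmac.new(key, update_bytes, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, tag)

vendor_key = b"vendor-provisioned-secret"   # hypothetical key
update = b"firmware v2.1 image bytes"       # hypothetical image
good_tag = hmac.new(vendor_key, update, hashlib.sha256).digest()

print(verify_update(update, good_tag, vendor_key))         # True
print(verify_update(update + b"x", good_tag, vendor_key))  # False: tampered
```

The point of the sketch is the shape of the check: the device never trusts update bytes on their own, only bytes accompanied by a tag it can verify.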
The rapid profusion of open-source software has prompted the U.S. government to promote the Software Bill of Materials (SBOM), a document that conveys supply-chain provenance, indicating which versions of which packages went into building the product's software. Both IIoT device suppliers and device users benefit from accurate SBOMs, which shorten the path to determining whether a specific device's software contains a version of a package vulnerable to attack. If the SBOM shows an up-to-date package version in which the vulnerability has been addressed, both the IIoT vendor and user can breathe easy; if the package version listed in the SBOM is vulnerable, remediation may be in order.
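The SBOM lookup described above amounts to a join between a component list and a vulnerability feed. A minimal sketch, with a made-up SBOM and made-up advisory data (the package names and versions are illustrative, not real advisories):

```python
# Hypothetical, much-simplified SBOM: product name plus component list.
sbom = {
    "product": "pump-controller",
    "components": [
        {"name": "libfoo", "version": "1.1.1"},
        {"name": "libbar", "version": "2.4.0"},
    ],
}

# Hypothetical advisory feed: package name -> set of vulnerable versions.
vulnerable = {"libfoo": {"1.1.0", "1.1.1"}}

def flag_vulnerable(sbom: dict, vulnerable: dict) -> list:
    """Return (name, version) pairs in the SBOM that match a known advisory."""
    return [
        (c["name"], c["version"])
        for c in sbom["components"]
        if c["version"] in vulnerable.get(c["name"], set())
    ]

print(flag_vulnerable(sbom, vulnerable))  # [('libfoo', '1.1.1')]
```

Real SBOM formats (such as SPDX or CycloneDX) carry far more detail, but the core question a user asks of one is exactly this membership test.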
Defense in depth is less well-known, and deserves more attention.
It is tempting to implement the easiest approach to cybersecurity, a “hard and crunchy on the outside, soft and chewy inside” model. This emphasizes perimeter defense, on the theory that if hackers can’t get in, they can’t do damage. But even the smallest IoT devices may have a software stack that’s too complex for the designers to fully comprehend, usually leading to obscure vulnerabilities in dark corners of the code. As soon as these vulnerabilities become known, the device transitions from tight, well-managed security to no security, as there’s no second line of defense.
Defense in depth is the answer. A National Institute of Standards and Technology publication breaks down this approach to cyber resilience into three basic functions: protect, meaning use cybersecurity engineering to keep hackers out; detect, meaning add mechanisms to detect unexpected intrusions; and remediate, meaning take action to expel intruders to prevent subsequent damage. We will explore each of these in turn.
Protect
Systems that are designed for security use a layered approach, with most of the device's "normal behavior" in an outer layer, while inner layers form a series of shells, each with smaller, more constrained functionality, making the inner shells progressively simpler to defend. These layers often mirror the sequence of steps followed during device initialization: the device starts in the inner layer with the smallest possible functionality, just enough to get the next stage running, and so on until the outer layer is functional.
To ensure correct operation, each layer must also perform an integrity check on the next layer before starting it: at each step, the current layer computes a fingerprint or signature of the next layer out and compares it against a known-good value.
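The chain of boot-time integrity checks can be sketched as follows. The layer images and their modeling as byte strings are illustrative assumptions; a real device would compare against vendor-signed values rather than a plain list.

```python
import hashlib

# Hypothetical firmware images, innermost layer first.
layers = [b"bootloader", b"os-kernel", b"application"]

# Known-good fingerprints (in a real device, baked in or vendor-signed).
endorsed = [hashlib.sha256(img).hexdigest() for img in layers]

def measured_boot(layers: list, endorsed: list) -> str:
    """Layer i verifies layer i+1 before starting it. Layer 0 is the
    implicitly trusted Root of Trust and is never checked in software."""
    for i in range(len(layers) - 1):
        nxt = layers[i + 1]
        if hashlib.sha256(nxt).hexdigest() != endorsed[i + 1]:
            raise RuntimeError(f"layer {i} refused to start layer {i + 1}")
        # ...hand control to layer i+1...
    return "boot complete"

print(measured_boot(layers, endorsed))   # boot complete
try:
    measured_boot([b"bootloader", b"os-kernel", b"tampered app"], endorsed)
except RuntimeError as e:
    print(e)                             # layer 1 refused to start layer 2
```

Note that nothing in the loop checks layer 0, which is precisely the puzzle the next paragraphs address.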
To make a defensible IoT device, the software needs to be layered, with each layer only running if the previous layer has deemed it safe. Illustration: Guy Fedorkow, Mark Montgomery
But there's a puzzle here. Each layer checks the next one before starting it, but who checks the first one? No one! The innermost layer, whether it is implemented in hardware or firmware, must be implicitly trusted for the rest of the system to be worthy of trust. It is therefore called a Root of Trust (RoT).
Roots of Trust must be carefully protected, because a compromise of the Root of Trust may be impossible to detect without specialized test hardware. One approach is to put the firmware that implements the Root of Trust into read-only memory that can’t be modified once the device is manufactured. That’s great if you know your RoT code doesn’t have any bugs, and uses algorithms that can’t go obsolete. But few of us live in that world, so, at a minimum, we usually must protect the RoT code with some simple hardware that makes the firmware read-only after it’s done its job, but writable during its startup phase, allowing for carefully vetted, cryptographically signed updates.
Newer processor chips move this Root of Trust one step back, into the processor chip itself: a hardware Root of Trust. Firmware boot code is usually stored in non-volatile flash memory, where it can be reprogrammed by the system manufacturer (and also by hackers); an RoT inside the processor is much more resistant to such firmware tampering and to hardware-based attacks.
Detect
With a reliable Root of Trust in place, we can arrange for each layer to check the next for tampering. This process can be augmented with remote attestation, in which the fingerprints (called attestation evidence) gathered by each layer during startup are collected and reported. We can't simply ask the outer application layer whether it's been hacked; any good hacker would ensure the answer is "No way! You can trust me!", no matter what.
Instead, remote attestation relies on a small piece of hardware, such as the Trusted Platform Module (TPM) defined by the Trusted Computing Group. This hardware collects evidence in shielded locations: special-purpose, hardware-isolated memory cells that the processor cannot modify directly. The TPM provides a protected capability that allows new information to be added to a shielded location while previously stored information cannot be changed. It provides another protected capability that attaches a cryptographic signature to the shielded location's contents, using a key known only to the Root of Trust hardware, called an Attestation Key (AK); that signature serves as evidence of the state of the machine.
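The "add but never change" property of a shielded location is achieved by the TPM's extend operation: the register's new value is a hash of its old value concatenated with the new measurement, so earlier measurements can never be silently replaced. A minimal sketch of that semantics (the stage names are illustrative):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: pcr_new = SHA-256(pcr_old || measurement).
    The only way to influence the register is to append history to it."""
    return hashlib.sha256(pcr + measurement).digest()

def boot_sequence(stages: list) -> bytes:
    pcr = b"\x00" * 32   # the register starts zeroed at reset
    for stage in stages:
        pcr = extend(pcr, hashlib.sha256(stage).digest())
    return pcr

good = boot_sequence([b"bootloader", b"os-kernel", b"application"])

# Replaying the same measurements reproduces the same value...
print(good == boot_sequence([b"bootloader", b"os-kernel", b"application"]))  # True

# ...but swapping in a tampered stage yields a different final value,
# and no later extend can ever restore the original.
print(good == boot_sequence([b"bootloader", b"hacked-kernel", b"application"]))  # False
```

Because the hash chain is order- and content-sensitive, a verifier that knows the expected stage fingerprints can recompute the final value and compare.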
Given these functions, the application layer has no choice but to report the attestation evidence accurately, as proven by use of the RoT's secret AK. Any attempt to tamper with the evidence would invalidate the signature provided by the AK. At a remote location, a verifier can then validate the signature and check that all the fingerprints reported line up with known, trusted versions of the device's software. These known-good fingerprints, called endorsements, must come from a trusted source, such as the device manufacturer.
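The quote-and-verify flow described above can be sketched end to end. This is a simplification under stated assumptions: the Attestation Key is modeled as an HMAC key held only by the RoT (a real TPM uses an asymmetric key so the verifier never holds the secret), and the evidence and endorsements are hypothetical.

```python
import hashlib
import hmac

ak = b"attestation-key-inside-rot"   # hypothetical; never leaves the RoT

def quote(evidence: list, key: bytes) -> tuple:
    """Device side: sign a digest of the collected evidence with the AK."""
    digest = hashlib.sha256(b"".join(evidence)).digest()
    return evidence, hmac.new(key, digest, hashlib.sha256).digest()

def verify(evidence: list, signature: bytes, key: bytes, endorsements: list) -> bool:
    """Verifier side: check the signature, then compare fingerprints
    against the manufacturer's known-good endorsements."""
    digest = hashlib.sha256(b"".join(evidence)).digest()
    expected = hmac.new(key, digest, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False                   # evidence was altered in transit
    return evidence == endorsements    # software matches trusted versions

endorsements = [hashlib.sha256(s).digest() for s in (b"bootloader", b"os-kernel")]

ev, sig = quote(endorsements, ak)               # a healthy device's report
print(verify(ev, sig, ak, endorsements))        # True
print(verify(ev[::-1], sig, ak, endorsements))  # False: evidence tampered
```

The two failure paths matter equally: a bad signature means the report itself can't be trusted, while a valid signature over unexpected fingerprints means the device is running software the manufacturer never endorsed.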
To verify that it's safe to turn on an IoT device, one can use an attestation and verification protocol provided by the Trusted Computing Group. Illustration: Guy Fedorkow, Mark Montgomery
In practice, the Root of Trust may comprise several separate mechanisms protecting individual functions such as boot integrity, attestation, and device identity. The device designer is always responsible for selecting the components most appropriate for the device and carefully integrating them, but organizations like the Trusted Computing Group offer guidance and specifications that can help considerably, such as the Trusted Platform Module (TPM) commonly used in many larger computer systems.
Remediate
Once an anomaly is detected, a wide range of remediation actions is available. A simple option is power-cycling the device or refreshing its software. Trusted components inside the device itself may also help, through authenticated watchdog timers or other mechanisms that force the device to reset if it can't demonstrate good health. Trusted Computing Group Cyber Resilience work provides guidance for these techniques.
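The authenticated-watchdog idea can be sketched briefly. In this simplified model, the health token stands in for the attestation-based proof a real authenticated watchdog would demand, and the timeout is shortened for illustration; real implementations live in hardware so malware can't disable them.

```python
import time

class AuthenticatedWatchdog:
    """Sketch: the device must refresh the watchdog with a valid health
    token before the deadline, or a reset is triggered. A compromised
    device that can't prove good health can't hold off the reset."""

    def __init__(self, timeout_s: float, token: bytes):
        self._timeout = timeout_s
        self._token = token
        self._deadline = time.monotonic() + timeout_s

    def kick(self, token: bytes) -> None:
        # Only a device that can present the health proof refreshes the timer.
        if token == self._token:
            self._deadline = time.monotonic() + self._timeout

    def expired(self) -> bool:
        return time.monotonic() > self._deadline

wd = AuthenticatedWatchdog(timeout_s=0.05, token=b"health-proof")
wd.kick(b"health-proof")
print(wd.expired())        # False: deadline was refreshed
time.sleep(0.06)
wd.kick(b"wrong-token")    # a compromised device can't refresh it
print(wd.expired())        # True: a reset would now be forced
```

The design choice worth noting is that the default outcome is recovery: doing nothing, or doing the wrong thing, leads back to a clean reset rather than continued compromised operation.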
The mechanisms outlined here have been available and used in specialized high-security applications for some years, and many of the attacks have been known for a decade. In the last few years, Root of Trust implementations have become widely used in some laptop families. But until recently, blocking Root of Trust attacks was challenging and expensive even for cyber experts in the IIoT space. Fortunately, many of the silicon vendors that supply the underlying IoT hardware now include these high-security mechanisms even in budget-minded embedded chips, and reliable software stacks have evolved to make Root of Trust defenses available to any designer who wants to use them.
While the IIoT device designer is responsible for providing these cybersecurity mechanisms, it's up to system integrators, who are responsible for the security of the overall service interconnecting IoT devices, to require these features from their suppliers and to coordinate the features inside the device with external resilience and monitoring mechanisms, taking full advantage of improved security that is now more readily available than ever.
Mind your roots of trust!