Understanding Information Flow in Real Hardware with GLIFT

System developers need new methods to help them understand the security of their designs, but these properties are a function of the hardware and the software working together. As demonstrated by Spectre, Meltdown, and ZombieLoad, hardware and software can interact in subtle, surprising, and hard-to-predict ways. Since 2006 our team has been developing ways of analyzing, detecting, and preventing security issues at the hardware/software interface. One result of this extended research effort is Gate-Level Information Flow Tracking (GLIFT): the first method for analyzing, statically verifying, and dynamically managing the information-flow behavior of mixed hardware/software systems. The big advantages of this approach include:


Full Stack Reasoning

An analysis that cuts through all of the layers of digital abstraction, from the application, through the OS and firmware, down through machine code, to reveal the true behavior of the actual machine implementation rather than the behavior described by a (potentially out-of-date or imprecise) specification or abstraction.

Captures Timing Channels

We are able to unify and identify all potential digital flows of information, including side channels such as timing and storage channels, implicit flows, and of course all explicit flows. Anything that affects any of the bits in the system, or even the timing of those bits, is captured with our methodology.


Provable Static Analysis

We can prove the absence of vulnerabilities, something that could never be achieved with testing alone. This can be done either dynamically (by adding extra enforcement logic) or statically, which, as long as the design is sound, requires no extra GLIFT hardware to be inserted into the production design.

Build Real & Sound Systems

We can (and do!) build real systems that manage all of these information flows in a reasonable way to implement provable non-interference and one-way information flow policies. We have demonstrated this on numerous occasions by building cache controllers, USB hubs, networks-on-chip, and full general-purpose microprocessors.


Gate Level Information Flows

Because this project has progressed significantly over the years, we thought it would be useful to describe the arc this research has taken through UCSB, starting with the visionary work of Mohit Tiwari (then a Ph.D. student at UCSB). This work is collaborative at its core: Professors Fred Chong, Ted Huffmire, Ben Hardekopf, Ryan Kastner, and Tim Sherwood and their students all contributed significant new insights. It was also the inspiration for several commercial efforts in the space, including efforts from some of the original authors in the form of Tortuga Logic. Most recently it led directly into our work on hardware support for verifiable software as well. Please note that many other people around the globe have contributed to moving these and other aspects of hardware security forward who are not reflected in the list below; this is just the slice that is our own work. That said, we hope you find the high-level descriptions of the papers below helpful.

[Figure: a 2-input AND gate (left) and its shadow truth table (right)]

The simple example that started it all: tracking information flow through a 2-input AND gate. The shadow truth table (right) shows the interesting case when one input to an AND gate is low (e.g. not-secret) and the other is high (e.g. secret). The goal is to understand the “secretness” of the output (outL) given a, b, aL, and bL. An output should be secret if and only if it is “influenced by” secret inputs, meaning some change to secret input data could cause a change in the output. The gray arrows indicate the rows that have to be checked for each row on the right. For example, when a=1, b affects the output (rows 3 and 4 on the left). Hence rows 3 and 4 on the right mark outL as secret. These “secrecy” signals can then be used in static analysis or SAT solvers, or even (in some cases) realized as boolean logic built directly into hardware.
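The shadow truth table described above can be captured in a few lines of executable logic. Here is a minimal sketch in Python (a stand-in for the gate-level tracking logic, not the actual GLIFT tooling; the function names are ours):

```python
# Sketch of GLIFT shadow logic for a 2-input AND gate.
# a_t and b_t are the taint ("secret") labels of inputs a and b.
def and_gate(a, b):
    return a & b

def and_glift(a, b, a_t, b_t):
    # The output is secret only when a secret input can actually change it:
    # both inputs secret, or a secret while b = 1 (so a controls the output),
    # or b secret while a = 1.
    return (a_t & b_t) | (a_t & b) | (b_t & a)

# The case from the figure: a is not-secret, b is secret (a_t=0, b_t=1).
for a in (0, 1):
    for b in (0, 1):
        print(a, b, and_gate(a, b), and_glift(a, b, 0, 1))
# Only the rows with a = 1 mark the output secret, matching rows 3 and 4
# of the shadow truth table.
```

Note that when a = 0 (and a is not secret) the output is stuck at 0 no matter what b does, so no secret information reaches it.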


Contributions

Publication

Mohit Tiwari, Hassan Wassel, Bita Mazloom, Shashidhar Mysore, Frederic Chong, and Timothy Sherwood. Complete Information Flow Tracking from the Gates Up. Proceedings of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), March 2009. Washington, DC.

This paper is the first publication of our GLIFT ideas and formalizes the idea of hardware-level information flow tracking. We show how, given a circuit, you can generate a new circuit that describes how information flows through the original circuit. The method can be applied either exactly (which requires walking through the truth table) or with a one-sided error (which is compositional and sound: if we say there is no flow, then there is no flow, but false positives are possible). The generated tracking logic can either be built in along with the design or be used simply for testing purposes. If the logic is included with the final design (to assist in run-time information flow tracking) rather than used just for testing, the overhead is significant (around 1.7x), a number later papers bring down to only 2%. We demonstrate this technique on a very restricted and generally horrible little processor (for example, to avoid “taint explosion” it only supports predication; no general-purpose branching is allowed). However, this is the first paper to show that timing and storage channels can be unified at the gate level, and it set in place the information flow analysis that is the basis for all our future work.
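The "exact" method can be sketched as a brute-force walk of the truth table: the output is tainted if and only if some reassignment of the tainted inputs, with the untainted inputs held at their actual values, changes the output. This is an illustrative reimplementation, not the paper's shadow-logic generator:

```python
from itertools import product

def exact_taint(f, inputs, taints):
    # Output is tainted iff changing only the tainted inputs can flip
    # f's output, holding the untainted inputs at their actual values.
    base = f(*inputs)
    tainted_idx = [i for i, t in enumerate(taints) if t]
    for vals in product((0, 1), repeat=len(tainted_idx)):
        trial = list(inputs)
        for i, v in zip(tainted_idx, vals):
            trial[i] = v
        if f(*trial) != base:
            return 1
    return 0

# A 2-input AND with only b tainted:
print(exact_taint(lambda a, b: a & b, (0, 1), (0, 1)))  # 0: a = 0 masks b
print(exact_taint(lambda a, b: a & b, (1, 1), (0, 1)))  # 1: b reaches the output
```

The cost of this exactness is exponential in the number of tainted inputs, which is why the compositional, one-sided approximation matters in practice.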

 

Mohit Tiwari, Xun Li, Hassan M G Wassel, Frederic T Chong, and Timothy Sherwood. Execution Leases: A Hardware-Supported Mechanism for Enforcing Strong Non-Interference. Proceedings of the International Symposium on Microarchitecture (Micro), December 2009. New York, NY.

Here we propose a much better way of designing microprocessors, with explicit instructions to help the software manage the information flow of the underlying machine. The instructions implement an idea called "execution leases", through which regions of execution (in both space and time) can be tightly quarantined and their side effects tightly bounded. These leases establish an abstraction for containing information which can have many different uses, including compartmentalizing secrets (like cryptographic operations) or ensuring the integrity of critical software components (like a real-time scheduler). This both a) makes the complete information-flow properties of the machine fully explicit and available to the programmer and b) allows those properties (including properties as strict as non-interference) to be verified all the way down to the gate-level implementation of the design. We demonstrate the effectiveness of the resulting design by building a synthesizable prototype of the machine and hand-writing software to run on top of it. This is the first paper to show an architectural mechanism that can support policies as strict as non-interference and the first general-purpose programmable system to be shown secure with GLIFT.

 

Jason Oberg, Wei Hu, Ali Irturk, Mohit Tiwari, Timothy Sherwood, and Ryan Kastner. Information Flow Isolation in I2C and USB. Proceedings of the 48th Design Automation Conference (DAC), June 2011. San Diego, CA.

In any system, one of the points at which information and agents of different trustworthiness and secrecy must co-mingle is the communication hardware. Many high-assurance systems today have completely separate buses for "high" and "low" data to prevent interference, but increasingly engineers are pressured to combine functions in ways that are tough to argue are safe. In this paper we show that it is possible to retrofit a strong notion of security onto existing protocols in such a way that the resulting designs can be verified all the way down to the gate-level implementation. We demonstrate this by implementing and verifying an I2C hub (to demonstrate the core ideas) and a simplified USB controller (to show that the ideas can be applied to even modern complex protocols).

 

Wei Hu, Jason Oberg, Ali Irturk, Mohit Tiwari, Timothy Sherwood, and Ryan Kastner. Theoretical Fundamentals of Gate Level Information Flow Tracking. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD). To appear.

One of the questions left unanswered in the original GLIFT paper was the cost of precision. We knew that it was sound to compose our analysis (we should never get false negatives), but we also knew that composition would introduce false positives in certain cases. For example, an analysis of a MUX done on a gate-level implementation (the ANDs and ORs implementing the MUX), as opposed to the full truth table of the MUX, introduces a false positive when a "tainted" select signal is selecting between two "untainted" signals that hold the same value. Intuitively, the select signal can never affect the output and thus does not "taint" it. In this paper we formalize GLIFT and find that while it is possible to remove all false positives, the problem of generating the precise taint tracking logic is actually in co-NP (i.e. it’s hard). We further explore the tradeoffs between speed and precision and analyze their cost/effectiveness across a set of circuit benchmarks.
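The MUX false positive is easy to reproduce. The sketch below (illustrative Python, not the paper's logic generator) composes conservative shadow logic for AND, OR, and NOT over a gate-level MUX, out = (s AND a) OR (NOT s AND b). With a tainted select and two equal untainted data inputs, the composed analysis flags the output as tainted even though flipping the select can never change it:

```python
def and_t(a, b, at, bt):
    # Shadow logic for AND: a tainted input matters when the other input is 1.
    return (at & bt) | (at & b) | (bt & a)

def or_t(a, b, at, bt):
    # Shadow logic for OR: a tainted input matters when the other input is 0.
    return (at & bt) | (at & (1 - b)) | (bt & (1 - a))

def mux_glift_composed(s, a, b, st, at, bt):
    # Gate-level MUX: out = (s AND a) OR (NOT s AND b),
    # with shadow logic composed gate by gate.
    n, nt = 1 - s, st                     # NOT gate: taint passes through
    p, pt = s & a, and_t(s, a, st, at)
    q, qt = n & b, and_t(n, b, nt, bt)
    return or_t(p, q, pt, qt)

# Tainted select, both data inputs untainted and equal to 1:
print(mux_glift_composed(1, 1, 1, 1, 0, 0))  # 1 -> false positive
# A truth-table analysis of the whole MUX would report 0 here, since
# flipping s cannot change the output when a == b.
```

Composition stays sound (it never misses a flow) but, as the example shows, it can over-approximate; removing that imprecision is exactly the expensive problem this paper characterizes.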

 

Mohit Tiwari, Jason Oberg, Xun Li, Jonathan K Valamehr, Timothy Levin, Ben Hardekopf, Ryan Kastner, Frederic T Chong, and Timothy Sherwood. Crafting a Usable Microkernel, Processor, and I/O System with Strict and Provable Information Flow Security. Proceedings of the International Symposium on Computer Architecture (ISCA), June 2011. San Jose, California.

One of the concerns when we did the execution leases paper was whether we could really build a full system that could be verified in this way. In this paper we address those concerns by designing, testing, and statically verifying the information-flow security of a hardware/software system complete with support for unbounded operation, inter-process communication, pipelined operation, and I/O with traditional devices. The resulting system is provably sound even when adversaries are allowed to execute arbitrary code on the machine, yet is flexible enough to allow caching, pipelining, and other common-case optimizations. We implement a micro-kernel for our microprocessor that uses leases to manage shared buffers for provably safe interprocess communication, and we introduce a simple form of abstract interpretation (via *-logic) that allows us to statically verify the design more quickly (which means no special GLIFT logic need ever be introduced into the final design; it can be used for analysis alone).

 

Xun Li, Mohit Tiwari, Jason Oberg, Vineeth Kashyap, Frederic T. Chong, Timothy Sherwood, and Ben Hardekopf. Caisson: A Hardware Description Language for Secure Information Flow. 32nd ACM Conference on Programming Language Design and Implementation (PLDI), June 2011. San Jose, California.

While the techniques described above can take a description of hardware (e.g. a netlist) and find security issues, they don’t provide any help to designers who are trying to write secure hardware in the first place. In our work on Caisson, we take language-level techniques for secure information flow and apply them to create a new Hardware Description Language (HDL) named Caisson. Our goal was to enable digital hardware to be written in a way that is guaranteed to be statically verifiable as secure. This should enable the quick creation and modification of hardware designs, yet should always result in an implementation that is secure by construction (there are no dynamic checks added, or even allowed, by the language). Caisson employs two novel features on top of finite state machines, nested states and parameterized states, and we demonstrate the utility of the language through the design of an information-flow secure processor. Two of the problems left open by this work are how to support more general hardware design practices and how limited dynamic checks might be implemented.

 

Hassan Wassel, Ying Gao, Jason Oberg, Ted Huffmire, Ryan Kastner, Frederic Chong, and Timothy Sherwood. SurfNoC: A Low Latency and Provably Non-Interfering Approach to Secure Networks-On-Chip. Proceedings of the International Symposium on Computer Architecture (ISCA), June 2013. Tel Aviv, Israel.

Networks-on-chip (NoCs) connect multiple different components in everything from multiprocessors to FPGAs. When “cores” are used to run separate pieces of the system, it is useful to strongly partition the network (in time and space) to keep critical pieces safe from inference or inspection. However, as the number of partitions or the asymmetry in partition bandwidth allocations grows, this partitioning can really hurt performance. Building on our gate-level information flow analysis methods, we develop an on-chip network that significantly reduces the extra delay introduced by time-partitioning. By carefully scheduling the network into waves that flow across the interconnect, data from different partitions can be quickly carried by these waves yet still be strictly and provably partitioned. As the network grows in size, as the number of domains increases, and as the asymmetry between domains becomes larger, the benefit of this surf-scheduled network over TDMA continues to increase. In many cases SurfNoC can reduce the latency overhead of implementing cycle-level non-interference by up to 85%.

 

Jason Oberg, Sarah Meiklejohn, Timothy Sherwood, and Ryan Kastner. A Practical Testing Framework for Isolating Hardware Timing Channels. The Conference on Design Automation and Test in Europe (DATE), March 2013. Grenoble, France.

Back in 2013 it was hard to get people to really care about timing channels; time and again we were told that “timing channels are not part of our threat model” by the largest companies. Our work on GLIFT not only unifies timing channels with other channels by looking at the hardware implementation, it demonstrates clearly that they are really the same thing. However, it ends up being a huge pain to deal with these information flows (as many in industry are now finding out the hard way). To ensure that two parts of a system are non-interfering, it has to be shown that one part can have absolutely no effect on the other. There are many sources of interference, ranging from directly modifying functionality to simply changing the timing of events. For example, two parts of a system that opportunistically share a cache might never interfere with one another’s data, but they are not non-interfering because each can affect the time it takes the other to perform its duties. In this paper we try to tease timing apart from other types of channels, since everyone kept telling us they did not care about timing, but you end up jumping through a lot of hoops to get there. Important lesson: timing channels are real channels that, as we found out later, cause real vulnerabilities.

 

Xun Li, Vineeth Kashyap, Jason Oberg, Mohit Tiwari, Vasanth Rajarathinam, Ryan Kastner, Timothy Sherwood, Ben Hardekopf, and Frederic Chong. Sapper: A Language for Hardware-Level Security Policy Enforcement. Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), March 2014. Salt Lake City, Utah.

Learning from our earlier language work with Caisson, we were eager to find a way to build secure hardware in less draconian ways. In this work we describe Sapper, an HDL for security-critical hardware that uses a combination of static analysis and automatic insertion of dynamic checks in the resulting hardware to provably enforce a given information flow policy. The Sapper language is an extension of a synthesizable subset of Verilog, and it generates synthesizable Verilog that is guaranteed to meet a specified security policy. The semantics of Sapper ensure that the security properties of the resulting system are statically guaranteed, even though the security checks are executed dynamically. During the generated system’s execution, these dynamic checks intercept any information flow that violates the security policy and replace the offending operation with one that is guaranteed to be secure.
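The flavor of such a dynamic check can be sketched as follows. This is our own illustration in Python, not Sapper's generated Verilog, and the names are hypothetical; it shows a write being permitted or squashed based on a two-level label comparison:

```python
# Illustrative Sapper-style dynamic check (our sketch, not the paper's code).
# Labels form a two-level lattice: 0 = Low, 1 = High; Low may flow up to High.
def checked_write(dst_label, src_label, dst_value, src_value):
    if src_label <= dst_label:
        return src_value   # flow permitted: the write goes through
    return dst_value       # offending write replaced: destination unchanged

print(checked_write(1, 0, 5, 7))  # 7: Low data may be written into a High register
print(checked_write(0, 1, 5, 7))  # 5: High data into a Low register is squashed
```

In the real system such checks are hardware inserted by the compiler, so the policy holds no matter what the surrounding logic does at run time.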

 

Wei Hu, Dejun Mu, Jason Oberg, Baolei Mao, Mohit Tiwari, Timothy Sherwood, and Ryan Kastner. Gate Level Information Flow Tracking for Security Lattices. ACM Transactions on Design Automation of Electronic Systems (TODAES), Volume 20, Issue 1, November 2014.

Most of the work on GLIFT considers only a relatively simple scenario where the world can be divided into two classes: High (e.g. secret) and Low (e.g. not-secret). This model, while simple, is really useful because it can cover both secrecy and integrity and can even be iteratively applied to cover more complex analyses by breaking things down into pairwise relationships. In this paper we explore what happens if you extend GLIFT to directly support more complex multi-level security lattices. Specifically, we show how multilevel security labels can be propagated through arbitrary Boolean circuits, and we draw some interesting and unexpected connections between information flow tracking and the well-studied problem of glitching (or, more precisely, static logic hazards). Furthermore, we show that if the logic is analyzed as a BDD, the information flow analysis can be done with complete precision and very little effort.
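As a rough illustration of the multilevel idea (a conservative sketch of one possible propagation rule, not the construction derived in the paper), labels from a linear lattice can be propagated through an arbitrary Boolean function by joining the labels of every input that can influence the output under some setting of the other inputs:

```python
from itertools import product

# Hypothetical three-level linear lattice; join is max.
LOW, MID, HIGH = 0, 1, 2

def propagate_label(f, inputs, labels):
    # Conservative rule: the output label is the join (max) of the labels
    # of every input that can flip f under some setting of the other inputs.
    out = LOW
    n = len(inputs)
    for i, lab in enumerate(labels):
        others = [j for j in range(n) if j != i]
        for vals in product((0, 1), repeat=len(others)):
            trial = list(inputs)
            for j, v in zip(others, vals):
                trial[j] = v
            flipped = list(trial)
            flipped[i] = 1 - flipped[i]
            if f(*trial) != f(*flipped):
                out = max(out, lab)   # join in the lattice
                break
    return out

print(propagate_label(lambda a, b: a & b, (1, 1), (MID, HIGH)))  # 2: both matter
print(propagate_label(lambda a, b: a, (1, 0), (MID, HIGH)))      # 1: b never matters
```

In the two-level case this collapses back to the usual tainted/untainted rule; the paper's contribution is doing this propagation precisely and efficiently for real lattices and circuits.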

 

Nestan Tsiskaridze, Lucas Bang, Joseph McMahan, Tevfik Bultan, and Timothy Sherwood. Information Leakage in Arbiter Protocols. Proceedings of the International Symposium on Automated Technology for Verification and Analysis (ATVA), October 2018. Los Angeles, CA.

While this paper is not GLIFT, it is included because it points in new directions for the field. All of the above work is concerned with the yes/no question: is it possible for this design to leak information? While we now know we can build a machine that leaks absolutely nothing, the pressure to share resources is continuous and immense, and introducing that sharing almost always (unless incredibly carefully managed) leads to some leakage. This raises the question: how much information does this design leak? Answering it requires an entirely new and different theory, and here we explore one new approach to automatically identifying and quantifying information leakage, specifically in the context of arbiters. We describe an approach based on the symbolic execution of hardware to extract constraints relating adversary observations (what the attacker can “see”) to victim requests (what the person under attack is “doing”). We then use model counting constraint solvers to quantify the information leaked. We present enumerative and optimized methods of exact model counting and apply them to a set of nine different hardware arbiter protocols to see how much they leak relative to one another. The hope is that this and other follow-on work will help us all deal with information leaks precisely and quantitatively.
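A toy version of the counting idea (our sketch, not the paper's symbolic-execution framework) makes the connection concrete: enumerate the attacker-distinguishable observations over all secret behaviors, and bound the leakage by the log of that count. The arbiter and function names below are illustrative:

```python
from math import log2

def priority_arbiter(req_victim, req_attacker):
    # Fixed-priority arbiter: the victim always wins; the attacker
    # observes only whether its own request was granted.
    if req_victim:
        return 0                  # attacker not granted
    return 1 if req_attacker else 0

def leaked_bits(observe, secrets, attacker_input):
    # With a uniform secret, the leakage is bounded by
    # log2(number of distinct attacker observations).
    observations = {observe(s, attacker_input) for s in secrets}
    return log2(len(observations))

print(leaked_bits(priority_arbiter, [0, 1], attacker_input=1))  # 1.0 bit
print(leaked_bits(priority_arbiter, [0, 1], attacker_input=0))  # 0.0 bits
```

By requesting the bus, the attacker learns the victim's request bit exactly (1.0 bit leaked); without requesting, it observes nothing. The paper does this kind of counting over symbolic constraints rather than brute-force enumeration, which is what makes it scale to real arbiter protocols.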