---
title: "Hold on to your hat and learn System Transparency in five minutes!"
date: 2020-10-12
---
What do we really know about the systems that run our critical applications?
_Not enough_ is probably a fair summary: much can go wrong between device reset
and execution of a user-land application. System Transparency helps you verify
that what you think is running remotely actually runs, and not, say, a modified
operating system that contains a secret backdoor. I will break it down
top-to-bottom after first briefly motivating the rationale and objective.
## Rationale and objective
Anyone in a position of power should probably be subject to a proportional
amount of transparency. It is an important safeguard that deters malicious
activities, while at the same time making it possible to fix honest mistakes.
Such a principle can of course be applied in real life, but I mainly refer to
the different components that compose a digital system: hardware, firmware,
operating systems, applications, and so forth. Generally I would say that _power
is decreased by transparency because most abuse can be detected_. For example,
it would be proportionate for Intel to open up their proprietary management engine
because it is powerful enough to
[hijack your system](https://www.wired.com/2017/05/hack-brief-intel-fixes-critical-bug-lingered-7-dang-years/).

The scenario to keep in mind is accordingly as follows. A remote server is
running a service somewhere that processes your data based on a policy. You
might have reason to believe that said policy is followed now, but will it be
in the future when
[intruders](https://www.eff.org/deeplinks/2020/07/after-weeks-hack-it-past-time-twitter-end-end-encrypt-direct-messages)
and
[law enforcement](https://www.eff.org/cases/apple-challenges-fbi-all-writs-act-order)
knock down the door? I, for one, would prefer if we could _verify_ that the
system in question works as intended (and not just trust that to be the case
blindly). The other benefit of such remote system verification is more subtle:
the service provider could use it to determine if intention matches deployment.
Of course there might be unknown bugs, but by making every part of the system
as transparent as possible it will be easier to find vulnerabilities and assess
trustworthiness.
## Breaking it down, top-to-bottom
The idea is to first make transparent what is allowed to run on a given system.
You can view this as the top-most layer, which represents an operating system
package with installed programs, configurations, and so forth. Thereafter, a
bottom layer needs to enforce that nothing other than the transparent operating
system package is allowed to run. Such enforcement is based on hardware
features that should be transparent as well.
### Reproducible and publicly auditable operating system packages
Suppose that we have an operating system package that we would like to deploy.
As a first step we need to
[build it reproducibly](https://reproducible-builds.org/), such that anyone can inspect
the source code and determine if the resulting package lives up to the claimed
promises. One issue that might be found, for example, is that interactive
system access is installed: pretty much anything could run after a
reconfiguration. Therefore, a transparent system should restrict arbitrary
access and provision updates as new operating system packages that, again,
build reproducibly. For those who are familiar with functional programming, this
is essentially an
[immutable infrastructure](https://web.archive.org/web/20200518230417/http://chadfowler.com/2013/06/23/immutable-deployments.html).
An independent benefit of such maintenance is that
[malware persistence](https://github.com/Karneades/malware-persistence#overview-of-often-and-less-often-used-persistence-mechanisms)
becomes trickier.
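
To make the verification step concrete, here is a minimal sketch (in Go, with
placeholder file names that are not part of System Transparency) of what a
third party could do to check reproducibility: rebuild the package
independently and compare its digest against the digest of the published
artifact.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

// digest computes the SHA-256 digest of a file.
func digest(path string) ([32]byte, error) {
	var sum [32]byte
	f, err := os.Open(path)
	if err != nil {
		return sum, err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return sum, err
	}
	copy(sum[:], h.Sum(nil))
	return sum, nil
}

func main() {
	// Placeholder file names: one artifact rebuilt by you, one published
	// by the provider. Neither name is defined by System Transparency.
	rebuilt, err := digest("ospkg-rebuilt.zip")
	if err != nil {
		fmt.Println(err)
		return
	}
	published, err := digest("ospkg-published.zip")
	if err != nil {
		fmt.Println(err)
		return
	}
	if rebuilt == published {
		fmt.Println("digests match: the package builds reproducibly")
	} else {
		fmt.Println("digest mismatch: investigate the build or the artifact")
	}
}
```
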
A reproducible operating system package serves a limited purpose unless it is
publicly available. Therefore, we should insert it into a
[transparency log](https://transparency.dev/).
This means that anyone can verify whether a package builds reproducibly, and
whether it contains, say, a secret backdoor that could be detected by source inspection.
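
Transparency logs are typically built as Merkle trees, so "is this package in
the log?" can be answered with a short inclusion proof that hashes up to a
signed tree head. The sketch below illustrates that check in a generic way; it
is a simplification and does not follow any specific log's hashing conventions
(real logs, such as RFC 6962-style trees, also domain-separate leaf and
interior hashes).

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// verifyInclusion recomputes a Merkle tree root from a leaf and the sibling
// hashes on its path. left[i] records whether the i:th sibling sits to the
// left of the running hash. Simplified: no leaf/node domain separation.
func verifyInclusion(leaf []byte, siblings [][]byte, left []bool, root []byte) bool {
	h := sha256.Sum256(leaf)
	current := h[:]
	for i, sibling := range siblings {
		var combined []byte
		if left[i] {
			combined = append(append(combined, sibling...), current...)
		} else {
			combined = append(append(combined, current...), sibling...)
		}
		next := sha256.Sum256(combined)
		current = next[:]
	}
	return bytes.Equal(current, root)
}

func main() {
	// Toy two-leaf tree: root = H(H(leafA) || H(leafB)).
	leafA := []byte("digest of operating system package A")
	leafB := []byte("digest of operating system package B")
	hA := sha256.Sum256(leafA)
	hB := sha256.Sum256(leafB)
	root := sha256.Sum256(append(hA[:], hB[:]...))

	// Proving that leafA is included only requires hB and the root.
	fmt.Println("leaf A included:", verifyInclusion(leafA, [][]byte{hB[:]}, []bool{false}, root[:]))
}
```
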
### Measured and remotely attested boot
Now we need to enforce that the publicly disclosed operating system packages
run on our servers and nothing else. At first glance this might sound daunting,
but today’s hardware platforms ship some pretty useful security features. For
example, there is usually a separate hardware domain for key management,
cryptographic hashing, Platform Configuration Registers (PCRs), and digital
signatures. It is possible to measure code, data structures, and configurations
into a PCR before execution to form a hash chain, such that all initial system
states can be aggregated into a single value. The system’s boot process can be
aborted if a measurement diverges from the expected value, e.g., because the
boot loader did not enforce transparency logging as required by the top layer.
It is also possible to sign PCR values and attest them remotely. In other
words, if these features work we can prove to a third party how the system
booted.
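
To make the hash chain concrete, here is a small sketch of how a PCR extend
works conceptually: each measurement is folded into the register by hashing the
old register value together with the digest of the measured component, so the
final value commits to the entire boot sequence and its order. The component
names and the use of SHA-256 below are illustrative, not a description of any
particular platform's PCR layout.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// extend models a PCR extend operation: the register's new value is the hash
// of its old value concatenated with the digest of the measured component.
// Order matters, so the final value commits to the whole boot sequence.
func extend(pcr [32]byte, component []byte) [32]byte {
	digest := sha256.Sum256(component)
	return sha256.Sum256(append(pcr[:], digest[:]...))
}

func main() {
	var pcr [32]byte // PCRs start at a known value (all zeros) after reset
	for _, component := range []string{
		"firmware",
		"boot loader configuration",
		"operating system package",
	} {
		pcr = extend(pcr, []byte(component))
	}
	// A verifier holding the same expected measurements can recompute this
	// value and compare it against a signed (attested) PCR quote.
	fmt.Printf("aggregated boot measurement: %x\n", pcr)
}
```
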
### Open source firmware and LinuxBoot
An immediate concern is that much trust is placed in the underlying hardware
platform. Naturally, this raises the question of whether such trust is misplaced. A
[talk by Ron Minnich](https://osseu17.sched.com/event/ByYt/replace-your-exploit-ridden-firmware-with-linux-ronald-minnich-google)
brings you up to speed on why the answer is probably "yes". Let us focus on
solutions instead: open hardware, firmware, and boot loaders. It is paramount
that these components are vetted thoroughly in the open because they may
compromise the system
[while running or before it is even started](https://securelist.com/mosaicregressor/98849/).
So, System Transparency implements a flavor of
[LinuxBoot](https://www.linuxboot.org/)
called
[stboot](https://github.com/system-transparency/system-transparency/blob/master/README.md#bootloader-stboot).
It can replace many of the later-stage UEFI components with a Linux kernel and a
user-land environment in Go, such that a subset of proprietary firmware is
removed in favor of an open source option that is safer and customizable. For
example, one possible customization is to enforce transparency logging as a
criterion for booting into the host operating system. It is possible to eliminate
UEFI altogether by re-flashing the firmware with
[coreboot](https://doc.coreboot.org/)
and specifying stboot as a payload. The TL;DR is that coreboot is (mostly) open
source firmware that does the bare minimum hardware initialization. It was
recently
[ported to a modern server platform](https://mullvad.net/en/blog/2019/8/7/open-source-firmware-future/).
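
As a rough sketch of what "enforce transparency logging as a criterion to boot"
could look like, the snippet below gates the boot decision on whether the OS
package digest is covered by a verified log entry. The types and helper names
are hypothetical, and the in-memory set of logged digests is a stand-in for
real inclusion-proof verification; stboot's actual policy format and code
differ.

```go
package main

import (
	"crypto/sha256"
	"errors"
	"fmt"
)

// bootPolicy is a placeholder for "these digests have verified inclusion
// proofs in the transparency log"; a real boot loader would verify the proofs
// and the log's signed tree head instead of trusting a pre-populated set.
type bootPolicy struct {
	loggedDigests map[[32]byte]bool
}

// allow decides whether a given OS package may be booted.
func (p bootPolicy) allow(osPackage []byte) error {
	digest := sha256.Sum256(osPackage)
	if !p.loggedDigests[digest] {
		return errors.New("refusing to boot: OS package is not transparency logged")
	}
	return nil
}

func main() {
	pkg := []byte("operating system package contents")
	policy := bootPolicy{loggedDigests: map[[32]byte]bool{
		sha256.Sum256(pkg): true,
	}}
	if err := policy.allow(pkg); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("OS package is logged; handing over control")
}
```
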
### Set-up ceremony and tamper-evident hardware
Assuming an open platform that enforces transparency logging as described
above, you can be somewhat sure that only said operating system packages run. The
problem is that you cannot easily know whether that assumption holds. I am not
claiming that there is a slam-dunk solution here, but measures can be taken to
reduce the risk of a broken setup. For example, assemble and install the
platform while witnessed live by several independent parties who write down
and publish a log book of events that occurred:
"[neutralized the management engine](https://github.com/corna/me_cleaner)",
"added open firmware with checksum XYZ", etc. We can also define some physical
security boundaries that, if breached, automatically activate defensive
mechanisms that preserve the system’s overall integrity after setup.
## Concluding remarks
The described System Transparency design shows how a service provider can
facilitate trust by engineering a system that is more trustworthy. I would like
to emphasize _more trustworthy:_ all of the applied techniques have merit on
their own, and if one part does not fit the use-case or current practice it
might be reasonable to cut it. For example, if you lease cloud servers that
only allow starting stboot from UEFI: so be it. Simply assume that there will
be no firmware or physical attacks for the time being. It is still a
significant improvement when compared to obscure operating system packages
because the attack surface and overall trust domain are reduced.
[The growing problem of malicious Tor relays in the cloud](https://medium.com/@nusenu/how-malicious-tor-relays-are-exploiting-users-in-2020-part-i-1097575c0cac)
could benefit from such a solution because a class of real-world attackers would
not see any traffic (if enforced by Tor). As another example: suppose your
interest is mainly in hardening your own internal infrastructure, and not so
much in making it transparent to everyone. It is not a strict requirement to
make the operating system package public; a hash is enough to convince
yourself that nothing else was allowed to run.
## Acknowledgments
Fredrik Strömberg provided valuable feedback on this story, which is sponsored
by my
[System Transparency](https://system-transparency.org/)
employment at Mullvad VPN.