Moving Beyond Threatbutt

Good morning, y’all, and thanks for the opportunity to speak to you
 today.

When I submitted my talk proposal some months ago, I imagined that my
talk would go something like this:

  • here’s a bunch of criticism that’s been leveled at CTI;
  • some of that criticism is wrong, here are rebuttals;
  • some of that criticism is right, here are proposals for how we
    should respond as a community.

In the interim, after much consideration, my thinking on this subject
has evolved and hence the talk I’m giving today bears little
resemblance to the talk I imagined giving when I submitted my
proposal. I promise I will get into technical matters related to CTI
in a few minutes, but before we get there, please allow me to try and
frame this discussion in a context you may not have fully considered.

A few weeks ago, I was speaking at another infosec event and after my
talk, a vendor presented their view of the threat landscape in 2016.
It was a perfectly fine talk (in fact, better than average) but it was
like 95% of the infosec talks I’ve heard. To wit, everything is
vulnerable, the world is going to hell in a handbasket, and hence you
should buy our black box or retain us to pentest your network.

In the course of this talk, the speaker made a joke about the
impending 32-bit Unix time rollover in 2038, and it started me
imagining what a “2039 threat landscape” talk would look like. And to be honest,
I cannot imagine that it would bear any resemblance to the typical
infosec talk you hear today.

Fear-driven narratives are compelling and it’s easy to get sucked into
them. So, here’s the scary part of the talk.

The internet is rickety. Imagine for a moment that you’re taking a
hike in the forest and come to a stream blocking your path. You step
gingerly from one stone to another, carefully testing each next step
before shifting your full weight to the next stone. As a civilization,
we’ve managed to place our center of gravity on an unsteady rock
called the internet and there’s no way back. (Well, there may be but
it’s somewhat nightmarish to consider.)

The attackers currently have the advantage. We can’t possibly find all
the software bugs faster than the attackers can, much less patch them
in time. We don’t even know how much of a difference patching makes in
the grand scheme of things. Consider Bruce Schneier’s 2014 article in
The Atlantic, in which he ponders the problem of zero days and
examines whether software vulnerabilities are sparse or
plentiful. We lack sufficient data to say with certainty, but the data
we do have suggests that the number of software bugs we don’t know
about is orders of magnitude greater than the number we do know about.

Even supposing we could clone ten thousand Dan Bernsteins and Meredith
Pattersons to rewrite our software stacks, the installed base problem
would still bite us on the ass. Imagine, if you will, that a study was
published proving beyond a doubt that asphalt causes cancer. It would
still take decades to replace all the world’s roadways. Substitute
vulnerable embedded systems for asphalt and p0wnage for cancer and the
metaphor holds. This is the situation we confront. Metcalfe’s Law
posits that the value of a network is proportional to the square of the
number of connected endpoints. In a world of ubiquitous vulnerable
systems, many embedded and/or effectively unpatchable, the security
challenge appears to scale in rough parallel.

So here we find ourselves, perched precariously on a shaky foundation,
constantly shifting our center of gravity to avoid a perilous fall.

Here’s the good news: the worst case scenario almost never happens!

It’s worth noting that the situation we find ourselves in today is
hardly a novel one. Throughout human history, the rate of
technological change has outpaced culture’s ability to adapt. There
are plenty of examples. If you’ll permit me to play fast and loose
with history, consider the following.

Gutenberg invents the printing press, suddenly folks all over Europe
start reading the Bible for themselves, scads of new religious sects
emerge, and we get the Thirty Years’ War.

Flash forward to the fin de siècle, and rapid advances are being made
in science and industry. Ford gives us the assembly line process and
the Model T. Haber gives us the Haber–Bosch process for fixing
atmospheric nitrogen into ammonia, thereby launching a revolution
in agriculture. The Wright Brothers give us the airplane. But in short
order all this technological advancement leads to the horrors of WWI:
total industrialized warfare, gas clouds looming over the tank-strewn
trenches of Europe.

Curie, Bohr, Fermi, Heisenberg, et al peel back the curtains to unveil
the undiscovered country lurking inside the atom, leading to
earth-shaking advances in medicine and energy production, but also to
the horrors of Hiroshima, Nagasaki, and the Cold War.

I could go on, but this is an infosec talk, not a history lecture. The
point is, there’s this pattern of technologically-induced disruption
leading into a period of societal adjustment. Following the
aforementioned convulsions, things more or less settled down. People
still kill and die for the written word. Nation-states still maintain
WMD stocks (nukes, chemical, biological, etc.). But in general, our
cultures have adapted and life goes on.

So what would a “Threat Landscape: 2039” talk look like? Well, I can
see two mutually exclusive alternatives.

One is that something awful has happened, leading us to decide to
unplug from the networks and go back to something approaching the
paper-driven world we had before. I’m reminded of a scene from the
miniseries that rebooted Battlestar Galactica. The Twelve Colonies
have just been hit by a devastating surprise attack by the Cylon
enemy, just one Battlestar survives, and the crew are frantically
trying to calculate an FTL jump to safety. At one point a crew member
helpfully suggests to the ship’s commander that if they just
interconnected the ship’s navigational computer with the other onboard
computers, they could make the FTL jump calculations much faster. The
commander, having survived the previous Cylon war, barks back in anger, “Many
good men and women lost their lives aboard this ship because someone
wanted a faster computer to make life easier. …I will not allow a
networked computerized system to be placed on this ship while I’m in
command. Is that clear?”

So this is the dark view of a hypothetical “Threat Landscape: 2039”
talk. (Some researchers refer to this possible outcome rather cutely
as “cybermalaise”.)

The other possibility I see (and the one I would argue, perhaps
optimistically, is the more likely scenario) is that by that point we
have undergone a phase transition vis-a-vis cybersecurity. Certainly a
huge part of that will involve advances in technical controls and a
shift towards better IT hygiene, but based on the earlier historical
analogies, I think by far the most significant contributor towards a
positive outcome in 2039 will be culture adapting to catch up with
 technology.

As geeks, we tend to obsess about how cool tech like crypto,
threat-sharing, etc. is going to save the world, but in fact it is
advances in meatspace that will make the difference. At some point,
the majority of nation-states will come to see that it is in their
mutual interest to co-exist peacefully in cyberspace and increasingly
collaborate to prosecute bad actors. We’ll see treaty arrangements
vis-a-vis cyberspace along the lines of existing nonproliferation
agreements. It may well take a series of unfortunate events, the likes
of which we have not yet seen, to bring about the necessary shift in
thinking, but I believe that if we can just stay calm and do our best
to hold shit together, we will live to see such a positive sea change.

I am making a leap of optimism. It is absolutely not my intention to
handwave around the challenges we face; striving for intellectual
honesty is one of my core values. To stand here before you today and
offer a message of hope (especially after spending the past several
days digging into CVE-2015-7547) is extraordinarily difficult. But as
a species we are resilient. We have overcome far more daunting
challenges in the past. We can and we will prevail because we must. We
owe that much to future generations.

The advent of CISA in the US (and, to some extent, the NIS Directive in the EU)
provides liability top-cover for sharing data. I believe that these
are necessary but insufficient to drive widespread sharing of CTI.
Additional incentives are needed and I believe that what will
ultimately drive this will be insurance underwriters tying
cyberinsurance premium rates to active participation in
information-sharing communities.

Now before I shift into talking specifically about CTI, I feel obliged
to note that I’ve been tossing “cyber” around quite liberally so
far. If you’ve been following the rules of the drinking game, you’re
probably suffering acute liver failure at this point in the talk. My
 apologies.

Language matters. Narrative matters. Framing matters. People exist in
and live by stories. While we enlightened few may laugh at the term
“cyber”, for those outside our circle it embodies the mysterious,
ethereal world of ones and zeros. We all tend to think about new tech
in terms of “it’s the x of y”, à la “Airbnb is the Uber of hotels”.
Most policymakers and diplomats are old people. The more we age, the
more we tend to relate new things to the familiar of our youth (à la
cars as mechanical horses, or whatever). Keep that tendency in mind:
while we should be careful with our metaphors, if senior
decision-makers are more comfortable talking about “cyber-whatever”,
don’t be shy about embracing their language to reframe the conversation
and steer them toward a more technically correct understanding of the
issues at hand.

With that aside, I’ll do my best to avoid using the term “cyber” from
here on out. Hopefully you’re still sober enough to continue!

The title of my talk is “Moving Beyond Threatbutt”. In case you’ve
been living under a rock, Threatbutt is a hilarious site that
parodies cyber-intelligence vendors. One of the criticisms
that’s been leveled at the concept of CTI is that it’s basically
Antivirus-NG. And this is a fair criticism. The infosec industry has a
long history of vendors hyping the panacea of the week. Antivirus
wasn’t a panacea, nor was SIEM, nor is CTI. But these are all tools in
our toolbox; we should use them all as effectively as possible and
strive to constantly improve the quality of our tools.

Now to the question of CTI, which we’ll examine in two contexts: one,
in terms of the OASIS standards, and two, as a tool fit for addressing
certain problems.

(I’m well aware that there are other CTI standards competing with the
OASIS ones and will speak to that later. But the rest of my talk will
have a distinct STIX/TAXII bias.)

Last June, during the 2015 FIRST conference in Berlin, something
momentous happened. True to his promise, Richard Struse relinquished
DHS control of STIX, CybOX, and TAXII to an international standards
body. Whereas MITRE (under DHS oversight) previously determined the
direction of the standards, since the transition to OASIS the entire
community of interest controls the standards via an open and
democratic process.

As I’m a co-chair on the CybOX technical subcommittee, I’m conscious
of the fact that folks tend to forget about CybOX and just talk about
STIX. All the vendor marketing talks about “STIX/TAXII”. I’ll go with
that as a shorthand for the purposes of this talk. From here on out,
unless otherwise specified, whenever I say “STIX” I’m actually
referring to both STIX & CybOX.

STIX is far from the only game in town. I’m a long-time fan of
OpenIOC. It does one thing well and from v1.1 provides a very
nice mechanism for flexible extension of the spec via Parameters.
OpenTPX has some really nice features, too. It boasts a nifty
query language, flexible extensibility via a similar mechanism to
OpenIOC, and incorporates useful mechanisms for scoring indicators
and aging them over time. I’m less familiar with some of the other
standards. I’m a bit rusty on IODEF and my knowledge of
Facebook ThreatExchange is based on a somewhat cursory review of
the spec.
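
Purely to make the scoring-and-aging idea concrete, here’s a toy
sketch in Python of decaying an indicator’s score as it ages. To be
clear, this is not OpenTPX’s actual scoring model; the half-life and
field names are my own assumptions, for illustration only.

    import math
    from datetime import datetime, timezone

    def aged_score(base_score: float, first_seen: datetime,
                   half_life_days: float = 7.0) -> float:
        """Decay an indicator score exponentially with age.

        Toy illustration only; real schemes (OpenTPX, commercial TIPs)
        define their own models. The one-week half-life is an assumption.
        """
        age_days = (datetime.now(timezone.utc) - first_seen).total_seconds() / 86400
        return base_score * math.exp(-math.log(2) * age_days / half_life_days)

    # An indicator first seen three half-lives ago decays from 80 to 10.
    # (Result here depends on when you run it, of course.)
    print(aged_score(80, datetime(2016, 3, 1, tzinfo=timezone.utc)))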

Suddenly the famous XKCD cartoon about how standards proliferate
springs to mind.

This is my first foray into the world of standards bodies. To those of
you bearing battle scars from participating in standards groups, some
of what I say will doubtless strike you as naive. OpenIOC and IODEF
precede STIX. There were reasons DHS decided to kick-start a new
standard, some technical and (I assume) some political. I wasn’t in
the room when that decision was taken, so I cannot speak to that.
OpenTPX, Facebook ThreatExchange, etc. were created after STIX. Based
on my experience, having been a part of the STIX community for quite a
while, I’m sure that the folks at Facebook and LookingGlass rightly
identified shortcomings in STIX and decided that it was more pragmatic
to create their own proprietary standards in order to solve problems
today rather than to be blocked waiting on MITRE. (Not meant as a
knock against my MITRE friends, btw, but it is what it is.)

Over in the OASIS CTI world, we’re well aware that STIX isn’t perfect
and are working strenuously to have draft specs for CybOX 3.0, STIX
2.0, and TAXII 2.0 available for public comment by the end of July.

In a perfect world, the folks who got frustrated with STIX and created
their own proprietary standards would come join us in OASIS now and
help to ensure that the new major revisions of STIX et al address
those same shortcomings. As this isn’t a perfect world, over on the
OASIS side we’re looking to those other standards for inspiration.

I’m reminded of the nightmarish world of text-encoding schemes before
UTF-8 became ubiquitous. There’s still plenty of iso-8859 floating
around out there but for the most part the world uses UTF-8 and is
much the better for it. I’ll draw the analogy to STIX. I don’t believe
for a second that there will ever be one universal standard for CTI
but I believe that in the near term STIX will be as ubiquitous as
UTF-8.

STIX as it stands today is a very producer-oriented language, which is
to say that it’s much easier to produce STIX output which is schema
valid than it is to write code to parse every conceivable valid STIX
input. Currently there are too many ways of doing the same things, the
standard is vague on some points, and there’s an awful lot of
indirection and optionality. These are the main issues we’re trying to
address in the upcoming major revisions of STIX and CybOX.

Our goal is to make the languages more Pythonic, insofar as there will
generally be just one clear way of doing things, reducing the level of
idiomatic variance, and hence making it much easier to rigorously
parse any valid input. After all, there’s no point in promoting a
lingua franca unless everybody can easily speak and be understood.

But I’m not going to put you to sleep droning on about the upcoming
standards. Let me just hit a few high points and move on. (Those of
you wanting to deep-dive, feel free to hit me up during the hallway
 track.)

  • We’re moving from XML to JSON+JSON Schema
  • TAXII 2.0 is becoming RESTful
  • We’re making a new Relationship top-level object to allow better
     graph-analysis.
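
To make those bullet points a bit more concrete, here is a rough,
purely illustrative sketch (expressed as Python dictionaries, since
the draft specs are still in flux) of JSON-serialized objects linked
by a top-level Relationship object. The field names and the pattern
syntax are my approximations of the direction of travel, not the
finalized STIX 2.0 spec.

    import json
    import uuid

    # Illustrative only: objects get "type--UUID" style ids and are
    # linked explicitly via Relationship objects.
    indicator = {
        "type": "indicator",
        "id": "indicator--" + str(uuid.uuid4()),
        "pattern": "[domain-name:value = 'bad.example.com']",  # pattern syntax is illustrative
        "valid_from": "2016-03-01T00:00:00Z",
    }

    malware = {
        "type": "malware",
        "id": "malware--" + str(uuid.uuid4()),
        "name": "SomeRAT",
    }

    # Promoting Relationship to a top-level object turns a bag of
    # objects into an explicit graph you can traverse and analyze.
    relationship = {
        "type": "relationship",
        "id": "relationship--" + str(uuid.uuid4()),
        "relationship_type": "indicates",
        "source_ref": indicator["id"],
        "target_ref": malware["id"],
    }

    print(json.dumps([indicator, malware, relationship], indent=2))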

Now a brief word about CybOX. CybOX isn’t sexy, but it’s the
foundation STIX is built upon. Things like IP addresses, file hashes,
Windows registry keys, HTTP session data, etc. CybOX is a language
that allows you to express facts about, well, cyber stuff. A STIX
Indicator builds on top of one or more CybOX facts (or Observables, as
we call them) to allow you to assert, “If you see these things, that’s
probably bad.”
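
As a rough illustration of that split (again in Python, with field
names invented for the example rather than taken from the actual CybOX
schema), an Observable simply states a fact, while an Indicator wraps
one or more facts in the assertion “this is probably bad”:

    # Observables are plain facts: things that were seen, with no judgment.
    observed_file = {
        "object_type": "File",      # field names are illustrative only
        "file_name": "invoice.pdf.exe",
        "hashes": {"MD5": "d41d8cd98f00b204e9800998ecf8427e"},  # placeholder hash
    }
    observed_domain = {
        "object_type": "DomainName",
        "value": "bad.example.com",
    }

    # An Indicator layers an assertion and context on top of the facts.
    indicator = {
        "title": "Dropper plus its C2 domain",
        "observables": [observed_file, observed_domain],
        "composition": "AND",
        "confidence": "Medium",
    }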

Recently I went back and revisited my first post to the CTI mailing
lists from back in 2013. I was working on a project at Splunk, trying
to parse all of the CybOX types, and looking to the community for
feedback as to which types were most frequently used so as to
prioritize my development efforts. Back then I couldn’t get a decent
 answer.

Flash forward to 2015, Ivan and I are elected CybOX co-chairs, and
we’re confronting the same question in terms of how best to prioritize
our refactoring effort. One of the major challenges of CTI is that for
all the talk of information-sharing, most of the sharing goes on
inside of closed communities and secret squirrel clubs. So we created
a tool called cti-stats that these closed communities could run
against their CTI repositories to collect counts of STIX and CybOX
types. Having aggregated data from a number of major ISACs, ISAOs,
CERTs, and other CTI providers, some interesting patterns emerged.
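
I won’t walk through the actual tool here, but purely as a sketch of
the idea, counting object types across a repository boils down to
something like the following. For simplicity this assumes a directory
of JSON exports where each object carries a “type” field; the real
repositories are mostly STIX XML, and the real tool does more than
this.

    import json
    from collections import Counter
    from pathlib import Path

    def count_types(repo_dir: str) -> Counter:
        """Tally object types across a directory of JSON CTI exports.

        Toy sketch of the cti-stats idea, not the actual tool: it assumes
        each *.json file holds a list of objects with a "type" field.
        """
        counts = Counter()
        for path in Path(repo_dir).glob("*.json"):
            for obj in json.loads(path.read_text()):
                counts[obj.get("type", "unknown")] += 1
        return counts

    if __name__ == "__main__":
        for obj_type, n in count_types("./cti-repo").most_common():
            print(f"{obj_type}: {n}")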

Overwhelmingly, people are sharing Indicators; based on our
measurements, we’re talking about 96.47% of the STIX objects out there
in the wild. Of the 88 existing CybOX Observable types, we’ve only
seen 17 being actively used. Of these, 97.8% are Address, DomainName,
File, and URI. Which raises the question: are these four observable
types so dominant because they address the primary pain points? Would
people like to use other observable types more but don’t because of
issues with how those types are currently defined? Or is this simply
indicative of the general level of maturity within the wider CTI
ecosystem?

Feel free to review the numbers yourself. They’re
freely available in aggregate form and you can draw your own
 conclusions.

Reviewing the STIX data, I was frankly shocked to see little to no
evidence of people sharing higher level constructs associated with
incidents (i.e., Incident or Exploit Target) or attribution (i.e.,
Campaign or Threat Actor). After all, the ability to put such
higher-level context around IOCs is one of the main selling points for
STIX itself.

But you can’t measure what you can’t see. Based on a number of private
conversations I’ve had, I believe that people are sharing this sort
of higher-level context but doing so through bilateral trust
relationships rather than broadcasting to their wider sharing
communities. The plural of anecdote, as they say, is not data, but it
certainly appears that this is the case.

Which raises some larger questions: what are the obstacles to scaling
up trust, and what problems are we actually trying to solve with CTI? I see
three basic problem spaces:

  • detecting active compromise and doing incident response;
  • proactively preventing compromise through indicator-based
     blacklisting;
  • gaining insight into who is targeting us and what motivates them.

There’s clearly a spectrum of maturity across different organizations.
It’s been frequently said that in order to leverage CTI, you have to
already be doing security fundamentals well, you must be this tall to
 ride:

  • know everything on your network
  • have a clear baseline of what normal activity looks like
  • have solid telemetry from your network and endpoints
  • know what your most critical assets are and focus controls around
     them
  • have a SIEM-like system for correlating the aforementioned telemetry
    and searching that for IOCs

That’s easy to say but it’s not easy to do. I hope that the majority
of companies and institutions are now sufficiently aware of the need
to improve their security posture but based on my own observations I
posit that most struggle to achieve the basics I just enumerated.

Recently I spent a day whiteboarding with the SOC team at a major bank
you’ve all heard of. I was trying to understand their entire operation
and help them assess where Soltra Edge could provide value. At one
point, I asked them, “What do you have for endpoint telemetry?
Bit9+Carbon Black? Tanium? Google GRR?” And they were like, “We have
nothing. We can’t afford it.” And I was like, “Seriously, guys, you
literally have a license to print money and you can’t afford endpoint
telemetry?” So it’s not only the little guys.

The 2015 DBIR included a fascinating analysis of the useful
lifespan of indicators, which suggested that in terms of proactively
blacklisting C2 associated with active campaigns, the vast majority of
indicators are useful for a day or less. Assuming that the DBIR
analysis is correct, an obvious corollary is that the majority of IOCs
out there are only useful for detecting and responding to fait
accompli compromise. But if you’re been following the DBIR over the
last few years, it’s clear from the data that most organizations take
a very long time to detect breaches.

That supports my assertion that the vast majority of companies and
institutions have a low level of security maturity. Now, given that,
if I’m a high-maturity organization and I have some really hot intel
about an active campaign, how likely am I to broadcast that widely
within my sharing community, knowing that most recipients lack the
capability to utilize it in a timely fashion? Moreover, once the
threat actors become aware of this hot intel, they’re going to shift
their C2, and suddenly this high-value data is only good for
detection. If I perceive that most participants in my sharing
community are low-maturity, why would I think that they would even be
capable of protecting the confidentiality of my hot data from the
 attackers?

It’s not enough to just share IOCs. Within these trusted sharing
communities, higher-maturity organizations need to help the others
grow up. One thing that would help would be sharing data around the
cost-effectiveness of security controls (both in terms of purchase
price and staffing requirements to manage them). Purchasing
decisions are frequently made on the basis of somebody reading a
random article in CSO magazine or (at best) based on a Gartner report.
But all too often, infosec vendors sell tools that don’t do what they
claim to. For a smaller organization, once that money is spent,
they’re committed, shitty tool or no.

I am bloody sick of hearing the phrase, “You’re doing it wrong.” To
paraphrase the inimitable Chris Nickerson, “The first time you point
your finger at another person and say they’re the problem, that’s the
point when you should go educate them. Every subsequent time you point
your finger at them and call them the problem you’re just highlighting
your continuing failure to educate them.” Let’s cut it out with all
the “doing it wrong” talk and start showing people the right way.

At the FIRST conference in Berlin last year, the Cisco CSIRT team
launched a new O’Reilly book they co-wrote entitled “Crafting the
InfoSec Playbook”. If you haven’t already read it, I highly recommend
you order a copy today. Several years ago I had the privilege
of spending a few days onsite with the Cisco team and witnessed
firsthand how they drive their security operation around hard metrics.
It was truly impressive! They’re constantly measuring the
effectiveness of their tools and, moreover, how those tools are used
(i.e., their courses of action, or COAs).

Earlier I argued that we just need to “hold shit together” a few more
years while our cultural institutions catch up with our tech. An
integral part of that is for more higher-maturity organizations to
collect and share the sort of security metrics exemplified by the
work of the Cisco CSIRT team within their respective sharing
communities. If BigCompany bought SIEMx, wasted 18 months trying to
get it tuned, ripped it out, and replaced it with SIEMy, that war
story might be vitally helpful to smaller firms facing a purchasing
decision. I’m sure there’s plenty of this sort of information-sharing
already going on. We need more of it.

A word about SIEMs. Commercial SIEMs are not cheap. There are halfway
decent open-source alternatives. Surely I don’t have to say it, but
what a SIEM gives you is the ability to correlate whatever network and
endpoint telemetry you have and search it. The Cisco “InfoSec
Playbook” I mentioned a second ago clearly shows how their entire
security operation hinges on their SIEM. They’ve written it in as
vendor-neutral a manner as possible but you can tell from the
screenshots that (at least at the time of writing) they were running
Splunk. I was working for Splunk’s Security Practice back when I
visited the Cisco team. (Btw, please don’t ask me anything about
Splunk’s Splice tool because I had nothing to do with that.)

Back when I first started looking at STIX/CybOX (and OpenIOC,
incidentally) it immediately jumped out at me that here was an
abstraction layer for interchanging SIEM correlation rules in a
vendor-neutral manner. On the STIX/CybOX side, you have this taxonomy
of things to look for and a patterning language for saying “If you see
x and you see y, that’s bad. If you also see z, that’s a false
positive.” Something like that. Well, how is that essentially
different from a SIEM correlation rule? So I did an ETL mapping of the
STIX/CybOX taxonomy with Splunk’s own taxonomy, a bit of recursive
jiggery-pokery to handle nested boolean logic, and suddenly I was able
to programmatically translate (for example) MITRE’s APT1 STIX sample
into a complex Splunk search, the sort of thing you’d never want to
type by hand.
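
To give you a feel for it, here is a from-memory toy sketch, not the
actual mapping I built; the field names below are hypothetical rather
than Splunk’s real taxonomy. The core of the translation is just a
recursive walk over the nested boolean composition:

    # Toy sketch: walk a nested boolean observable composition and emit
    # a Splunk-style search string. Field mappings are hypothetical.
    FIELD_MAP = {
        "DomainName": "query",       # e.g. DNS telemetry
        "Address": "dest_ip",
        "FileHash": "file_hash",
    }

    def to_search(node) -> str:
        """Recursively translate a composition tree into a search clause."""
        if "operator" in node:       # AND / OR composition
            joined = f" {node['operator']} ".join(
                to_search(child) for child in node["operands"])
            return f"({joined})"
        field = FIELD_MAP[node["type"]]          # leaf observable
        return f'{field}="{node["value"]}"'

    sample_indicator = {
        "operator": "OR",
        "operands": [
            {"type": "DomainName", "value": "bad.example.com"},
            {"operator": "AND", "operands": [
                {"type": "Address", "value": "203.0.113.7"},
                {"type": "FileHash", "value": "d41d8cd98f00b204e9800998ecf8427e"},
            ]},
        ],
    }

    print("index=* " + to_search(sample_indicator))
    # index=* (query="bad.example.com" OR (dest_ip="203.0.113.7" AND file_hash="..."))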

A critical part of SIEM vendors’ sales pitch is how many correlation
rules they ship with, out of the box. But what they don’t tell you is
that most of these are pretty useless to your organization without a
good deal of tuning. So then you wind up paying big bucks for a SIEM
and then you pay big bucks for the SIEM vendor’s professional services
people to come spend a few days or weeks tuning the correlation rules
to your particular systems and needs.

One of the really cool things the Cisco CSIRT team does is constantly
tweak the correlation rules driving their operation while
simultaneously collecting performance metrics on those edits. Now
imagine, if you will, a world in which SIEM vendors supported
importing and exporting correlation rules as STIX/CybOX. This would
enable higher-maturity organizations to collaboratively edit and
improve the ability of those correlation rules to find the evil needle
in the haystack. It would also allow lower-maturity organizations to
dramatically improve their ability to detect badness in a timely
fashion. This, in turn, should drive down the average time to breach
detection. As the lower-maturity organizations are able to respond
more quickly to breaches, the average impact should also decrease
(i.e., if attackers are detected earlier, they have less time to pivot
across an organization). This in turn frees up resources, allowing
these lower-maturity organizations to up their game and start doing
more of the proactive blocking and tackling. As the overall security
maturity level increases within a sharing community, this should
improve the level of trust, facilitating more widespread sharing of
higher-value intel.

But enough about SIEM, back to the world of OASIS.

There’s currently a tension within the OASIS CTI between folks who
want to streamline the standards so as to make it easier for
lower-maturity organizations to play the game and drive wider vendor
adoption and higher-maturity organizations / secret squirrel clubs
demanding support for edge cases seen as critical for their more
sophisticated needs. Case in point, we recently had a seemingly
endless discussion about the timestamp format. There was a small
faction demanding nanosecond-level precision, despite the fact that
relatively few system clocks support anywhere near this level of
resolution. In the world of Star Trek the needs of the many outweigh
the needs of the few but in the OASIS CTI world we play a delicate
balancing game.
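
As a small aside on why nanosecond precision is a hard sell in
practice, here is a minimal Python sketch of two everyday ceilings on
timestamp resolution: a float epoch timestamp cannot even represent
nanoseconds, and common datetime types stop at microseconds.

    import math
    import time
    from datetime import datetime, timezone

    # A float "seconds since the epoch" timestamp cannot represent
    # nanoseconds: near 1.5e9 seconds the smallest representable step
    # (the ULP) is on the order of a couple of hundred nanoseconds.
    now = time.time()
    print(f"smallest representable step at t={now:.0f}: {math.ulp(now):.2e} s")

    # And much of the tooling truncates further anyway: Python's datetime,
    # for example, only carries microsecond precision.
    print(datetime.now(timezone.utc).isoformat())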

I’m not big on mission statements, but I’ve long considered the end
goal of CybOX as defining something like a Dublin Core (or Dewey
Decimal System to all you Yanks) for P0wnage. This would be a useful
thing to have but it’s also an ambitious goal. I find that success
often hinges more on the things we choose to say no to than the other
way around. Ivan and I have been advocating a stepwise strategy for
refactoring CybOX:

To achieve a draft spec for CybOX 3.0 by our end-July 2016 target:

  • start with a greenfield
  • refactor the CybOX core base types
  • refactor the CybOX observable types we see in use today
  • time-permitting, refactor additional observable types based on
    community demand and input

CybOX 3.0 (like STIX 2.0 and TAXII 2.0) will be a
non-backwards-compatible major revision. We don’t want to do this
again for the foreseeable future; ergo, the most critical task above is
getting the core base types right.

CybOX currently has a lot of stuff in it but it’s missing a lot of
obvious things, too. Per my earlier comparison to Dublin Core, we’re
trying to move away from a flat list of random objects towards a
logical tree-shaped taxonomy. Following the approval of CybOX 3.0, our
plan is to rapidly iterate through a series of point-releases,
targeting observables aligned with specific use cases, such as mobile,
SDN, virtualized/cloud-based metadata, network and endpoint forensics,
 etc.

But like I mentioned earlier, there’s a delicate balancing act between
addressing the needs of the many versus the needs of the few and hence
we’re currently fighting with some scope creep.

Enough about OASIS CTI stuff, my time is drawing to a close. To sum
 up:

  • To abuse Paul Watzlawick’s famous line, the situation is serious but
    not hopeless.
  • We’ve got some heavy lifting to do within our respective sharing
    communities to help lower-maturity organizations up their game.
  • Perhaps STIX et al will go down in history as Yet Another Threat
    Intel Standard. I don’t believe it will, but it is a
    distinct possibility. The outcome largely hinges on you guys. We
    need your input and help! CTI is just one tool among many but it is
    a valuable tool. Come join us and let’s make that tool as sharp and
    as useful as can be!
  • Special thanks to Bernd Grobauer and Siemens for organizing and
    hosting this event and to FIRST for giving me the opportunity to
    speak here today.
  • In closing, to borrow the words of Dan Geer, “There’s never enough
    time. Thank you for yours.”