Moving Beyond Threatbutt or: Threat Landscape 2039
Amsterdam, Netherlands · November 2016 (approx.)
Conference recap: O’Reilly Radar
Transcript of remarks delivered in 2016. Minor formatting edits for readability.
Transcript
Good afternoon, y’all, and thanks for the opportunity to speak to you today. As the title suggests, this talk has two components.
I’ll get into technical matters related to threat intelligence momentarily, but first, please allow me to frame this discussion in a context you may not have fully considered.
Earlier this year I was at an infosec event where a vendor presented their view of the threat landscape in 2016. It was a perfectly acceptable talk, but, like 95% of the infosec talks I’ve heard, it boiled down to: everything is vulnerable, the world is going to hell in a handbasket, and hence you should buy our black box or retain us to pentest your network.
In the course of their talk, the speaker made a joke about the impending end of the 32-bit Unix epoch in 2038, and it started me imagining what a “2039 threat landscape” talk would look like. And to be honest, I cannot imagine that it would bear any resemblance to the typical infosec talk you hear today.
Fear-driven narratives are compelling and it’s easy to get sucked into them. So, here’s the scary part of my talk.
The Internet is Rickety
The internet is rickety. Imagine for a moment that you’re taking a hike in the forest and come to a stream blocking your path. You step gingerly from one stone to another, carefully testing each stone before shifting your full weight onto it. As a civilization, we’ve managed to place our center of gravity on an unsteady rock called the internet and there’s no way back. (Well, there may be, but it’s somewhat nightmarish to consider.)
The attackers currently have the advantage. We can’t possibly find all the software bugs faster than the attackers can, much less patch them in time. We don’t even know how much of a difference patching makes in the grand scheme of things.
Consider Bruce Schneier’s 2014 essay in The Atlantic, in which he ponders the problem of zero days and examines the question of whether software vulnerabilities are sparse or plentiful. We lack sufficient data to say with certainty, but the data we do have suggests that the number of software bugs we don’t know about is orders of magnitude greater than the number we do know about.
Even supposing we could clone ten thousand Dan Bernsteins and Meredith Pattersons to rewrite our software stacks, the installed base problem would still bite us. Imagine, if you will, that a study was published proving beyond a doubt that asphalt causes cancer. It would still take decades to replace all the world’s roadways. Substitute vulnerable embedded systems for asphalt and p0wnage for cancer and the metaphor holds.
Metcalfe’s Law posits that the value of a network is proportional to the square of the number of connected endpoints. In a world of ubiquitous vulnerable systems, many embedded and/or effectively unpatchable, the security challenge appears to scale in rough parallel.
So here we find ourselves, perched precariously on a shaky foundation, constantly shifting our center of gravity to avoid a perilous fall.
Here’s the good news: the worst case scenario almost never happens!
Technological Change and Societal Adaptation
It’s worth noting that the situation we find ourselves in today is hardly a novel one. Throughout human history, the rate of technological change has outpaced our societies’ ability to adapt.
- Gutenberg invents the printing press → religious upheaval → the Thirty Years’ War.
- Industrial and scientific acceleration → WWI and total industrialized warfare.
- Atomic science → medicine and energy, but also Hiroshima, Nagasaki, and the Cold War.
The point is: there’s a pattern of technologically induced disruption leading into a period of societal adjustment. Following those convulsions, things more or less settled down. People still kill and die for the written word. Nation-states still maintain WMD stocks. But in general, we adapt and life goes on.
Threat Landscape: 2039
So what would a “Threat Landscape: 2039” talk look like? I see two mutually exclusive alternatives.
One: something awful has happened, leading us to unplug from networks and retreat toward something paper-driven.
I’m reminded of a scene from the pilot episode of the Battlestar Galactica reboot. After a devastating surprise attack, a crew member suggests interconnecting systems to speed up calculations. The captain, having survived the previous war, refuses: “Many good men and women lost their lives aboard this ship because someone wanted a faster computer to make life easier… I will not allow a networked computerized system to be placed on this ship while I’m in command.”
This is the dark view. (Some researchers refer to it rather cutely as “cybermalaise.”)
Two: we undergo a phase transition in security maturity — technical controls improve, hygiene improves, but most importantly societies adapt economically, culturally, and at the policy level to catch up with technology.
As geeks, we obsess about crypto, threat-sharing, blockchain-ng, etc. But in fact it is advances in meatspace that will make the difference. Nation-states will increasingly recognize the mutual benefit of peaceful coexistence, collaboration, and prosecution of bad actors. Treaty arrangements will emerge, akin to nonproliferation agreements.
It may take unfortunate events to catalyze the shift, but if we can stay calm and do our best to hold things together, we will live to see such a sea change.
I am making a leap of optimism. It is not my intention to handwave the challenges; intellectual honesty is a core value of mine. But as a species we are resilient. We have overcome daunting challenges before. We can and we will prevail because we must. We owe that much to future generations.
Incentives, Trust, and Information-Sharing
President-elect Trump has spoken openly of withdrawing the US from long-standing international partnerships. Cross-border cooperation may be receding. This is where industry can step in.
One catalyst driving ISACs and ISAOs has been market competitors recognizing that collaboration for mutual defense outweighs any tactical advantage gained when a competitor is breached.
The NIS Directive (EU) and CISA (US) provide liability protection for information-sharing. Necessary, but insufficient. The critical factor remains economic incentives — participation that measurably lowers breach risk and supply-chain risk.
Language and Framing
I’ve been tossing “cyber” around quite liberally.
Language matters. Narrative matters. Framing matters. People exist in and live by stories. If senior decision-makers are more comfortable with “cyber-whatever,” don’t be shy about embracing their language — then using it to steer them toward clearer technical understanding. In that spirit: cyber all the things!
Threat Intelligence Isn’t a Silver Bullet
Threatbutt is a parody of cyber-intelligence vendor hype. One criticism of threat intelligence is that it’s Antivirus-NG. That’s fair. The industry loves panaceas. Antivirus wasn’t one. SIEM wasn’t one. CTI isn’t one. But they’re all tools. Use them well, and keep sharpening them.
STIX 2.0 and the Refactor
Now to CTI in two contexts: (1) the OASIS standards, and (2) CTI as a tool fit for particular problems.
Last June, DHS relinquished control of STIX, CybOX, and TAXII to OASIS. The standards are now developed under an open and democratic process.
STIX 1.x solved real-world problems, but it was a mess. Parsing arbitrary schema-valid STIX felt like trying to solve the halting problem: too much optionality, extensibility, and multiple ways to express the same thing. Interoperability suffered.
After the move to OASIS, a core group did a greenfield refactor. We simplified radically. We moved from XML to JSON. We targeted an MVP: use cases, measurement of what’s shared in the wild, reduced idiomatic variance, and a graph-based model with first-class relationships for analyst pivoting.
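To make the refactor concrete, here is a sketch of what a minimal STIX 2.0 bundle looks like in JSON: an indicator, a malware object, and a first-class relationship object linking them for analyst pivoting. The specific names, timestamps, and domain are illustrative, not drawn from any real feed.

```python
import json
import uuid


def sid(obj_type):
    """Build a STIX 2.0-style identifier: '<type>--<UUIDv4>'."""
    return "{}--{}".format(obj_type, uuid.uuid4())


indicator_id = sid("indicator")
malware_id = sid("malware")

# A minimal STIX 2.0 bundle. Relationships are top-level objects,
# not properties buried inside other objects -- that is what makes
# the model graph-based.
bundle = {
    "type": "bundle",
    "id": sid("bundle"),
    "spec_version": "2.0",
    "objects": [
        {
            "type": "indicator",
            "id": indicator_id,
            "created": "2016-11-01T00:00:00.000Z",
            "modified": "2016-11-01T00:00:00.000Z",
            "labels": ["malicious-activity"],
            "pattern": "[domain-name:value = 'evil.example.com']",
            "valid_from": "2016-11-01T00:00:00Z",
        },
        {
            "type": "malware",
            "id": malware_id,
            "created": "2016-11-01T00:00:00.000Z",
            "modified": "2016-11-01T00:00:00.000Z",
            "labels": ["remote-access-trojan"],
            "name": "ExampleRAT",
        },
        {
            "type": "relationship",
            "id": sid("relationship"),
            "created": "2016-11-01T00:00:00.000Z",
            "modified": "2016-11-01T00:00:00.000Z",
            "relationship_type": "indicates",
            "source_ref": indicator_id,
            "target_ref": malware_id,
        },
    ],
}

print(json.dumps(bundle, indent=2))
```

Compare that with the equivalent STIX 1.x XML, where the same content would span nested schemas with several idiomatically different encodings; the JSON above is the whole document.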
STIX 2.0-rc3 was released on Tuesday and will likely become the final STIX 2.0 release. Vendors are already writing code. We’re already starting on STIX 2.1. And this all happened in under a year — unusually nimble for a standards body.
Remember the world before UTF-8? That’s how I see STIX: not the only standard forever, but likely the lingua franca near-term. Please don’t go build your own CTI standard until you’ve taken a hard look at STIX 2.0.
Observables and What People Actually Share
CybOX became STIX Cyber Observables. The old model was sprawling. We wanted to adhere to the MVP approach and refactor around real use.
But most sharing happens in closed communities. So we created cti-stats so communities could measure their own repositories and share aggregate results.
Patterns emerged:
- Indicators dominated (~96.47% of objects seen in the wild).
- Only a small subset of observable types were widely used; most usage concentrated in Address, DomainName, File, URI.
- Higher-level constructs (incidents, attribution) rarely appeared in shared datasets — likely shared bilaterally, not broadcast.
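The kind of aggregate measurement cti-stats performs can be sketched as a simple type breakdown over a repository. The repository contents below are hypothetical; a real run counts a community’s own data and shares only the aggregate percentages.

```python
from collections import Counter


def type_breakdown(objects):
    """Count object types in a repository and return the share of each
    as a percentage, similar in spirit to a cti-stats report."""
    counts = Counter(obj["type"] for obj in objects)
    total = sum(counts.values())
    return {t: round(100.0 * n / total, 2) for t, n in counts.most_common()}


# Hypothetical repository: indicators dominate, as observed in the wild.
repo = (
    [{"type": "indicator"}] * 96
    + [{"type": "observable"}] * 3
    + [{"type": "incident"}] * 1
)
print(type_breakdown(repo))
```

Because only the type counts leave the community, members can contribute measurements without exposing the underlying intelligence.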
Security Maturity and the Trust Problem
To leverage CTI effectively, you need fundamentals: asset visibility, baseline, telemetry, critical-asset focus, and SIEM-like correlation/search. That’s easy to say and hard to do.
I once asked a SOC team at a major bank what they had for endpoint telemetry. “Nothing. We can’t afford it.” A bank. “A license to print money.” It’s not only the little guys.
DBIR analysis suggested many indicators are only useful for a day or less for proactive blocking. If detection cycles are long, most IOCs become post-compromise detection artifacts.
If you’re a high-maturity org sitting on hot intel, why broadcast it widely if the recipients can’t use it in time — or can’t protect it from leakage?
So: it’s not enough to share IOCs. Higher-maturity organizations need to help others grow up — including sharing cost-effectiveness data of controls and real war stories.
I am bloody sick of “You’re doing it wrong.” Educate. Then educate again. That, too, is information-sharing.
SIEM Correlation Rules and the Patterning Language
SIEMs are expensive but central: correlate telemetry, search it, iterate on rules.
Early on, STIX looked like an abstraction layer for vendor-neutral correlation rules: “If you see x and y, it’s bad; if you also see z, it’s a false positive.” That’s a correlation rule.
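The shape of that rule can be sketched in plain Python. The event fields below are hypothetical, not taken from any SIEM schema; the point is only that “x and y together are bad, unless z” is a small, portable predicate over a stream of events.

```python
# A toy, vendor-neutral correlation rule: if you see x and y, it's bad;
# if you also see z, treat it as a false positive. Field names are
# invented for illustration.

def correlate(events):
    """Return 'malicious' when both suspicious conditions fire and the
    false-positive condition does not; otherwise return 'benign'."""
    saw_x = any(e.get("process") == "powershell.exe" for e in events)
    saw_y = any(e.get("dst_domain") == "evil.example.com" for e in events)
    saw_z = any(e.get("signed_by") == "TrustedVendor" for e in events)
    if saw_x and saw_y and not saw_z:
        return "malicious"
    return "benign"


alerts = [
    {"process": "powershell.exe"},
    {"dst_domain": "evil.example.com"},
]
print(correlate(alerts))  # -> malicious
```

Every SIEM can express this today, but each in its own proprietary rule language; a standard serialization is what makes the rule shareable across products.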
STIX 2.0 introduced the STIX Patterning Language. A working group explored existing standards (Snort, Yara) and ultimately built something new. The MVP is powerful, and we’re excited about enabling distributed, collaborative development of correlation rules in a vendor-neutral form.
If SIEM vendors supported import/export of rules via STIX, higher-maturity orgs could collaboratively sharpen detection. Lower-maturity orgs could accelerate capability. Detection times drop, impact drops, resources free up, maturity rises, trust improves, and higher-value sharing becomes more feasible. A rising tide of structured threat data lifts all boats.
What’s Next, and How You Can Help
STIX 2.1 work is underway (i18n, malware characterization, OpenC2 alignment, expanded observables, more patterning capabilities).
Two asks:
- If you’re making a purchasing decision, consider adding STIX 2.0 support to your RFP. If you’re building, add STIX 2.0 support to your tools.
- If you have SMEs (mobile malware, SDN, cloud metadata, forensics, ICS/IoT), please reach out. We can’t be experts at everything — but we can herd SMEs into a coherent standard.