The Security Table

Adam Shostack -- Thinking like an Attacker and Risk Management in the Capabilities

February 06, 2024 Chris Romeo Season 2 Episode 4

Threat modeling expert Adam Shostack joins Chris, Izar, and Matt in this episode of the Security Table. They look into threat actors and their place in threat modeling. There's a lively discussion on risk management, drawing the line between 'thinking like an attacker' and using current attacker data to inform a threat model. Adam also suggests that we must evaluate if risk assessments serve us well and how they impact organizations on various levels. The recurring theme is the constant need for evolution and adaptation in threat modeling and risk management processes. You can tune in to get a rich perspective on these key cybersecurity topics.

Links
Threat Modeling Manifesto: https://www.threatmodelingmanifesto.org/
Threat Modeling Capabilities: https://www.threatmodelingmanifesto.org/capabilities/

Threats: What Every Engineer Should Learn From Star Wars by Adam Shostack - https://threatsbook.com/

FOLLOW OUR SOCIAL MEDIA:

➜Twitter: @SecTablePodcast
➜LinkedIn: The Security Table Podcast
➜YouTube: The Security Table YouTube Channel

Thanks for Listening!

Chris Romeo:

This is just so the folks at home know, this is literally take 117. So take 117. Here we go.

Matt Coles:

Snap.

Chris Romeo:

That's right. Where's the little clappy thing? Welcome, folks, to another episode of the Security Table. I am joined by Izar and Matt, and we have a special guest, Adam Shostack, who, if you've ever done anything with threat modeling, you've definitely heard of: you've read something he wrote, or listened to him talk, or seen him at a conference, or some combination of those many different things. And so, just to set the stage, Izar wrote an email. I know it seems strange that someone so in touch with technology would write an actual email versus sending us a Snapchat or something, I don't

Izar Tarandach:

I try once a day.

Chris Romeo:

But

Matt Coles:

You succeed sometimes.

Izar Tarandach:

No, no, wait, wait, wait. The email, the email actually comes from Slack, right?

Chris Romeo:

oh, oh, so yeah.

Izar Tarandach:

We have the Slack channel for the Threat Modeling Manifesto group, where we basically, uh, what's it, we try to achieve rough consensus by questioning the things that we did the week before. And the thing that we did the week before was we released the Threat Modeling Capabilities document. And Adam, as is his wont, in his blog gave us the honor of a blurb, and he also pointed out a couple of things in there that, Adam, you take it. I'm not going to speak for you.

Adam Shostack:

So, so, you know, in the spirit of rough consensus that we operate in, there were things that I didn't love about the capabilities doc, but I wanted to get to consensus rather than having a big, long argument about it. And so I wrote a little bit about the mention of attackers and the mention of risk management to add, hey, here's my perspective on these things. Should I, should I keep going?

Izar Tarandach:

And that perspective is?

Chris Romeo:

Much.

Adam Shostack:

That perspective

Izar Tarandach:

leave us in suspense.

Adam Shostack:

Thinking, thinking about attackers leads to problems. And so I was a little bit, yeah, y'all were wrong. So, so very wrong.

Matt Coles:

So, needless to say, this, uh, this simple statement from Adam, in the way that he generally delivers these statements, blew up on the Slack channel. At last check there were forty-something replies and cross-replies to it. So, uh, it was worthy of debate, although we might wonder why it wasn't debated before we released, but that's okay. It's okay.

Izar Tarandach:

It might have been.

Matt Coles:

It might have been, I'm sure it was, uh, and somewhere it got lost in the shuffle, so,

Chris Romeo:

To be sure, let me just be sure, so we're talking about the continuing education capability, right? This was the part of the capabilities where it says people are encouraged and supported to become aware of new classes of threats, manifestations of threats, threat actors, new security features of their environment, and how to best apply them. So,

Matt Coles:

And threat actors was specifically the point of contention.

Chris Romeo:

Yeah, so, Adam, just to confirm: when the capability says threat actor, that's what you're attributing to the attacker side.

Adam Shostack:

Yes,

Izar Tarandach:

I think that part of the, uh, not controversy, but part of the discussion that we had in there was what exactly did we mean by, uh, by using the word attacker? Because I think

Matt Coles:

We didn't use, we didn't use attacker.

Izar Tarandach:

The word, right? But some of us...

Matt Coles:

We use the word attacker.

Izar Tarandach:

I think that some of us may have fallen into the trap of think like an attacker, and that was what could be understood from what we said, and then somehow we managed to pull this towards threat intel and its value, or lack thereof. And, uh, then we spoke about risk management, right? But, um, I think that, Adam, you have a very strong position on the think like an attacker thing. And that's always worth hearing again.

Adam Shostack:

Sure. So the essential thing that I have to say about Think Like an Attacker is that nobody knows what it means or how to teach it. It doesn't give you any structure as you go ahead and do the work. So compare: if I ask you, Izar, to think like an attacker, versus to give me three spoofing threats against this system, I can teach you how to give me three spoofing threats. I can show you what reasonable answers to that question look like. Think like an attacker is so open-ended that if we want to have people across an organization threat model every story, we've got to give them something beyond think like an attacker as they do it.

Matt Coles:

Yeah, so can I, um, maybe loop in here something that Izar said ages ago, that we should be teaching principles, right? And I think what you're highlighting, just for our viewers and listeners: by thinking like an attacker, you're trying to get into the mindset of the people who are looking to exploit a system, as opposed to understanding the technologies involved and understanding what principles are being violated, right? And one you can predict, and the other one you can't. Or rather, there are so many variations of the attacker mentality that you can easily lose track of things. That's how I interpret your statements around why thinking like an attacker is bad.

Adam Shostack:

And so I, I

Izar Tarandach:

I usually bring that back to the, the,

Adam Shostack:

go ahead Izar.

Izar Tarandach:

No, no, Adam, please go.

Adam Shostack:

We have lag here, so these, there, there, there's pauses as we tell each other to go. So I'll go.

Izar Tarandach:

It's us thinking, it's us thinking, we are just, we are just pondering.

Matt Coles:

Use the force, Izar.

Izar Tarandach:

These are very deep thoughts, we need time between them.

Adam Shostack:

So, I started with think like an attacker is bad, and then started to expand that to: why are we thinking so much about attackers? What are the pros and cons of thinking about attackers? And one of the things is, well, it's hard to think like an attacker in general. It's also apparently really hard to think like a specific attacker. And, you know, my former colleagues at Microsoft are having a bad week, so no shade on them, but Microsoft has a whole Threat Intelligence Center, which includes a bunch of really smart people who spend a lot of time thinking about how specific attackers work. And they had to file a report with the SEC this week, or last week, about what they missed. And so, again, I really don't mean to cast shade on people who I enjoy working with, who are smart and dedicated folks, but if they can't do this and drive their defenses where they need to be, what hope does a smaller organization that doesn't run its own threat intelligence center have? Or, what happens if you miss an attacker who should be on your list? And so, going back to what you said, Matt, if we think about the technology and the principles we want it to have, or the properties we want it to have, that may be a better route to what we want than thinking like attackers or thinking about attackers.

Matt Coles:

And you're spot on, I think, about the threat intelligence part of that discussion. You're only as good as the intelligence you have, and if there's a new attacker group, you're not going to have a priori knowledge of their capabilities. I'm thinking the MITRE ATT&CK framework is only what the organizations that feed information into it provide; that, plus any private threat feeds and whatnot, is the attacker body of knowledge you would have. And so how do you predict what's possible, what somebody's going to do, what somebody's going to go after, and be good enough, right? To question four is, are

Adam Shostack:

Izar,

Izar Tarandach:

Yeah, no, I'm just waiting to see, I'm just waiting to see if Adam's going to,

Matt Coles:

And see the two

Izar Tarandach:

the, the thing for me is that, uh, at the end of the day, what we're trying to do here,

Matt Coles:

screen,

Izar Tarandach:

it back to threat modeling.

Matt Coles:

will follow

Izar Tarandach:

threat elicitation, right?

Matt Coles:

so,

Izar Tarandach:

So we do the whole system modeling and now we think, okay, what could go wrong? And we are feeding from all kinds of things. And in the past, I have been very vocal about, for example, threat libraries, which I compare to driving looking at the rear-view mirror, because basically you are moving forward but you're looking at the things that already happened to you, right? And think like an attacker, for me, has always been: put me in the kitchen, tell me to think like a chef, but order the pizza, because nothing is going to come out of it. I don't know what a chef does. I don't know the techniques, I don't know how to put the ingredients together, so nothing is going to come out of it. But then this threat intel thing came into view, and we had this very interesting thing happen on X, I think it was? I don't know, it might have been Twitter at the time. And somebody said there's no need for threat modeling anymore, because threat intel is going to do it all. Right? Threat intel is going to kill threat modeling. After I came back from the floor, where for some reason I was rolling around laughing, might be connected, might not be connected, I don't know, I started to think about that a bit more, and the idea that threat intel does have some kind of value for threat modeling came to me in terms of: you know what, I can think of principles. I can think of libraries. But there is new stuff coming in, and there is chaining of old stuff coming in, and there is value in seeing what's happening in the world and how that can inform my threat elicitation. And more than that, I think that, and here's where the risk conversation comes in. 
When we start talking about whether I can see, in real time, the next wave of attacks coming in, I can use that to calibrate who might want to attack me as a specific customer and a specific organization, and say, okay, these are the techniques that are being pointed my way. So I can build stronger drawbridges and deeper moats between me and those techniques. But again, I see that all as a continuum and not as one specific thing that's going to take place. So the think like an attacker, to me, turned into: take a look at what the attackers are doing and use that to inform your practice. And then we started talking about risk.

Chris Romeo:

Don't go to risk yet. We're not, we're not done with

Matt Coles:

Yeah, don't go,

Izar Tarandach:

We're not doing risk yet.

Chris Romeo:

Let Adam, yeah, let Adam, let Adam respond to the whole threat intel thing. I'm curious what his thoughts are.

Adam Shostack:

So, you know, it's challenging, because on the one hand, I totally agree: we can use the threat intel to check our work, we can use the threat intel to say, let's emphasize this defense. And so the question that I have is not the sort of theoretical could we. The question I have is, what works better? Do we use threat intelligence when we're threat modeling operations, when we're updating operational defenses, and more general principles as we design? Um, one of the things we talked about in the manifesto, and I forget the exact words we ended up with, is being reflective enough while still getting stuff done. And I think this is an example of that: what is the role of threat actors? And this is why I didn't, you know, try to stop the capabilities from coming out, because we can have these useful conversations about it. Um, but for me, my overall thinking is: we've had, let's see, CERT was launched in 1988 in the aftermath of the Morris worm. We've got CERTs, we've got ISACs, we've got ISAOs, we've got all of these information sharing initiatives to try to get threat intelligence to the people who need it.

Matt Coles:

job here. So, anyway, well, alright,

Adam Shostack:

And maybe it is bending the curve, so that things are not going as badly as they would have if we didn't have the threat intel. But I don't think it's changing the direction of the curve and saying we're actually getting better at defense, and I definitely think it's not doing that in proportion to the energy that it gets.

Matt Coles:

Yeah, if I may, I think that, and I'm not a SOC analyst, never have been, um, but I have a sense that the reason for that is that threat intelligence sharing, especially through ISACs and other organizations, really helps with detection more so than defense, right? It's really how to recognize attacks. Hey, so-and-so organization indicates that they were attacked in this way, and oh, if you're in a similar business, running similar technologies, be on the lookout for this. Not necessarily, here's what firewall rules to put in place, or how to re-architect an application on the fly to avoid the vulnerabilities they're leveraging. Right, by and large, those are long-term things, as opposed to the threat intelligence sharing role, which is: hey, this came for us, look out for it coming for you, too.

Chris Romeo:

I want to go a little more fundamental here. I want to go back to the capability, because when I look at the capability, I don't believe the capability is prescribing think like an attacker. I agree, and Adam's the one who introduced me to how we need people to think like security people and not like attackers, and I've taught that way ever since. So I agree with Adam on that core thing. But when I look at the capability, what I see is we're saying that we want a solid threat modeling program to inform people who are doing threat modeling and make them aware of new classes of threat actors. And so when I think about this, one of the things that I put in the fundamental security training that, um, I used to be a part of the company that did, was always introducing people to at least the common categories of threat actors. So, nation states, cyber criminals, um, you know, anonymous, whoa, whoa, whoa, Matt's dog is not agreeing with that. Apparently I said the wrong category again. Um,

Adam Shostack:

He's like, dogs are threat actors too.

Chris Romeo:

categories.

Matt Coles:

Dogs are threat actors, too.

Chris Romeo:

Add dogs to that list. So, dogs, nation states, cyber criminals, you know, people that want to burn the world down, um, and so. But I always taught people at least that here, here, just to make them aware, because one of the worst things I found is people are sitting there going, nobody would ever want to attack us. We had that phase, right? And, and so making them aware of what those are. And so that's what I think the capability is saying is make people aware as new classes of attackers come out, don't think like an attacker, but I want to get Adam's take on that.

Adam Shostack:

Yeah, and let's quote here, right? It says people are encouraged and supported in becoming aware of new classes of threats, manifestations of threats, threat actors, new security features of their environment. So I think you're absolutely right in that the general statement is not about thinking like an attacker. And my question is, do we get all excited that dogs are not on our list of threat actors, because they can bite, and they can poop on the rug, and there's relevant threats there?

Matt Coles:

to be on your keyboard.

Izar Tarandach:

so I have an answer for that one.

Adam Shostack:

okay.

Izar Tarandach:

So a lot of times, and I'm sure this has happened to you guys too, when you're threat modeling and you find threats, when you think that you found a threat, somebody is going to look at you and say, but who would possibly do that, right? And I think that it's very easy today to say, you know what, perhaps today nobody would, but tomorrow you never know. So perhaps having that awareness that the threat actors and the classes of threats may change, at the end of the day, informs the fact that things we chose to push down in the priority list perhaps come up, because all of a sudden it's easier to exploit, I don't know, L3 cache bugs, right? Which yesterday we said, hey, this is really difficult to do, we don't care.

Chris Romeo:

I mean, even better example, think about what GPUs did to password cracking. Before GPUs, we had to run password crackers for decades to try to get something; all of a sudden you have

Izar Tarandach:

All kinds of gadgets.

Adam Shostack:

So,

Matt Coles:

little time.

Adam Shostack:

So, I want to go back to what Izar said about no one would ever do that. Because I think that that thinking is common and it's dangerous. Because if you don't happen to know about something that an attacker is doing, because it was sent under TLP:RED and you can't talk about it with other people without permission, um, it distorts things to be so focused on what people are doing today versus what they could do, what defenses we could put in place. And there's a lot of nuance in this one. There's a lot of subtlety. And when I think about subtlety and I think about scaling, I think it's hard to... subtlety and nuance are things that experts bring to the table. And if we're going to scale to the size of the companies that we've all worked for in the past, you know, if I were to throw a dart and hit a random Cisco employee or a random Microsoft employee and ask them questions like this about how to use attackers in threat modeling, I don't expect I'm going to get a nuanced answer, and that's no disrespect, right? They're smart in whatever it is they're doing for their employer, but this is nuance. This is the fun stuff that we as experts get to talk about. I'm curious how you all think about that in terms of what you do at a program level.

Matt Coles:

So, I'd like to respond to that with: that reinforces the need for education, I think, which is why I think its placement in the capabilities is useful, at least so it's accounted for. So I'm not going to answer your question directly just yet. Um, but I do want to highlight something else that we talked about on the Slack thread pertinent to this. The Threat Modeling Manifesto and the capabilities cover both privacy and security. And by and large, we talk about attackers in terms of security, because those are the defenses, you know, people coming from the internet and attacking systems. And pertinent to what you just described, pick a random employee from a company and ask them where attackers, or actors, fit in their threat model. They're probably not thinking about the inadvertent actor that information bleeds to, which then introduces privacy risk, as an example, or privacy threats. And it's not strictly that, but certainly there is a need for understanding that actors exist in the model, that there are a lot of actors in the model that we most of the time ignore. Right? Because we don't necessarily care about the person who maintains the building, or the person who maintains the physical network infrastructure, or somebody who manages billing for the company where the colo is, right? And yet, those are places where information can bleed to introduce privacy concerns. 
So having a place where threat actors, threat actors, not attackers, can exist, and providing that education so that you can get better quality models and analysis, is I think what we tried to go for. That's at least what I thought we were trying to go for, and hopefully, I think that's where we will succeed if we focus on that as opposed to,

Izar Tarandach:

That was the rough consensus.

Matt Coles:

Sorry, I, I needed to get that point in and I wasn't sure how. You opened the door so, so there it is. Yep,

Adam Shostack:

I want to pick up on something you said, which is actors exist and we normally ignore them in the model. 'Cause this is one of those things that when I train people, they're like, but I could put this in the model, and I could put this in the model, and I could put this in the model. Where do I stop? And again, one of the things about experience and expertise is that the four of us have ideas about, oh, I should pull in this thing that we normally leave out, because models leave out details. I should pull this one in now. And this is really a matter of training. It's a matter of experience. It's a matter of maturity in your practice. That is so hard to develop. So I love that you're bringing in this point in light of the capabilities, because the capabilities are explicitly not a maturity model, but we start to see how capabilities interrelate with maturity.

Chris Romeo:

Yeah, and that was something that, uh, Matt and Jonathan Marcil were the two who were keeping us honest when it came to, I don't know how many times Matt, in the dialogue about the capabilities, was like, that's starting to sound like a maturity model. That sounds like maturity. Like, we should have had it on record, so you could just push a button. Beep! That's maturity. But it was a good point, because, to Adam's point, it was so easy to drift from capability into, what if we did it this way, but then they could do it this other way that would be better? And we just naturally went down the maturity path, and folks like Matt and Jonathan kept pulling us back, saying, no, come on, circle back around. It's zero or

Matt Coles:

Got

Chris Romeo:

Either

Matt Coles:

into it,

Chris Romeo:

it.

Matt Coles:

Even Brook got into it, which was great, uh, so,

Chris Romeo:

So let's talk risk management then. Let's go there. Izar set it up. You were, you were dying to get to this 15 minutes ago and I turned the corner back around and did a U turn. So go ahead and take us back onto the street.

Izar Tarandach:

minutes ago, I don't remember anymore. But

Matt Coles:

risk.

Izar Tarandach:

It's a risk, it's a risk, but I think that where we were going was that there was another mention in there of another thing that we tried to keep away from the capabilities, and actually, I think, tried to keep away from the manifesto itself: the idea of risk. But at some point, we just had to go there. I think that this whole threat actor and attacker thing lives in the view of the big question, who would ever do this? Which is how we translate this probability thing to developers. And all of a sudden we can say, oh, look at these guys here, they are doing it at that other company. The probability just went up, and the impact is going to be different for every company, of course, but it's already a recognition of risk, even if we are not using the term, even if we are not putting it all there. My question is, as we all, I think, converged into considering risk management a very separate part from threat modeling, which once was not true, and actually in some methodologies is still very much not true, those two things are very linked there. Where do we land now with risk management and threat modeling?

Adam Shostack:

I'm an extremist again, again

Izar Tarandach:

I'm shocked!

Adam Shostack:

My extremist view is that risk quantification is the path to the dark side. When we start putting numbers on risk, and by risk specifically, what I mean here is likelihood and impact as the quantified factors, when we do that, we create things that are listed as high risk, right? Either high impact, high probability, or some combination that leads you to whatever your bar for high is. And it ignores the cost of fixing. It ignores whether the thing being changed is in a changing state, where we're making updates to it, or a fixed state, where we're not. And so what we end up with is the security people, using a risk number, getting all worked up about something that the organization isn't going to touch. And that leads to anger, and anger leads to hatred, and hatred leads to the dark side.
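
Adam's objection can be sketched in code. The example below is purely hypothetical: the finding names, scales, and numbers are invented, and `naive_risk` stands in for any likelihood-times-impact score. The structural point is that cost-to-fix never enters the ranking, so a frozen component the organization will never touch can top the list.

```python
# Hypothetical illustration of the critique: a naive
# "risk = likelihood x impact" score ranks findings with no regard
# for the cost of fixing them. All names and numbers are invented.

from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    likelihood: float  # 0.0-1.0, an estimate someone put a number on
    impact: float      # 0-10, likewise
    fix_cost: float    # engineering effort to fix, arbitrary units

    @property
    def naive_risk(self) -> float:
        # Note: fix_cost never appears in the score.
        return self.likelihood * self.impact

findings = [
    # A frozen legacy component the organization will not touch...
    Finding("legacy-protocol downgrade", likelihood=0.8, impact=9.0, fix_cost=500.0),
    # ...and a cheap fix on a component under active development.
    Finding("missing rate limit", likelihood=0.6, impact=5.0, fix_cost=2.0),
]

# Ranked purely by the naive score, the untouchable component tops the
# list, which is exactly where the anger (and the dark side) begins.
ranked = sorted(findings, key=lambda f: f.naive_risk, reverse=True)
print([f.name for f in ranked])
```

Run as-is, the untouchable legacy finding ranks first even though the organization will only ever act on the second one.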

Matt Coles:

So, Adam, I just, I want to take a quick, uh, for, for our, our listeners who may not be familiar with these, some of these terms, uh, you said

Adam Shostack:

I'm quoting Yoda.

Matt Coles:

Oh, I know you are. Uh, likelihood and impact as a measure of risk, and usually that's done in terms of dollar amounts and loss event probabilities and whatnot. That is different from severity, which is ease of exploitation and impact, but at a different level. Correct? I just want to make sure we're in agreement on that, and that if folks have questions, those are two different terms. We talk about severity a lot in threat modeling; what you're suggesting is that risk, being distinct, is something to be wary of and potentially avoided in that conversation.

Izar Tarandach:

And spoiler alert, both are different from priority, which is at the end of the day what we wind up telling people to actually pay attention to.

Adam Shostack:

Yes.

Izar Tarandach:

So where do we land now? What do we give people? They

Matt Coles:

Well, let, let, let, let, uh, let Adam respond. Yeah.

Adam Shostack:

I agree with what you're saying, right? Severity is not risk, and priority is not severity. But when we make statements like Teams use threat modeling to understand and appropriately quantify risk, period. I,

Izar Tarandach:

yeah.

Adam Shostack:

you know, again, rough consensus versus Adam's extremist views. Um,

Matt Coles:

Is it appropriate that threat modeling should inform risk? The very second sentence there is the organization uses threat modeling to inform and adapt its risk profile.

Adam Shostack:

Yeah, that was sort of Jonathan's excellent editing to get my perspective in here. Um, and, you know, I think about the classic risk management options of mitigate, eliminate, accept, and transfer. And I like to start with mitigate. Because mitigate, what are we going to do about it, precedes eliminate, accept, and transfer, which are things we do when mitigation is hard.

Matt Coles:

Or

Izar Tarandach:

always thought that

Matt Coles:

Sorry, Izar,

Izar Tarandach:

we are missing a sweep under the rug on that scale. But, uh,

Matt Coles:

Isn't that accept? I mean, is that

Izar Tarandach:

No, no, no, no. Sweep under the rug is definitely not accept. Sweep under the rug is, let's not talk about it anymore, okay?

Chris Romeo:

for all the, uh, for all the corporate people here that have to file, you know, reports and things like that. Please just ignore what Izar said for the last 60 seconds. For your good and for ours, this is a public service announcement from The Security Table.

Izar Tarandach:

it under the rug! But the, the, the thing

Matt Coles:

Is that, see no evil?

Izar Tarandach:

Actually, it's more like see no evil. Uh, one of the things that's important to say in terms of the capabilities document is that one of the big, big things leading us throughout the work was that we were trying to be concise. And I think that when that editing happened and risk appeared that way, we opted to use language that most of the interested people would be able to read, recognize, and understand, for varying values of understanding. I have to disable this thing somehow. Damn it.

Adam Shostack:

I think you need to leave it on.

Chris Romeo:

unfunny for us.

Izar Tarandach:

I'm getting a Linux laptop.

Matt Coles:

Where, where, where's the, where's the, where's the, where's the, can you go do this?

Chris Romeo:

Yeah, bring the fireworks. Come on.

Izar Tarandach:

That's the laser. That's the laser show. That's the dark side. That's me doing Palpatine. The Emperor.

Adam Shostack:

I think if you're going to do that, Izar, you need to do it like this.

Izar Tarandach:

yeah, but nothing happens. I tried, nothing happens.

Adam Shostack:

long and prosper, dude.

Izar Tarandach:

My computer does feel a bit more blessed right now, but, uh,

Chris Romeo:

ha, ha,

Matt Coles:

Sounds like a

Chris Romeo:

model and prosper. All right, well, where do we land here, Adam, on this whole risk management thing? Like, what's, like, give us a, let's, let's kind of wrap this to a, towards a conclusion from your side.

Adam Shostack:

I don't know. Um, why do I have to conclude? Um.

Izar Tarandach:

It's a risk you're running.

Matt Coles:

It is acceptable to say, don't do this capability.

Izar Tarandach:

Rug, rug, rug.

Adam Shostack:

So, I think, I'm not quite at the thing I'm going to say next, but when I started talking about how think like an attacker was a bad thing to say, um, I was almost literally called a heretic. I am running in the direction of: a lot of the ways we talk about risk management have been taken as axiomatic rather than as testable statements. And I think the thing we should start to do is evaluate the jump to risk management tools. Who does it serve, and how does it serve them? Who does it hurt, and how does it hurt them? Right? Because there's definitely people who are like, ooh, I get to do risk math now, that's fun. I don't mean to be unfair to them. Um, thank you. Thank you, Matt. I appreciate that.

Matt Coles:

Unintended.

Adam Shostack:

Um, but we should

Izar Tarandach:

are my coins? Where are my coins?

Adam Shostack:

We should, we should figure out if it. If it helps us be more secure, we should figure out if the risk assessments we're making are helping the organization make decisions faster, better. Uh, and yeah, I think we need to, I think we need to be rigorous in asking if this is a good thing or a bad thing, rather than just, Adam has one opinion, Matt has another opinion.

Chris Romeo:

And when we think about risk management in reality, it's very different than what we think of as threat modeling, for example, in big companies. We've talked about this on the Security Table before. Risk management is a whole department that's like, what if a hurricane hits the path the trucks drive our supply chain items through? Like, they're so much bigger than thinking about what are the bad things that could happen to a given piece of software, a user story, for example. And so I've, I don't know, I'm starting to, my, my, my, I've definitely become more enlightened on that first statement under risk management, and I don't like that as much. As I think, before I was kind of like, eh, it's fine. Um, I don't like it as much now. Um, just as we've kind of dug into it more. Um, but it will give us something to discuss at the, you know, capabilities family reunion next year. All right, Matt, Izar, any final thoughts from you before we wrap this up?

Izar Tarandach:

I don't know, I think that's the place where I landed with this whole thing, is that as a threat model practitioner, when I think about it, I want to be able to participate in a threat modeling exercise and take in input from the real world, things that I may not be aware of, that I may not know are happening or new or stuff like that, but up to the point of how that influences the things that I'm building. Less, as Adam said in the thread, I don't want to know about APT 694 and where, which base they sit in, wherever country they sit. I want to know what they are doing that I haven't seen before and how that influences my internal, uh, sorting algorithm of what's important or not. At the same time, from the risk side of things, as a practitioner, I want to be useful, and if I have risk management people come to me and say you are not giving me enough information so that I can calculate a risk number, that's one question that I have to deal with somehow, but at the immediate level, what I should be able to do is to give the people who are writing code, who are putting systems into place, deploying and testing them, a list of priorities of the things that I think that in terms of security, these are the most important ones to do before the 10, 000 other ones. and now I need to develop a way to back these perhaps with hard data or with numbers or whatever so that I can explain to the right public why am I telling them that thing. Does that make sense out of my head?

Matt Coles:

Sufficient. Sure. If it makes sense to you.

Izar Tarandach:

I don't know.

Matt Coles:

So,

Chris Romeo:

Matt, Matt's scale of whether something's, Matt's scale is like sufficient or not insufficient. And that's it. That's all he gives

Izar Tarandach:

Acceptable! Acceptable!

Chris Romeo:

Okay. Mr. Mr. Data here on the program, you know, that's, that's a sufficient answer.

Matt Coles:

I guess the only answer I'm gonna give here is that the, the capabilities are like a model. They're meaningful, they have to have meaning and value to the people who are using it. They are things that are measurable. They are things that we think are going to be valuable to those who use them. Uh, and we'll see. Some of them may not be as, as, may not be of interest to some. Uh, we may see that risk management doesn't get, uh, as much traction as, as, as we expect. Or maybe there's a nuance we missed. And that's certainly fine, right? This is version one. Anybody who adopts these capabilities will almost certainly see an improvement in their ability to have a program for threat modeling that is effective for them. And whether that's ignoring actors, looking at attacker data, using threat intelligence, informing or driving risk, it's all positive.

Izar Tarandach:

Very true.

Chris Romeo:

Yeah, good, good way to, uh, to wrap it up there for us. So Adam, anything you want to point, uh, our audience towards? I know you've written a bunch of things in the past, and are always writing new things for the future, so anything you want to point the audience to?

Adam Shostack:

Um, you know, just in the space of what should people learn, one of, one of the things that I talked about in the threats book, so I'll point people to the threats book and I'll point people towards the end of it, where I talk about genres of music versus new songs. Right, and I think of the threats. Thank you. Thank you. Um, I think of the threats as genres, rock and roll, jazz, and there's a new song every day. And I think it's important to understand what the genres are more than, and look, it's great to get excited about a new song or a new threat. But the context is the thing that I think is really important. And as Matt was saying about the capabilities document, getting, maturing our thinking, moving away from I think to here's evidence that this works or that works is really the place I'd love to see everyone go.

Izar Tarandach:

The lovely thing about the music genres analogy is that you got the whole thing going, and then all of a sudden they find bits and pieces of an old Beatles recording, and then you have a hit. So things that nobody paid attention to before, all of a sudden they come together in a different way, and bang, you have another thing that you have to pay attention to.

Matt Coles:

Getting ready for the AI deepfake.

Chris Romeo:

Alright, well, thanks folks for listening to another episode of the Security Table. Adam, thanks for joining us and uh, just walking through these issues with us and uh, we'll have you back at some point in the future to talk and discuss something else around the Security Table.
