The Security Table

The Hamster Wheel of Scan and Fix

September 26, 2023 Chris Romeo Season 1 Episode 30

Matt and Izar join in a debate with Chris Romeo as he challenges the paradigm of "scan and fix" in application security. Chris references a LinkedIn post he made, which sparked significant reactions, emphasizing the repetitive nature of the scan-and-fix process. His post critiqued the tools used in this process, noting that they often produce extensive lists of potential vulnerabilities, many of which might be false positives or not appropriately prioritized. He underscores the need for innovation in this domain, urging a departure from traditional methods.

Izar gives some helpful historical context at the beginning of his response. The discussion emphasizes the significance of contextualizing results. Merely scanning and obtaining scores isn't sufficient; there's a pressing need for tools to offer actionable, valid outcomes and to understand the context in which vulnerabilities arise. The role of AI in this domain is touched upon, humorously envisioning an AI-based scanning tool analyzing AI-written code, leading to a unique "Turing test" scenario.

Addressing the human factor, Izar notes that while tools can evolve, human errors remain constant. Matt suggests setting developmental guardrails, especially when selecting open-source projects, to ensure enhanced security. The episode concludes with a unanimous call for improved tools that reduce noise, prioritize results, and provide actionable insights, aiming for a more streamlined approach to application security.

Chris encourages listeners, especially those newer to the industry, to think outside the box and not just accept established practices. He expresses a desire for a world where scan-and-fix is replaced by something more efficient and effective. While he acknowledges the importance of contextualizing results, he firmly believes that there must be a better way than the current scan-and-fix pattern.

FOLLOW OUR SOCIAL MEDIA:

➜Twitter: @SecTablePodcast
➜LinkedIn: The Security Table Podcast
➜YouTube: The Security Table YouTube Channel

Thanks for Listening!

Chris Romeo:

All right. Hey folks, welcome to another episode of the wild and wacky world of The Security Table. You could only see the five minutes between when we hit the record button and now, as far as what just went down, but unfortunately, due to content restrictions and YouTube rules, we're not allowed to play that part. I'm Chris Romeo, joined by my good friends Izar Tarandach and Matt Coles, and we talk about all things security around the security table. And so I kicked a hornet's nest, which I've become fond of doing now, but I kicked a hornet's nest real good. So we're going to talk a little bit about this post I put on LinkedIn, the reactions we got from some different people, and Matt and Izar's perspective on it. I put the post out a couple of days ago, and we'll put a link to it in the show notes in case you want to jump in. The premise of the post is the hamster wheel of scan and fix. My premise is that in application security, we are on this hamster wheel of scan and fix. So I started thinking about, okay, where do we get this from? Why do we have this pattern of tooling that scans something and then generates a list of 10,000 different things that have to be fixed? What's the earliest tool that we are aware of in AppSec? It's SAST, Static Application Security Testing. But SAST didn't create this pattern. Vulnerability scanners, way before SAST, created this pattern of: let's scan something, let's generate anywhere from 100 to 100 million results, and let's put it into a list and send it to somebody. But SAST is really where it entered the picture from the AppSec perspective. And I think this pattern is just wrong. I think it's just broken, and I think we've seen a history of the challenges that following this pattern creates in working with developers.
And so I started thinking: is AI the answer? Can I get an AI bot that will create a PR and fix my problems for me? I'm still doing scan and fix at that point, it's just fancy robot fixing, so I don't think that's the answer. Is it RASP and IAST, the pattern which I think of as: view a request and then block or allow the request? Could we do SAST inside the runtime? But then I start thinking about performance issues and just the strangeness of where we place that control. And then my last thought was, do we do this in the IDE, like people have been talking about forever? Does that get us close enough to the developer to fix the problem? So with that, I think I've spent enough time setting up what this post was. It's funny what happens when it's a Wednesday or Thursday night and you have to write something for a weekly newsletter and you can't think of anything. Do you guys know Travis McPeak, from Resourcely, who was at Netflix? He had a post that basically said scan and fix is wrong, and I'm like, yeah, let me write some more paragraphs that go with that. And all of a sudden I had a hornet's nest that I kicked. So, all right, what do you guys think? What's your reaction to this? Matt's giving me that look, like he's ready to fight over this. I love it. Let's go.

Matt Coles:

I'd rather hear Izar's response first because, uh, yeah.

Izar Tarandach:

So, the year was 199x, right? And the thing coming out was SAINT and SATAN, I forget which one came first, from Dan Farmer and Wietse Venema. And all of a sudden, everybody started looking around their Unix boxes and seeing these processes that were scanning them from the outside and checking for open services, and all this overflow stuff, and checking for configurations: this thing is too permissive and that thing is too open. That was actually the first time that I met scan and fix. A bit later on, I worked with a good friend of ours in a company called Netect, creating a scanner called HackerShield that was doing basically the same thing: scan somebody, please come and fix. And that was 1997, '98, '99, something like that. And that's when I started building my own sense of, what the hell is happening here with this scan-and-fix thing. Very quickly, to me at least, it became clear that it was a quote-unquote "solution" to the problem of, again, a black art needing to be made available, commoditized, and the constant search for a silver bullet. Me as a network engineer, me as a system administrator: I want this tool that I point in the direction of my box, I click a button, and it gives me a list of things that I need to fix in order to be secure. And that gives me psychological safety on a bunch of different things. First of all, I have the imprimatur of a recognized tool, written by people who know their stuff, that actually comes and says this box has been scanned. And I can go to sleep at night knowing that somebody who knows more than me took a look over it. Then I have the psychological safety that I don't have to assume the responsibility, because the tool itself is responsible for what it says, and if there's a problem down the road, I can always point at the tool and say the tool didn't tell me that.
Because at that time, and even today, that kind of knowledge is not immediately available. So you're basically hiring an expert in that scan cycle. And then there's the fact that, I think, and I could be wrong, but I think it's much more natural for a human being to produce something focusing on the things that interest them, then put it on the table and tell people: now somebody who knows more than me about other things, come and look at it and tell me what needs to be different. So scan and fix, to me, is something that fit very well in that model of first I build and then I test; first I build and then I bolt on the security. And even if we look at SAST, way back in the day, I don't remember the name of the scanner, it wasn't ISS, it was something else, but it would go over a C program and basically just tell you, hey, you're using memcpy here. Or things like that. Or even, uh, Microsoft's, uh, hmm?

Matt Coles:

RATS, probably, or Flawfinder.

Izar Tarandach:

Probably, yeah, RATS. And then there was Microsoft with the include... What was the name of the include that you could use, and when it

Matt Coles:

uh, dangerous functions.

Izar Tarandach:

Dangerous functions.

Chris Romeo:

the library.

Izar Tarandach:

And then,

Matt Coles:

Uh, banned.h, I think it was,

Izar Tarandach:

Right. And then GCC started including some security stuff in its warnings, right? But that came later. So it wasn't even SAST; it was first the network scanner, the thing that looked at the box from the outside, that started, in my memory, the whole scan-and-fix cycle, and SAST only came later. And we could even say that these were the first forms of DAST, I think, right? I remember bugs like the palmito bug, that went against ProFTPD, that Jordan Ritter found in, I think, '97, '98, and it was already doing the whole handshaking, and then, at a later time in the protocol, it would do the buffer overflow. So you could say that's an early form of DAST, extremely focused on that specific thing, what later became a CVE, but DAST. So in your writing you pulled SAST as the first thing, and I just think that it's the other way around, and that matters, exactly because of that thing of: I have my box, I point something at it, it says that I'm okay, I can go to sleep, fine.

Chris Romeo:

Yeah, but those, those scanners in those early days were vulnerability scanners, right? Like

Izar Tarandach:

Without being called so.

Chris Romeo:

Nessus was not a DAST. It wasn't testing application level issues. It was testing for, a lot of times it was looking at a banner and saying this banner says 1.85.

Matt Coles:

Right, a port scanner. And

Chris Romeo:

a vulnerability in 1.85 and you've got a

Izar Tarandach:

Right, but the

Matt Coles:

A vulnerability scanner, and a configuration scanner.

Izar Tarandach:

but the example of the palmito bug, the ProFTPD one, as implemented in HackerShield, for example, already put you much closer to a DAST, because it was actually building a payload, exercising the protocol, and then inserting the payload. It wasn't just checking the version and saying, oh, the version looks old, so maybe you have this thing. So my point is that it started from looking from the outside in, it moved into looking at the code as it gets written, and then it got deeper and deeper into the code as it's compiled, as it's running. And then we got to where we are today, with RASP and IAST and all the good stuff.
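The distinction Izar draws between early banner-style vulnerability scanners and proto-DAST checks can be sketched in a few lines. This is a toy illustration, not the actual HackerShield logic; the product name, the `send` callback, and the oversized-argument probe are all invented for the example.

```python
def banner_check(banner: str) -> bool:
    """Early vulnerability-scanner style: trust the advertised banner.

    Flags a finding purely from the version string, without ever
    exercising the service. (Product name and versions are made up.)
    """
    name, _, version = banner.partition(" ")
    return name == "ExampleFTPD" and version.startswith("1.2.")


def protocol_check(send) -> bool:
    """Proto-DAST style: do the handshake, then insert a crafted payload
    later in the protocol, and judge by observed behavior, not the banner.

    `send` is a caller-supplied function that writes one command to the
    service and returns its reply, or None if the service stops responding.
    """
    send("USER anonymous")
    send("PASS guest@example.com")
    reply = send("MKD " + "A" * 512)  # oversized argument as the probe
    return reply is None              # no reply: the service likely crashed
```

The first check is cheap but lies whenever the banner lies; the second actually exercises the protocol, which is why Izar calls it an early form of DAST.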

Matt Coles:

Yeah, so,

Izar Tarandach:

Now...

Matt Coles:

can I get a word in there too? I'm serious,

Izar Tarandach:

Hahahahaha

Matt Coles:

Thank you for the history lesson, I 100 percent agree with you. You touched upon this, but I think it's really important to reiterate, at least on this side of the fence of this argument: the tools themselves are not the problem. The tools are an easy scapegoat. Why do the tools exist, why does scan and fix as a cycle exist? It's similar to the quality problem, right? Developers are not infallible, designers are not infallible. You have to have a scalable way of analyzing a system for defects, whether you're looking at quality or security, and tooling, automation, helps with that. Part of the problem is that the tools have gotten very noisy over time. So I'm going to be careful and try to frame this properly, and table that for just a moment, but I'm going to introduce that concept of noise in these tools. We have a need for building quality in. We have a need for building security in, and the way we do that is either we ensure that what gets written is secure,

and in order to do that we have to do analysis, right? Or we have to have very defined patterns. But because we want developers to bring their intelligence and their creativity to bear, we don't make everything cookie cutter. And so you have to have a way of analyzing code, looking for secure and insecure patterns. You have to have a way of looking at components, looking for components that have vulnerabilities. And due in part to the complexity of the systems we're looking at, and the noisiness of these tools, you get this problem of: I have so many things now to look at, what do I catch first? And then I have to keep iterating and iterating and iterating. So it drives an organizational pattern, a program process pattern, that utilizes the tool in this iterative approach. And I think that's fundamentally why there's this perception of a hamster wheel of scan and fix. But we have a need for this, because humans make mistakes. As Izar rightly called out, the tool provides a certain amount of assurance, but the tool also brings with it a certain amount of noise. And so we need to overcome those challenges to,

Chris Romeo:

That is the tool,

Izar Tarandach:

No, wait.

Chris Romeo:

challenges are the tool.

Izar Tarandach:

No, no, no, no,

Chris Romeo:

You can't disconnect the tool from the challenges in the tool. The tool has the challenges, and so somebody has to build something better. That was my whole point. We need a better pattern. The tools need to implement a better pattern than scan and fix. There's gotta be a better way.

Izar Tarandach:

At the end of the day, scan and fix is a response to something existing, and to somebody needing to rent knowledge. Right?

Matt Coles:

and by the way, if you didn't scan, what would you do?

Izar Tarandach:

Exactly. So you are renting knowledge. You are renting what was in the head of somebody else who knows that stuff well, who coded it in a certain way that can be used in a tool to figure out those things. So,

Chris Romeo:

From the head of somebody, and I'm gonna draw the illustration further, who has forgotten random things in the midst of all the things that they know, and tends to see things that don't exist. And so, you know, you're not renting a

Izar Tarandach:

Wait, wait, wait, no, no, no. Let's look at things the way that they are, not the way that people are saying they're going to become soon. A scanning tool is basically a decision tree: if you see this, and you see this, and you see this, chances are that you have a problem. Now, the verbosity of these tools errs on the side of caution. It was decided at some point that it was much better to let people know there might be a problem than to shut up about it and be bitten by it. The problem is that we have so many chances now for problems to appear, and so much complexity in the way those problems may appear, that the amount of alerts at all the different levels is ridiculous. So, rather than break the pattern, which I still see as a pattern that has value, especially as the knowledge that we have to rent gets better and better, I think that, going to what Matt said, it becomes a problem of prioritization. Now, that prioritization, again in my opinion, could be completely wrong. A huge part of it is contextualization. And contextualization will only come from knowing what you are scanning and what environment that thing operates or lives in. And that's when you take a step back from scan and fix to: let's understand the environment where this thing that I am scanning lives, and look at other factors that all of a sudden may inform, contextualize, enrich all the different aspects that my scan is providing me and that the rules are acting on top of, to actually give the customer the top things that they have to deal with. Does that make any kind of sense?
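Izar's "a scanning tool is basically a decision tree," plus the contextualization step he argues for, can be sketched like this. The rule, severity labels, and environment fields are invented for illustration, not taken from any real scanner.

```python
def raw_finding(code_line: str):
    """The decision tree: if you see this... chances are you have a problem.
    Errs on the side of caution, as Izar describes."""
    if "strcpy(" in code_line:
        return {"rule": "unbounded-copy", "severity": "high"}
    return None


def contextualize(finding, env):
    """Re-prioritize the same raw finding based on where the code lives."""
    if finding is None:
        return None
    adjusted = dict(finding)
    # Not internet-facing and only trusted input: still reported, but lower.
    if not env.get("internet_facing", True) and env.get("input_is_trusted"):
        adjusted["severity"] = "low"
    return adjusted
```

The point of the second function is Izar's: the scan output is the same either way; it is the environment that decides what lands at the top of the customer's list.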

Matt Coles:

And by the way, if it wasn't scan... I mean, if it was manual code review instead of SAST, we would still have the same problem. Right?

Izar Tarandach:

With the added problem that you are not renting recognized knowledge, you are... trusting the knowledge that you have in place.

Matt Coles:

Well, and you'd probably have to hire a ton of people to scale to the level that you can execute a tool at. Same thing for pattern-matching rules for vulnerabilities, or looking at a component inventory, or looking at a system and fingerprinting binaries and doing manual matches, right? The tool replaces the human. Automation always improves productivity, because it does what a human can do by hand. It does it faster, usually.

It does it better, usually.

Chris Romeo:

Hold on. So if we had enough experts... let's do a little thought experiment here. Let's just assume we have unlimited experts.

Matt Coles:

And unlimited money, because you'd have to pay them all, right?

Chris Romeo:

Yeah, but the thought experiment isn't considering money. Well, because you said...

Matt Coles:

Money's one of the challenges here, just to let you know.

Chris Romeo:

But no, no, what you said was, automation is always better than manual, effectively. And so my point is, if I had an unlimited number of experts, automation would not be better than manual. If I had an unlimited number of Jim Manicos, who know a lot about secure coding in a lot of different languages, that could look at the code, and we had unlimited time, would an army of Jim Manico clones come up with better results than running a SAST tool?

Izar Tarandach:

That depends. Does the SAST tool have the same decision tree in it that Jim Manico has in his mind?

Chris Romeo:

I don't think any SAST tool at this point has the decision tree that's in Jim Manico's mind, just because he's experienced things. SAST tools don't experience things. They're just rules, where somebody tried to capture something and tried to make it not so noisy.

Matt Coles:

so for any

Izar Tarandach:

what you're

Matt Coles:

So for any given rule set, the tool is going to be more efficient at executing those rules than an army of humans, because humans will almost certainly miss something. Even if the

Chris Romeo:

you're assuming that you can load that engine with the rules that are in Manico's head.

Izar Tarandach:

No, we are assuming the fact that... let's go philosophical here for a second. Given that all humans are fallible, and tools are made by humans, tools are fallible. Now, in your thought experiment, you say: if I had an infinite number of Jim Manicos. It could be argued that, given the same input, an infinite number of Jim Manicos would produce the same output all the time, because he himself has his own process. And I really hope, Jim, if you're listening to us, you know this is coming out of love. So, given the same input, he would produce the same output. Why? Because in his mind he has a certain decision tree that he uses most of the time, and that we are trying to reproduce in our tools.

Chris Romeo:

I mean, if you're talking about the same piece of code, yes. But if you're talking about scanning different examples of code, or having an army of Jims review multiple different pieces of code, the thing that he could bring to the table that the scanning tool can't is experience. His mind can interpret things he's seen before, knit them together, and see new things. That's my point: a tool can't do that. A tool is only as good as the person

Izar Tarandach:

No, no, no, wait, wait. If you take SAST into consideration, okay, we have seen enough approaches to SAST, and I am by no means an expert or a researcher in SAST, but you have seen the grep approach; you have seen the let's-interpret-the-code, the tainting approach; you have seen the in-the-language approach of Perl's taint mode, right? So what they try to do, in a certain way, is either look for patterns, or look for an interpretation of the data coming in, how it gets transformed, and what goes out of it. And these are all rule-based at the end of the day. What you're proposing with experience is: because I have seen many different things, my set of rules is much richer.
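The two SAST families Izar names, the grep approach and the tainting approach, can be contrasted in miniature. Both are toy sketches over pseudo-statements, not real analyzers; the statement encoding and the choice of `input()`/`exec` as source and sink are invented for the example.

```python
import re


def grep_scan(lines):
    """Pattern approach: flag any textual use of a risky call.
    No data flow, so it flags safe and unsafe uses alike."""
    return [i for i, line in enumerate(lines) if re.search(r"\bexec\(", line)]


def taint_scan(statements):
    """Taint approach: mark data from input() as tainted, propagate it
    through assignments, and flag only tainted data reaching exec().

    statements: ("assign", dst, src) or ("call", fn, arg) tuples.
    """
    tainted = set()
    findings = []
    for i, stmt in enumerate(statements):
        if stmt[0] == "assign":
            _, dst, src = stmt
            if src == "input()" or src in tainted:
                tainted.add(dst)
        elif stmt[0] == "call":
            _, fn, arg = stmt
            if fn == "exec" and arg in tainted:
                findings.append(i)
    return findings
```

The grep scan reports every `exec(` it sees; the taint scan reports only the call that a user-controlled value can actually reach, which is exactly the noise reduction the conversation keeps circling back to.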

Chris Romeo:

Correct. That's what I am. I am arguing that point. Exactly.

Izar Tarandach:

Now we come to where we are in tech, and I hate to go there, but everybody's going there, so why not go there? You train your model and you retrain your model and you fine-tune your model, and you're going to have an amazing model for that specific use case, right? Because it's very difficult to have a very good fine-tuned model for the generic case. So everybody's going there. Everybody's saying that AI is the next thing. I'm going to have a really fun time when I get to sit back and watch an AI-based scanning tool scan the code written by an AI, right?

Chris Romeo:

Infinite loop, we'll be

Izar Tarandach:

Not an infinite loop, but I'm going to enjoy very much that discussion between the two of them. It's going to be the new Turing test. But my point is that, at the end of the day, we are trying to rent that knowledge and duplicate that knowledge in, as Matt said, scalable ways that you can apply again and again at larger scales. My point is that we are going to find more and more issues and findings, but we still have to prioritize them. And for that you need contextualization.

Matt Coles:

So, to shortcut this a bit: I think where you were getting with the hamster wheel post that you made originally is that the tools are noisy and they generate a lot of results, and developers need time to fix, so you have to do scan and fix and scan and fix and scan and fix. The way you solve that is you make the results prioritized, more actionable, less noisy. So reduce false positives while trying to eliminate false negatives, right? Because if you don't catch it in SAST, you're going to catch it in vulnerability scanning, or you're going to catch it in fuzzing, or you're not going to catch it at all, and it's going to go out to the field and be reported back to you, and now you have a bigger loop. And so you need the tools to be smarter, because that will reduce the noise and allow the humans to make intelligent decisions about which to fix first. But you're still never going to solve the problem, because developers are introducing bugs into the system. Until that stops, you still have to have analysis. I mean, if you don't do QA and you ship something...

Chris Romeo:

Everything you just said about what needs to get better in the tools, people have been saying for 20 years, and nobody's done it.

Matt Coles:

and it hasn't happened yet.

Izar Tarandach:

But

Chris Romeo:

That's why we need a new pattern. That's my whole point though. That pattern, I'm saying that pattern cannot be kept.

Matt Coles:

What pattern? Oh, which, which pattern? Which

Chris Romeo:

Scan and fix, the scan and fix pattern.

Matt Coles:

The scan and fix pattern? but not the tools, not

Chris Romeo:

all right, Izar's going to fall out of his chair here.

Matt Coles:

not the tools.

Izar Tarandach:

So Matt, Matt got so close to it, so close to it, so close to it. The thing is

Matt Coles:

the tool.

Izar Tarandach:

No, it's not the tool, it's not the tool, it's the human. Now, the thing is, we don't have to break the pattern. We have to put the pattern, in my opinion, where it belongs. We have to place the pattern in the bigger pattern of things. Now, we keep designing infinite loops and circles and whatever, okay. And,

Matt Coles:

Mobius strips and all. Yeah, that whole works.

Izar Tarandach:

Whatever, right? And that's the nature of the beast. We start with the MVP and we grow to incredible companies, and the thing is cyclical. But Matt said it right: the problem here is that people are fallible, and they continue being fallible. And if we give them tools, and if we change the environment they operate in, they're going to find new ways to be fallible in those new environments, right? So it's like the saying: we keep trying to make things idiot-proof, and the universe keeps making better idiots. So,

Matt Coles:

We love all developers equally, but, uh,

Izar Tarandach:

Yeah, so, the point here, and yes, I am going to go there, I will get there: the thing that's going to break... NO! The thing that's going to put that loop in its place and make things better is only that tool that we all know and love that says: why don't you, before you do something, look into it and see what could go wrong, right? Because,

Chris Romeo:

we had a solution, or we had an idea that would do that.

Izar Tarandach:

It's where you break the cycle, right? It's where you say: rather than just building my thing, putting it on the table, scanning, and hoping that somebody tells me what's wrong, why don't I ask those hard questions first and apply some forethought? I could even use the best experience of a scanner, the best experience of IAST, RASP, things that I've seen, threat intelligence, threat hunting, all that input, and say what's next on the next round. And hopefully that might help. But there's another thing too. We talked about prioritization. Prioritization is not only about pointing at the important things at the top of the list that you have to do first. That's probably the most important part of prioritization. The other part is: what do I do with the rest of the stuff? At which point do you start cutting your losses and say, I accept that risk? And that only comes from understanding that risk and understanding your environment well enough that you can say, those 10,000 things are fine.
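The two halves of prioritization Izar describes, what rises to the top and where you consciously cut your losses, can be sketched as a single triage step. The scores and the acceptance threshold are illustrative assumptions, not a real risk model.

```python
def triage(findings, accept_below=4.0):
    """Split findings into fix-now and explicitly accepted risk.

    findings: list of (finding_id, contextual_risk_score) pairs.
    Anything under the threshold is still recorded, not silently ignored:
    the point is that acceptance is a conscious, documented decision.
    """
    ranked = sorted(findings, key=lambda f: f[1], reverse=True)
    fix_now = [f for f in ranked if f[1] >= accept_below]
    accepted = [f for f in ranked if f[1] < accept_below]
    return fix_now, accepted
```

The `accepted` list is what distinguishes this from the backlog-of-10,000 situation Chris describes next: the risk is the same, but here it is accepted on the record rather than by default.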

Chris Romeo:

I mean, that's the challenge today, right? That risk is being accepted; it's just that nobody's going through a formal process of doing it. They're just leaving 10,000 items on the backlog that

Izar Tarandach:

There's a difference between accepting the risk and putting it under the

Chris Romeo:

I mean, they're accepting it. They're accepting it. They're just not willingly accepting it.

Matt Coles:

implicitly accepting

Izar Tarandach:

yes.

Chris Romeo:

They're not making a statement. They're not going, we're accepting all this. But they're accepting it anyway, because a lack of action is an acceptance of the risk. Not that I'd go to court and make that argument, but...

Izar Tarandach:

Because what we are missing today, I think, is the understanding of that risk, and again, the contextualization of that risk in terms of where we are. It could be that something that's critical... let's not go there.

Matt Coles:

Yeah.

Izar Tarandach:

Something that's a medium in my environment could well keep being a medium, and in your environment, because it can be chained with three other mediums, all of a sudden it gives somebody the opportunity, the means, somebody who has the motive and the inclination, to go and do something bad. And at that moment, it becomes a critical. Even if CVSS didn't step in first and say, it's a critical, panic in the streets. And we should talk about CVSS at some

Matt Coles:

CVSS doesn't actually do that, but go ahead.

Izar Tarandach:

Oh no, but it doesn't do that. But that's the way that we decided to use it. Because we don't have anything better.

Chris Romeo:

Yeah.

Matt Coles:

So let me just add two other things to that list. In order to help reduce this problem of volume and noise, one probably obvious thing is we need our tools to be smarter, in that we maybe need to reduce our reliance on purely signature-based findings.

Izar Tarandach:

Yes.

Matt Coles:

Simply asking, do you use gets or sprintf, produces a lot of volume, right? In old legacy codebases.

But is that necessarily effective? Probably not, because it misses all the control flow and data flow information. And those are the other analysis techniques that were pioneered over the years around SAST to get better accuracy on results, so that a single finding wasn't simply: oh look, you have a variable called password, therefore you probably have a problem with cleartext passwords. No: I have a variable that holds a password, and it goes into a UI element in plain text; that's a cleartext password problem. So you get contextual information. We need our tools to be a little bit smarter in providing actionable, valid results, not simply: you have OpenSSL 0.9.8a, you have this vulnerability and all these other vulnerabilities, right? So very basic signature things we probably need to reduce. The other piece we need to consider, maybe a little bit more aggressive, or unpopular, will be to take developer choice and lessen it; lessen full developer choice. Today, how many GitHub projects are there of open source code, and what guidelines are there for developers to choose what they embed for technology?
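Matt's contrast between a bare signature hit and a contextual finding can be made concrete. This is a toy model; the sink name and the flags are invented, and real SAST engines derive the flow information from actual data-flow analysis rather than taking it as a parameter.

```python
def signature_finding(var_name: str) -> bool:
    """Pure signature: any variable named like a secret gets flagged."""
    return "password" in var_name.lower()


def contextual_finding(var_name: str, flows_to, encrypted: bool) -> bool:
    """Contextual: flag only when the secret actually reaches a display
    sink unprotected, using (toy) data-flow information."""
    return (signature_finding(var_name)
            and "ui_element" in flows_to
            and not encrypted)
```

The signature check fires on every password-shaped name; the contextual check fires only on the case Matt describes, a password reaching a UI element in plain text.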

Izar Tarandach:

Yeah.

Matt Coles:

And then, and that's going to be very unpopular, but guardrails.

Chris Romeo:

So hold on, let's unpack that before we go any further. Matt is prescribing an authoritarian approach to development, which... okay, what you just described sounds like more than guardrails, right? Because you're making design decisions for people now. You're not giving them the freedom to operate. I think of guardrails,

Izar Tarandach:

No, no, no, that's not what he's doing. By putting guardrails, he's perhaps limiting the options you have to choose from when using somebody else's work. But he's not saying change your design.

Matt Coles:

Or potentially which patterns you implement, but to a limited set. I think it's necessary: if you don't put guardrails on what your choices are, whether it's component selection, technology selection, design patterns, then people are going to just invent new stuff, which requires a lot of effort to get right. We've talked about this in past episodes. A lot of effort to get right, and it jumps you right into the scan-and-fix problem, because now you're introducing new problems. Now, it's limiting. Like I said, this is very unpopular; the fan mail, hate mail, whatever, is going to come along on this, I'm sure. But if you don't want to keep scanning and fixing, scanning and fixing, you need to limit the ability for vulnerabilities or other issues to get introduced into your code base or your technology platforms. And this is a way to solve it. I'm not suggesting it's the way, or even necessarily that you should go this way, but putting guardrails on choice is a way to solve this.
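Matt's guardrails idea can be sketched as a CI-style gate where developers choose only from vetted components. The allowlist contents below are placeholders for illustration, not a recommendation of specific packages or versions.

```python
# Hypothetical organization-curated allowlist of components and versions.
APPROVED = {
    "requests": {"2.31.0", "2.32.3"},  # placeholder vetted versions
    "flask": {"3.0.3"},
}


def check_dependency(name: str, version: str) -> str:
    """Guardrail: allow only pre-vetted components and versions."""
    if name not in APPROVED:
        return f"BLOCK: {name} is not an approved component"
    if version not in APPROVED[name]:
        return f"BLOCK: {name} {version} is not a vetted version"
    return "OK"
```

Running a check like this before a dependency lands is the "limit what gets introduced" move Matt describes: the scan-and-fix wheel spins slower because fewer unvetted components enter the codebase in the first place.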

Izar Tarandach:

Yeah.

Chris Romeo:

Yeah, I agree with most of what you're saying here, but the reason I'm still saying the scan and fix pattern doesn't work is that nobody's done it. We've had 20 years of SAST tools and nobody's done what you described. Nobody's made it more actionable. They'll all tell you on a sales demo: oh yeah, we're all about fidelity of results, we're about limiting false positives. They all say these words, but here we are, 20 years later, with the same piles. There are organizations with tens of thousands of tickets that just get junked: they come out of the tools, they go in, they get junked.

Izar Tarandach:

Chris, I think we're generalizing a bit here. If you look at SAST... yeah, I'm going to agree with you, even though, to be sure, the cycle has come full circle. We started with grep, and basically we're back to grep now, of course with AI coming in. But that grep got way, way smarter. The counterexample I want to offer is that we started with firewalls. Firewalls that did open and close: this port is open, this port is closed. Fine. Then they got smarter; we started getting packet inspection. Then security got smarter, we started encrypting packets, so packet inspection sort of went the way of the dodo. Then there was a strategic retreat: let's step back from the firewall, let's start talking WAFs. So now the traffic is not encrypted, but before it gets to the application, let's apply rules again and check the traffic. Then the attackers got smarter and those rules got bypassed. So we did another strategic retreat, and we circled the wagons around RASP and IAST, and now we're checking at the code level. Take SQLi: you have the whole query package, and you can look at it before you actually act on it, or you can alert, or you can have an in-app WAF, and so on. And that, to me, means that rather than defense in depth, we are being forced to bring the defense as close to the crown jewels as possible. And what's left to do now is, because we're doing it so close to the crown jewels, all of a sudden we have, again, so much context that we can apply on top of those rules to actually say: this is a good invocation of a query, this is a bad invocation of exactly the same query. Somebody doing a select on my table of credit cards: if it's coming from this endpoint, it's a good one. If it's coming from that endpoint, probably not a good one.
If I've already seen it coming from that endpoint, okay, that just adds some data that might influence how I think about it being good or bad. And this way you start building a level of confidence on top of that thing, which of course has to be fast enough that you don't break the whole thing. But you start building that confidence in a way that's smarter than just raising the flag and saying: this is a bad query, everybody stop, we have an incident.
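
Izar's credit-card example can be sketched as a tiny context engine: the same SELECT is judged by where it comes from and what has been seen before, not by the query text alone. The endpoint names, the table name, and the escalation policy are illustrative assumptions, not any real product's behavior.

```python
from collections import Counter

ALLOWED_SOURCES = {"/billing/charge"}   # endpoints expected to touch card data
history = Counter()                     # how often each (endpoint, query) pair was seen

def judge(endpoint, query):
    history[(endpoint, query)] += 1
    if "credit_cards" not in query:
        return "allow"
    if endpoint in ALLOWED_SOURCES:
        return "allow"                  # a good invocation of the query
    # Unexpected source: repeated sightings raise confidence it's hostile,
    # so escalate gradually instead of declaring an incident on first sight.
    seen = history[(endpoint, query)]
    return "alert" if seen == 1 else "block"

q = "SELECT * FROM credit_cards"
print(judge("/billing/charge", q))   # allow: expected endpoint
print(judge("/search", q))           # alert: first sighting from an odd endpoint
print(judge("/search", q))           # block: confidence has built up
```

The check has to stay cheap enough to sit in the request path, which is the "fast enough that you don't break the whole thing" constraint mentioned above.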

Matt Coles:

You know, this... sorry, Chris... this notion of contextualization is actually really interesting if you think about it. SAST, as a general purpose tool, needs to run across many types of codebases to look for many types of issues, without a lot of context,

Chris Romeo:

no

Matt Coles:

as you get.

Izar Tarandach:

With no context.

Matt Coles:

a lot of context.

Chris Romeo:

no runtime, they have no idea what's happening in the

Matt Coles:

Exactly. That's right. Because it's not dynamic, it's static. It's SAST, right? It's looking at code. It doesn't necessarily know what that code is used for. It knows how the code is structured, and it can analyze it and say: oh, this is C code, and it does a certain set of things. But not that it's going to live in an IoT device or an enterprise server or a desktop app or a mobile app or whatever. It may not know that. Some of the tools sort of know that at a macro level, but not generally. Or it doesn't take action in that regard: it uses it for reporting, but not necessarily for analysis purposes. And you need this, because developers write code and code goes so many places, so today we have these general tools. We could improve that by knowing about the target runtime or the target use cases, and figuring out how to tell the tool: okay, this is going to be used in embedded devices versus an enterprise server, and my network is going to be X, Y, and Z. Then you can get that context. But as you shift closer to deployment, you're gaining more and more information that you can

Izar Tarandach:

Yes,

Matt Coles:

use in those

Izar Tarandach:

yeah.

Matt Coles:

providing context. So RASP is probably a great solution at that level, because it has a lot of context to work from. It's just not necessarily universally accessible.

Izar Tarandach:

We have. I am fortunate to work with amazing people every day who are doing this. So yes, it's possible, and yes, it's a route. The point, I think, is that scan and fix by itself is not a bad thing. It may be badly used, it may be underutilized, and it may generate results that are not optimal. But if you start putting it into context, by adding more and more understanding of where that scan is happening, you're going to have shorter, prioritized, contextualized cycles of fix. So I want to separate the scan and the fix, right? And in the middle we have to put an engine that provides context. As people say, context is king.
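
The "separate the scan and the fix, with a context engine in the middle" idea might look like this pipeline sketch. The finding fields, component names, and exposure weights are invented for illustration; a real engine would draw on runtime, network, and deployment data.

```python
# Stage 1: scan. Stand-in for any scanner's raw, noisy output.
def scan():
    return [
        {"id": "CVE-A", "severity": 9.8, "component": "internal-batch-job"},
        {"id": "CVE-B", "severity": 7.5, "component": "public-login-page"},
        {"id": "CVE-C", "severity": 9.8, "component": "unused-test-lib"},
    ]

# Stage 2: contextualize. Hypothetical exposure weights per component.
EXPOSURE = {"public-login-page": 1.0, "internal-batch-job": 0.4, "unused-test-lib": 0.0}

def contextualize(findings):
    for f in findings:
        f["priority"] = f["severity"] * EXPOSURE.get(f["component"], 0.5)
    # Drop what context says is unreachable; sort the rest for the fix queue.
    return sorted((f for f in findings if f["priority"] > 0),
                  key=lambda f: f["priority"], reverse=True)

# Stage 3: fix. Developers see a short, prioritized queue, not a raw dump.
fix_queue = contextualize(scan())
print([f["id"] for f in fix_queue])  # ['CVE-B', 'CVE-A']: the 9.8 on dead code is gone
```

Note how the internet-facing 7.5 outranks the 9.8 on an internal batch job, and the 9.8 on an unused library disappears entirely: same scan, shorter fix cycle.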

Matt Coles:

And we also need to make some other fundamental changes, like stopping standards development from relying on simply the scan as the goal,

Izar Tarandach:

Yes.

Matt Coles:

right? So, you know, PCI DSS, I think, still requires a scan with a vulnerability scanner every 30 days, and patch. That's scan and fix,

Izar Tarandach:

But that goes to the reasonable security bit, right? You do what you can at that moment. It's better than not doing it at all.

Matt Coles:

But that's something we fundamentally need to change in the industry, because people get into that mindset of: oh, I need to run a scanner, I'm going to take the results, I'm going to patch on a priority list from 10 to 0, right? Again, using CVSS in a way that wasn't intended. Or rather, making an interpretation of the information you get out of CVSS scores and severity ratings and doing this prioritization: oh, 10 is really bad, and 8 is not as bad, and therefore I can wait on those. Without knowing the context about that 10 deep in the network versus that 8 on the front end, there's additional context information that you don't have when you're making those decisions.

Izar Tarandach:

Parenthesis: better things are coming with CVSS v4 than before, and Matt was one of the collaborators in the group. So yeah, I've been there, I looked at it, and better things are coming.

Matt Coles:

Public preview is underway right now, and hopefully it'll be released in the very near future, available for consumption. But again, we have to interpret the results and make use of them in a different way than simply taking a score and using it as a blind measure of insecurity.

Chris Romeo:

Let me summarize what I think I heard, and then I'll give you my final thought. Contextualization, then, is the argument for fixing scan and fix. Your argument is that scan isn't the problem, it's fix. And fix is a problem because we don't have contextualization: we have too many results, too many false positives, too many false negatives. That all makes sense, and I agree with it. It's just that I haven't seen anything happen in 20 years that's getting me closer to it. So it's tough to say we shouldn't think about another pattern when, I agree, that's almost a perfect state: if you had contextualization and could get to the point where you gave developers five things. Here are five things that are real, at the same confidence level as a RASP finding. That's the thing I love about RASP. If RASP detects a SQL injection, guess what? You've got a SQL injection. Because it's inside the app, it's watching it execute, and then stopping it before it can do damage.
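
Why a RASP finding carries that kind of confidence can be sketched with an in-process hook around query execution: the check sees the exact query about to run, so a hit is an observed attack, not a static guess. The detection heuristic below (a crude tautology signature) and the `rasp_execute` wrapper are deliberately simplistic stand-ins; real RASP instrumentation is far more sophisticated.

```python
import sqlite3

def rasp_execute(cursor, template, user_input):
    # Simulate vulnerable string-built SQL, then inspect it just before execution.
    query = template.format(user_input)
    if "' OR '1'='1" in query:               # crude in-app injection signature
        raise RuntimeError(f"RASP blocked SQL injection: {query!r}")
    return cursor.execute(query)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES ('alice')")
cur = db.cursor()

# Legitimate input passes through untouched.
rows = rasp_execute(cur, "SELECT * FROM users WHERE name = '{}'", "alice").fetchall()
print(rows)

# Hostile input is caught at the moment it would have executed.
try:
    rasp_execute(cur, "SELECT * FROM users WHERE name = '{}'", "x' OR '1'='1")
except RuntimeError:
    print("blocked")
```

Because the hook fires on the real query at runtime, a block here corresponds to an actual injection attempt, which is the "if RASP detects it, you've got it" property.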

Izar Tarandach:

Is it,

Chris Romeo:

That's a whole other... don't get me started; that wasn't the focus of my point. But my conclusion is: yeah, I agree. I would love for all those things you guys described to happen. I've been waiting 20 years. Do I have to wait 20 more years in this industry to see it? I don't know if I've got 20 more years of AppSec left.

Izar Tarandach:

Chris, I have to agree with the way that you put it. But I have to raise a bit of a problem with the claim that you haven't seen anything in 20 years. In 20 years, the target has moved a lot. We are not defending the same things we were defending 20 years ago. 20 years ago, you were defending an on-prem, everything-in-one-box server doing something. Today you are serverless, in the cloud, with multiple cloud providers, multiple identity systems, and whatnot. I mean, it's chaos.

Matt Coles:

And, and, and how many programming languages do you have to

Izar Tarandach:

Exactly. Exactly. The runtimes, they just appear every day.

Chris Romeo:

That's not an excuse to have tools that aren't better.

Izar Tarandach:

It is. It is. It is because hitting a moving target is much harder than hitting a static target, right?

Chris Romeo:

But I mean,

Izar Tarandach:

Let me put it like this: with what we have today, if we were to scan a target from 20 years ago, you would have a very different opinion of the results. But we are not shooting at targets from 20 years ago; we are shooting at targets from today.

Chris Romeo:

But the point is we still have the same approach to how we scan a piece of code and generate a series of results. That's my whole point here. Everybody has just bought into this as how we do it. You know what one of the most dangerous things is anywhere, in an organization, in a team, in anything? Status quo: it's just how we do it. This is how we do static analysis. This is how we process results. That's my whole point: for a long period of time, nobody has thought about a better pattern, because everybody's been like, this is just how we do static application security testing.

Izar Tarandach:

Look, I think the people who today point at Copilot and tools like that, writing code together with the developer, and say they're going to save us from scan and fix because the code is going to be perfect beforehand: first of all, they don't know what they're talking about. Second, the code may be perfect, but the design sucks. So, again, we keep looking for the silver bullet. There is no silver bullet. Nothing is going to...

Matt Coles:

no, finish your thought, sorry, I didn't want to,

Izar Tarandach:

no, no, no, no, no, Matt, go,

Matt Coles:

So, the complexity keeps going up. The boundary of choice is still infinite. And I'm not suggesting we must change that, but maybe it's something we need to look at. And so, is it garbage in, garbage out? As you build more complex things, the scope of what you have to analyze keeps increasing, and the things you have to account for keep changing, right? So containerization and Kubernetes became new at some point, and all these things need to be accounted for over time. And for an organization that has to make money selling scanning utilities... with open source projects, you could argue, you could have a developer who is really interested in solving this problem, who could solve it in one use case and then work on the next use case, and the next, because they're volunteering time and not trying to make money off of it. And nobody's done that, because nobody's taken the effort, or nobody has the brainpower to do it effectively. Or maybe it exists at scale and we just haven't seen it yet. So just because we haven't seen it in 20 years doesn't necessarily mean that, A, it can't be done, and there I'd agree with you, but B, is it valuable to do? Is it feasible? Would anyone use it if they did? The other thing I would add, on the garbage-in part: if we have a concerted effort, as CISA and others have been trying more recently, around things like memory-safe languages, where developers can't introduce certain classes of issues, you reduce the problem set. You start reducing that complexity and reducing the infinite-choice problem, right? You cut off a class of errors. You don't have to scan and fix for those, right?
If you solve design problems at design time and you architect a system appropriately, you're cutting off a slice of things you have to scan and fix for. If you start reducing the set of components or technologies that you use, you cut off that slice, and you can add context to the things that are left. Now you've reduced the problem. You're still scanning, you're still fixing, but you're doing it in a much more manageable way.
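
The guardrail idea above can be sketched as a CI-style dependency gate: projects may only pull components from a vetted allowlist, shrinking what later has to be scanned and fixed. The component names, versions, and manifest shape here are all hypothetical.

```python
# Hypothetical allowlist of vetted components and versions.
APPROVED = {
    "requests": {"2.31.0"},
    "cryptography": {"41.0.4"},
}

def check_dependencies(manifest):
    """Return a list of guardrail violations for a {name: version} manifest."""
    violations = []
    for name, version in manifest.items():
        if name not in APPROVED:
            violations.append(f"{name}: component not on the approved list")
        elif version not in APPROVED[name]:
            violations.append(f"{name}=={version}: version not vetted")
    return violations

# A build that reaches for an unvetted component fails the gate.
manifest = {"requests": "2.31.0", "leftpad-py": "0.1.0"}
for v in check_dependencies(manifest):
    print(v)
```

A gate like this is exactly the trade being debated: it limits developer choice up front so the downstream scan-and-fix surface stays small.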

Chris Romeo:

If somebody builds a new pattern and comes up with something innovative, I won't have to scan and fix at all anymore. There'll be something else.

Izar Tarandach:

Hey, let's see CWEs, let's see CVEs, right?

Matt Coles:

If you can solve how quality assurance works. Because this is exactly what this is: quality assurance, right? We just replaced people doing automated testing with a scanner.

Izar Tarandach:

Yep.

Chris Romeo:

Oh, here's my challenge to all the entrepreneurs out there: dream up a better pattern, bring it to market, and see what happens. I think you're going to have some very interesting results.

Matt Coles:

It's the same pattern. You're just getting better information in the output. It's the same pattern: you're scanning with context, and then you have to fix. Scan and fix.

Izar Tarandach:

But Chris, now putting on my professional hat: what Matt said is compounded by a lot of different things. As complexity goes up, it's not only the security problem that goes up; it's all the associated bits of the ecosystem that come together to create what we today call systems, which are put in place and have to run at five nines and whatnot. That makes the problem of scanning more complex, and it makes the problem of contextualization even more so. And then we compound that with the fact that we have so many personas out there who want to use these scanners, and each one of them is expecting a different level of fidelity, of quality, and of hands-off work. The five-person startup is expecting a silver bullet that's going to tell them: you're doing everything right and you're secure. The company with the SOC and all that good stuff is waiting for something that will help them with the processes they developed in house, that work for them, that match their organization. That adds the challenge of serving all these different parts of the public in the way they expect to be served, without having to tell each of them a story: here's what I'm offering you, and here's why I think this is the right solution for you. It's mind-numbing.

Chris Romeo:

Yeah, I

Matt Coles:

And I and there's, there's, there is a

Chris Romeo:

My point is I'm asking for somebody to dream up something better.

Matt Coles:

there is a

Chris Romeo:

But five or ten years ago, if someone had suggested that you could do something in the runtime, everybody would have been like: no, no, that's not possible. But now RASP and IAST and runtime observability are...

Izar Tarandach:

a reason for that.

Chris Romeo:

Are a big part of what

Matt Coles:

there is a pattern, there is a pattern that solves this.

Chris Romeo:

uh,

Izar Tarandach:

No, no, no, no, no, no, no, no, no,

Chris Romeo:

MISRA. Let the record show, for those listening on audio, that Matt's card said Ada, MISRA, formal analysis. Hm?

Matt Coles:

You limit choices, and you design in perfection. As best...

Izar Tarandach:

Yes, and 99.999 percent of developers out there would not be able to live in those conditions. But Chris, you say a couple of years ago, looking at the runtime... and Matt is going to correct me here, he's going to jump at it... we had very serious limitations, even at the hardware level, that would not let you effectively look at the runtime. We did not have the observability tools that we have today. We did not have the proper side channels that we have today that give you insight into it, right? We didn't have runtimes that emitted enough signals to tell someone: here's what's happening inside me. We're seeing that problem with AI today. People keep saying: we don't know how we got these results. Why? Because the AI is not emitting enough signals that say: here's where my thoughts are.

Matt Coles:

And by the way, we still have this problem today with certain types of systems, right? Small embedded devices, IoT devices, consumer electronics. How do you get that observability data out of somebody's refrigerator if it's disconnected from the network?

Chris Romeo:

I mean, my point was just that we didn't have a pattern, then we had a pattern, and now we have multiple patterns; observability became a new pattern. So all I'm saying is, I think there's another pattern out there. I wish I knew what it was, because if I did, I would start a company, make a billion dollars, and be done; I'd be retired on a golf course. All I'm saying is, I want to challenge people to think of another pattern. What's wrong with another pattern? What if somebody came up with something better than scan and fix? Would you still argue for scan and fix? Like: yeah, this is better, but I love scan and fix because we've been doing it forever.

Matt Coles:

Uh, let, let me, can I? Oh, sorry. Can I just

Izar Tarandach:

Sorry, Matt.

Matt Coles:

go ahead? Yeah, Yeah, go

Izar Tarandach:

At the risk of being marketing-oriented here: there is a new pattern. Security observability is coming up, okay? There are a lot of solutions being built around that space, a lot of solutions using the tools of observability to emit security signals, and they're great. But at the end of the day, if you look at what happens with those signals, you end up falling back into the pattern of scan and fix, because you have something, you apply rules to it, and you fix the results. So at the end of the day, scan and fix is not only running a scanner and fixing; it's basically check rules and fix. The scan is just the way it happens. What's actually being done is check rules and fix, and the only way we're going to break, or extend, that pattern, so that the fix side of the balance gets better, is by putting the context in the middle.

Matt Coles:

And the last piece I want to throw in there: there's one other concept we haven't introduced in this conversation yet, which is timing. Observability requires a system that's functional, right? And we've talked about, as an industry, over those past 20 years, how expensive it is to fix something that's in the field.

Izar Tarandach:

Yeah.

Matt Coles:

Right? So by the point you can do observability, you've already shipped it, or you're ready to ship it, right? And yes, you can simulate environments and all that sort of stuff, but you're guessing at that point about what your user behavior is going to be. And I'm not talking about cloud services; I'm talking about products, systems, things that get shipped to people and run in the real world. So with observability, you've already shipped, or you're at the point of shipping. It's cheaper to fix things earlier in the life cycle, at design and implementation. Of course, we're no longer doing the waterfall model of development, by and large, but we still have a design or concept phase, some sort of implementation and integration phase, and then some sort of deployment. So do we really want to break the pattern of finding and fixing early, and only allow ourselves to find and fix in the field?

Izar Tarandach:

But wait a minute, Matt. While I agree with you, there is a thing here. Observability for security goes exactly the way you say. Fortunately, people have been using observability for way more things than security. And now we can have the happy surprise of finding the tools of observability already deployed, with all that good stuff, including IoT, right? We can just reap the benefits of that already existing. So it's one of those situations where security has to lift its head out of the box we live in and ask: what else is around here? What else can I use? And these tools exist, and they offer a very high degree of fidelity, visibility, and scannability. We can use that information.

Chris Romeo:

All right, well, when somebody else comes up with a new pattern, I'm going to call you both and say, I told you so.

Matt Coles:

and when they're still doing scan and fix, we'll look at you and go, uh huh.

Chris Romeo:

I mean, and I think there are, but my whole point is that I want to encourage innovation. I want to encourage people to think outside of what we've done. And when I see scan and fix, I see: we've done stuff this way for a long time. That doesn't always mean it's the best way to do it. People can get into that rut of: this is how we do it. I just want to challenge some of the new thinkers out there, new people in our industry: try to think of something different, a different way to do this. And Matt, you took us on a bit of a journey into guardrails, and we could include paved roads in that. That could be part of a different pattern, where you have less choice but more security, and the result is you're able to build something more secure because you're not giving developers the ability to do anything, which, let's be honest...

Matt Coles:

I was not suggesting that at all, but...

Chris Romeo:

okay. that's, that's what I,

Matt Coles:

take, that to an

Chris Romeo:

what I drew

Matt Coles:

Take that to an extreme, absolutely, that would be true. But even just putting guardrails in place, again, you're reducing the problem set, so you make scan and fix a manageable activity,

Chris Romeo:

Yeah, I want a world. I want a world where we don't have to scan and fix.

Izar Tarandach:

Yeah, my point is that you won't have a world where you don't scan and fix, because you'd have to break so many other patterns before you get to that world that the only option you have is scan, contextualize, and fix.

Chris Romeo:

yeah.

Izar Tarandach:

The dog again!

Chris Romeo:

yeah, and the dog agrees with me, by the way. Translation,

Izar Tarandach:

No, I was

Chris Romeo:

in every... No, it's

Izar Tarandach:

Abaze

Chris Romeo:

I mean, I just want to encourage people to think big, think bigger. That's my goal, right? At the end of the day, I don't care where we land on this, but I want to encourage people, especially new people in our industry: think about these things. Don't just accept the things we've always done; think about new ways to do things. And who knows, maybe somebody will come up with something that the three of us will look at and go: huh. Just like in threat modeling, where someone names a threat and you're like: I've been doing this for a long time, and I never thought of that. That's a really interesting idea. That's my point here: I want people to push the envelope for us, so that those of us who have been around a long time can look at something and go: you know what, I never thought of that. I didn't even realize that would be possible. So, all right, folks, thanks for joining us at the Security Table. We look forward to another episode next week, where we dive into something, get super excited, jump around, and argue about it for up to 45 minutes. So thanks for listening to the Security Table.
