The Security Table

Jim Manico ❤️ Threat Modeling: The Untold Story

Chris Romeo Season 1 Episode 25

Jim Manico joins Chris, Matt, and Izar at the Security Table for a rousing discussion of his threat modeling journey. They also share their thoughts on DAST, SAST, SCA, security in AI, and several other topics. Jim is an educator at heart, and you learn quickly that he loves application security. Jim is not afraid to drop a few controversial opinions and even a rap!

Jim discusses the importance of static application security testing (SAST) and how it is becoming increasingly important in application security. He argues that SAST is a powerful tool for detecting vulnerabilities in software and that modern SAST tools can work at DevOps speed. He makes his case for why he believes SAST will be the ultimate security tool in the future.

Jim also talks about the potential of AI in the field of software security, particularly in the area of auto-remediation for SAST findings. He believes that with good data and models, AI-powered remediation engines could revolutionize the industry.

The episode also delves into threat modeling and its role in software development. The participants discuss the importance of identifying security issues early in the development process and the return on investment (ROI) of threat modeling. Jim emphasizes that threat modeling should focus on identifying issues that static analysis tools cannot easily detect, such as access control vulnerabilities. 

They conclude with a discussion on the "shift left" movement in software security and its potential benefits and challenges.

FOLLOW OUR SOCIAL MEDIA:

➜Twitter: @SecTablePodcast
➜LinkedIn: The Security Table Podcast
➜YouTube: The Security Table YouTube Channel

Thanks for Listening!

Chris Romeo:

Welcome to the Security Table. This is Chris Romeo. I'm also joined by my co-hosts and good friends, Matt Coles and Izar Tarandach, and we've got a special guest today, someone who most people in AppSec will already know. Jim Manico is joining us. And Jim, in case there's a few folks out there who maybe don't know who you are, just do a quick intro.

Jim Manico:

My name's Jim Manico. I've been a software developer since the nineties, and I'm a software and application security educator. Chris, I love application security. It's my job and it's a lot of fun. That's me in a nutshell.

Chris Romeo:

And I can attest, I've seen you have a lot of fun doing application security at various events over the years. But let me set up how we got to this conversation. Okay, so the year, I think, is 2022, and Izar and I happened to be in Austin, Texas for LASCON. A certain speaker named Jim Manico is up delivering the keynote. And we had heard and seen things in the past that Jim had said about threat modeling that made us think, ah, maybe he wasn't a giant fan of it. But then a slide pops up on the screen where Jim references threat modeling and the Threat Modeling Manifesto, and he says, oh, you know, I've kind of changed my mind some on threat modeling. I see the value proposition in it. And Izar turns to me, he's literally sitting next to me at a round table, and he says, did he just say what I think he said? I looked back at Izar and I said, I think he did. I think he just said he's supporting threat modeling. For us it was like, we were thinking you were in a different kind of mindset when it came to threat modeling and didn't see the value in it. And so we said, we gotta get Jim on this show, we gotta unpack this. So we gotta get in the Wayback Machine here, for those people that liked cartoons in the seventies and eighties, or the DeLorean, or whatever. Let's travel back in time, Jim, and start out by setting the stage with this story. When you first started doing application security, how did threat modeling intermingle into it? What was your opinion of it over time?

Jim Manico:

I started my career in the nineties and started doing security services and application security type of work about 10 years later, right in the early-to-mid two thousands. And I saw a lot of consultants beginning to ramp up and do threat modeling, and the companies were very interested in it and were spending a great deal of money on it. So I wanna start by going back and defining threat modeling as it was 20 years ago. So I go ask ChatGPT, gimme a snarky version of threat modeling, and it came back, and I'm like, ChatGPT, you see me! Ah, threat modeling: the art of gathering a bunch of highly overpaid consultants in a room, armed with fancy whiteboards and colorful markers, to brainstorm all the ways the system could be compromised. It's like a brainstorming session for the paranoid, but with more jargon, less common sense. And it's extremely expensive. So real quick, we'll start with a fancy diagram of your system. Don't worry if it's not accurate; the point is to make it look really complicated. The more boxes and arrows the better. This way we can charge you more for each box we analyze. I'm gonna keep going. Identifying threats: this is where we list all the possible threats, from the highly possible to the utterly ridiculous. Alien invasion compromising your data center? Let's add that into the threat model. It's all billable hours anyway, right? Risk assessment: we're gonna rank threats based on how scary they sound, not necessarily their actual risk. Remember, the goal is to make everything urgent, so you'll keep us around longer. Mitigation strategies: this is where we recommend a bunch of expensive tools and solutions. Whether they fit doesn't matter; what's important is they come with hefty licensing fees and require extensive training, which, by the way, we offer. Review: almost done. After all the hard work, we're gonna present our findings in a thick report filled with jargon and complex diagrams, and most of it is total fluff.
You don't need to read it, but it's gonna look really impressive on your office shelf. And last, the follow-up. Of course, threat modeling is an ongoing process. That means we need to schedule regular follow-up sessions at our premium hourly rate to update the model with new, even more farfetched threats. That's the threat modeling that I've been railing against for decades, and believe me, Chris, that's a joke, but I've seen that in the real world. So when you want me to talk about threat modeling, the most important thing for me is to start with: what does that mean for you as a company? What does that mean for you as a consultant or a security team, and how do you go about it? And the reason I've been changing my tune is because the tools I see for threat modeling, the tools to analyze software and give me diagrams, the tools to track risk in this area, have gotten extraordinarily better in just the last 20 years. So when you hear me snark, that snark story is what I've seen in the real world many times, which is why I was a naysayer for many years. I'm less so now, Chris, because...

Chris Romeo:

No, I

Jim Manico:

I'm a student. It's not about religion; this is about science. As I get new information and see new data and see new processes, like any good scientist, I should change my mind, and change it quickly as I see other evidence. And I've certainly seen that.

Chris Romeo:

Yeah, so it's primarily the impact of new tools and technology that really drove this. What were you seeing in these tools and technologies that didn't exist before that caused you to make this shift?

Jim Manico:

I won't quote names of tools, to protect the innocent, right? But I've seen some tools with really extensive threat classification built into them. So I can describe the kind of application and business I have, and that'll give me a really good set of threat classifications to be concerned with for that kind of business. I used to have to pull that out of the air. Now there's really good threat classification in various tools. Number two, I'll mention a few companies I like: Blast, Octo, and Levo.ai. These are all the next generation API companies. They're not even threat modeling companies, but they're installing agents and services, and I can click a button, let 'em run for a bit, and get a dramatically accurate and impressive understanding of that microservice architecture and the exact data flows between the services. That used to take me 20-plus hours in a threat modeling session. And I have tools that make generation of diagrams a lot easier and extremely more accurate. And the other thing is just process. Threat modeling used to be Cigital, Adam Shostack, Jim DelGrosso, and a couple others, and that's about it. And now there's hundreds of thousands of professionals that participate in and do threat modeling. And the more we do it, the better we get at it as an industry. This isn't your first threat model, Chris, you've done a few. If I sat in on your first threat modeling session ever, Chris, I'd have probably thrown you out the window and fired you. And if I watched you do threat modeling today, my guess is I'd be extraordinarily impressed with your work, 'cause you've been doing this for... you're old and gray, you old man. So am I, buddy, so am I. You've learned a lot. So process has gotten better, the tools have gotten better, threat classification's gotten better.
And using all these more modern things, we can make threat modeling extraordinarily more valuable than it was even 10, 20 years ago.

Chris Romeo:

Yeah, and I'm hogging the microphone here, so Matt and Izar, jump in at will, but I'll hog the microphone as long as I can,

Izar Tarandach:

No.

Chris Romeo:

know. So, um, threat modeling made a Manifesto. I know you mentioned that on your slide. That was another thing Izar and I saw up there and were like, oh, that's cool, 'cause the three of us here are all co-authors, with a number of other folks, of the Threat Modeling Manifesto. So what was the impact of the Threat Modeling Manifesto when you took a look at it, and how did it get into your thinking about threat modeling?

Jim Manico:

It was really good to see so many professionals that were competitors in the world of threat modeling coming together to talk about why this is important, right? They define threat modeling for the first time in a way that I thought was reasonable, right? The four key questions: What are we working on? What can go wrong? What are we gonna do about it? Did we do a good enough job? They talk about why to threat model. They talk about who should threat model. They talk about the values of a good threat modeling team and consultancy, and what the ethics are of doing threat modeling. They talk about the core principles of threat modeling. For the first time, threat modeling in my world went from a bunch of overpriced consultants billing me $350 an hour to do fluff to... Matt's like, where do you get that rate from? Talk to me.

Matt Coles:

I, I don't remember getting that much for

Jim Manico:

They took threat modeling from a real fluffy hourly-rate thing to... this is the first time I think we're really moving. We see competitors in the industry agreeing on what a more ethical, cost-effective, and effective set of processes is gonna be. They also talk about anti-patterns: get rid of the hero threat modeler, focus on practical solutions, be careful about over-focusing. I mean, the things that I was concerned about with threat modeling, they addressed without any bullshit, pardon my language, without any BS, in the manifesto itself. And I thought that was really impressive. They addressed my concerns about the waste that threat modeling could be. That's why I was impressed with it.

Matt Coles:

So, Jim, can I just jump in here, Chris? What do you think about bringing threat modeling to developers, making it part of everyone's everyday life, as opposed to something you hire a consultant for to come in and, as Izar likes to say, parachute in and solve world hunger, obviously for a lot of money?

Jim Manico:

All developers do not need to be a part of threat modeling. I want my developers writing code and building solutions. I believe threat modeling is most effective when I'm about to build a new project; I want the architect and the lead there, not every developer, right? Or when I'm about to make a very large architecture change, I want some of the leads, I don't want the whole team, right? Traditional threat modeling parades every single developer in and asks a bunch of questions. I think that's a huge waste of developer time. So I want my developers writing code, and I'll have my architects or the lead do threat modeling with the security folks, typically. This is my take on it.

Izar Tarandach:

Time for me to jump in

Jim Manico:

Got what you got.

Matt Coles:

you the hand grenade.

Jim Manico:

Developers threat modeling? That's great. That's

Izar Tarandach:

So, Jim, look.

Jim Manico:

I'll write code while you guys are all sitting in threat modeling meetings for a day. Go.

Izar Tarandach:

So listen, man. Long-time fan, first-time arguer. So basically what I'm hearing from you is that, in a way, you are disagreeing with yourself. There's that famous tweet of yours, that I use, that Avi Douglen uses, and that a lot of other people I know use, to justify a lot of effort by developers, where you point out that nowadays every developer is basically responsible for whatever security is in the company, because their code is in the front line, right? And I agree with that wholeheartedly. The thing where I go head to head with you is that, I think, once you put forward that every developer is writing stuff that's on the front line, to give an exception to certain developers, or to elevate some people because they are architects, doesn't really agree with what we see in the field, where the architect gives a direction, and the developers, when they are developing, are forced to make a number of architectural decisions that might, and do, influence the threat model overall.

Jim Manico:

That's a good point. I mean, what I'm trying to say is, it depends on a lot of factors. In some organizations, they're gonna pick the architect to set the architecture, 'cause part of threat modeling is establishing good security architecture patterns that developers follow. So if your company's mature, has architects, and sets a security architecture document that's valuable, then I would deliver that to the developers. I believe in developer training. I want all of my developers to go through security training from one of our companies; we all do a good job. So I want all developers to be educated about security basics. But I don't need to spend four hours with every developer threat modeling the authorization code flow and PKCE for OAuth 2. I do need to do that with the identity architect and some of the devs working on that. So I just don't want the entire team to do threat modeling; I want the entire team to be educated on security and go through regular developer training of some kind. There's many options. But again, I don't want to take my user-interface React developer and have them spend four hours on OAuth 2 architecture threat modeling. That's not their world. I want them to understand how to do React security: dangerouslySetInnerHTML, use a sanitizer, I gotta use my types and prop variables properly, I gotta make sure I validate URLs that enter an active context. That's not threat modeling, that's technical education. So I like role-based security, where the different members of my team are gonna do threat modeling only if it's appropriate for the role. Otherwise I'm being extremely wasteful. That React guy who's busting trying to get me a good UI, sitting in a long threat modeling meeting, is a waste of time, and it's bad to waste engineering

Izar Tarandach:

Right,

Jim Manico:

time.

Izar Tarandach:

Right.
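[One item on the React checklist Jim rattles off above is validating URLs before they reach a live context such as an anchor's href. Here is a minimal TypeScript sketch of that idea; the scheme allow-list and the function name are our illustration, not code from the episode or from Jim's training materials.]

```typescript
// Allow-list of URL schemes considered safe to bind to an href.
// This list is an assumption for illustration; tighten it to your needs.
const ALLOWED_SCHEMES = ["http:", "https:", "mailto:"];

function isSafeHref(raw: string): boolean {
  try {
    // Resolve relative URLs against a dummy origin so "/path" is allowed;
    // absolute URLs keep their own scheme (e.g. "javascript:").
    const url = new URL(raw, "https://example.invalid/");
    return ALLOWED_SCHEMES.includes(url.protocol);
  } catch {
    return false; // unparseable input: reject by default
  }
}
```

[A check like this blocks `javascript:` and `data:` payloads that a sanitizer focused on HTML markup can miss.]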

Jim Manico:

So yes, I believe in threat modeling. Yes, developers should do threat modeling. But my big problem is how wasteful it's been. I want to bring that in so I'm doing it cost-effectively, especially as the economy changes, especially as things begin to dip. Efficiency in security is gonna be the big buzzword for the next two or three years.

Izar Tarandach:

And I think that basically, when you put it like that, we agree. The thing is a

Jim Manico:

I like that. Wait, what did you say, Izar? Can you say that louder? I didn't hear you. What was that?

Izar Tarandach:

It's not the first time that I agree with you; it's just that you never got to hear it. But my point here, my spiel, is threat model every story. I want that developer threat modeling at the scope that they work, exactly as you said, right? So you have a baseline that covers the whole architecture, and I want the mindset of the developer, when they start doing their story, to be at a threat modeling point, asking: what choices am I making here that influence the security of the architecture? And is there something that I have to add to the overall threat model?

Jim Manico:

I agree.

Chris Romeo:

I wanna take this question a little bit further back, Jim, 'cause I think this might inform Izar's kind of threat-model-every-user-story idea, and also your point about limiting the waste that's happening here. What do you see, what do you recommend, as far as design

Izar Tarandach:

Hmm Hmm.

Chris Romeo:

for developers? Like, what should they be doing, in your mind? And I'm not talking about just the architects that are putting together the big view of the whole thing. Let's just say a senior software engineer whose normal job is to write feature code. What is the role of design in their world, and how should they do design?

Jim Manico:

If I'm doing something that I've done many times before, I'm building a UI with database interaction, or I'm doing more of the common features of the application, I don't need to threat model, because it's something I've done over and over and over again. But if I'm about to do something that I don't have guidance on, or something a lot more complicated... Suppose I want to do file upload into a queue with really dangerous file types, and I haven't built that out in my system before. File upload is a majorly difficult thing to write securely. It is extremely complicated, with pieces like file name validation, magic byte validation, content introspection, the persistence strategy. That developer should stop and do threat modeling before they build out that feature, because it's new, we don't have a reference architecture for it, and it's mammothly complicated, interacting with the database, the file system, and more. Now we need to stop and threat model before we build that out. And I usually see this, by the way, just in normal software development: when a developer is faced with a really complicated feature, any good developer is gonna talk it over with their peers or technical boss and the customer to get more insight into what it is. Anytime we have a complicated feature that we haven't built out before, I'm often given requirements as a developer that make me go back and ask a dozen questions. And even though this might not be threat modeling, now that we're adding security into our process, those complicated questions absolutely should involve an architect, a security person, and involve security, if it's new, if I don't have a reference architecture or an example of that done securely already.
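[For listeners wondering what the "magic byte validation" piece of file upload looks like in practice, here is a minimal TypeScript sketch: checking that a file's leading bytes match its claimed type instead of trusting the extension. The signature table and function name are our illustration, not code from the episode.]

```typescript
// A few well-known file signatures ("magic bytes"); extend as needed.
const MAGIC: Record<string, number[]> = {
  png: [0x89, 0x50, 0x4e, 0x47], // \x89PNG
  pdf: [0x25, 0x50, 0x44, 0x46], // %PDF
  jpg: [0xff, 0xd8, 0xff],
};

function matchesMagic(bytes: Uint8Array, ext: string): boolean {
  const sig = MAGIC[ext.toLowerCase()];
  if (!sig) return false; // unknown type: reject by default
  // Every signature byte must match the file's leading bytes.
  return sig.every((b, i) => bytes[i] === b);
}
```

[In a real upload pipeline this would be one layer among those Jim lists: file name validation, content introspection, and a safe persistence strategy would sit alongside it.]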

Matt Coles:

So what you're saying is for exceptional conditions,

Jim Manico:

I'm with you. You're doing something new and complicated and risky? Stop and design it out with your team before you start writing code. Absolutely.

Chris Romeo:

Yeah. It's a world where you've got a library of patterns and you've got threat modeling. And when you work on a threat model to the point where it can grow into a pattern, basically, in what you're describing here, you could grow a file upload pattern, where you're like, okay, here's all the things that we considered in the threat model, which translated into requirements. So if you're gonna do another file upload with dangerous files, here are the things that you have to do. Now, you'd probably want people in that scenario to refresh that, especially if it's six or twelve months later when they're using it, because things change, right? A pattern is not good forever. Some patterns, like input validation, I'm never gonna believe that's not good. But some patterns, like a file upload, things are gonna change. New file types are gonna come out. Someone's gonna, I don't know, attach Bitcoin to it somehow. There's just gonna be new derivatives that are gonna

Jim Manico:

We're singing the same song. If I'm doing file upload development for my team for the first time: reference architecture, threat model that out. But if I'm on the other end of that maturity scale, where I have a well-configurable, reusable service for file upload that all my developers can leverage, and it's gone through massive assessment and multiple threat modeling rounds, I'm just gonna whip it out and use it. I'm not gonna go back and threat model. So, Izar, it all depends on the context of what's available and what's been done in the past in that team.

Izar Tarandach:

And it's interesting that that meets the bit where you operate on the secure coding part. Once you build that library of reference architectures, then you already have reference implementations, then you already have secure libraries, things that you already know have been vetted and tested and whatnot. And then you get your developers building with the right Lego pieces.

Jim Manico:

Exactly. And then we've already done our work in threat modeling. It's the change, the big changes, or the newness, or the "I'm gonna mess with the architecture," something that we haven't done before, that cries out for threat modeling. Or even better, at the beginning of a project, the earlier the better, before we're writing code, when we have an idea and we're budgeting. I am not opposed to having a large number of developers in those initial pre-product meetings, and then they can go off and do their work and have smaller threat modeling sessions as new things come up. But to your point, Izar, especially at product conception, having one day of talking about security risks and security duties and what we're about to build, that's a really good idea. Doing it every week on a cycle, every Friday is threat modeling day? Big no. Big no. But at the beginning of a project, I'm more inclined to have more developers participate. To your point, Izar. Do you feel the love in the room right now? Do you feel it? I feel it.

Izar Tarandach:

I'm still hearing some vibes of, okay, we've got these solutions, and we are very logical and experienced people, and if somebody were to turn off their podcast listening device right now, they'd think, oh my God, these guys just solved the whole problem, right? So Jim, why is it that we keep having CVEs? Why is it that we keep having problems? Where are we going wrong?

Jim Manico:

Because security's hard. Sometimes it's not threat modeling.

Izar Tarandach:

Nothing.

Jim Manico:

It's a SQL injection, and you didn't scan your code. Oh, believe me, there's a lot of teams that are still, "What's SAST?" I said this in a manager training once. I went to a bunch of senior managers of a real big company and said, if you're managing software projects and you're not using a static analysis tool, that's negligence. And next thing I know I'm in front of HR, explaining why I'm condemning all their managers. But I stand by that statement in 2023. If you're doing a software project, you're writing code, and you're not doing static analysis, that is bleeping negligence at this point. Like, what are you doing? You're not doing security. And I'm not trying to sell a certain tool. I'm just saying, things like keeping your third-party libraries up to date and scanning your code for security, this is the cost of doing business. And if you're not doing it, you're way behind the eight ball. That's why we still find vulnerabilities today. That's why we still have CVEs: because a lot of software teams are still not doing the basics of security analysis. The basics.

Izar Tarandach:

Yeah, it's, it's basically.

Matt Coles:

I've gotta ask. I have to ask, because this came up in a previous episode of ours: how do you feel about DAST tools?

Chris Romeo:

Yes, yes, yes, yes.

Jim Manico:

In the age of microservices, DAST is a dead technology. At OWASP, we just lost ZAP; ZAP left the foundation. I love Simon Bennetts, he's a great volunteer. But I think watching DAST walk away from OWASP is a sign of the times, because in a DevOps lifecycle... I like IAST, I like SAST. Hey, Jeff Williams: I like IAST, I said it, Jeff, you can quote me on that. I like SAST, I like software composition analysis. But DAST does not work well for a DevOps cycle, and it especially doesn't work well for APIs, and a lot of vendors are gonna beat me up for this. But yeah, I've given up on DAST. I don't use DAST.

Matt Coles:

Well, just to level set, right: ZAP went over to a new initiative called the Software Security Project in the Linux Foundation. Yep.

Jim Manico:

It's a great tool, and for an old-school web application, absolutely, I would use it. I don't see a lot of old-school web apps anymore. I see microservice meshes and React and all kinds of different things where DAST is just not nearly as effective as other tools. And it's slow, it's super slow. I want DevOps lifecycles. I want to be able to have a developer issue a PR and run a whole bunch of security tooling in like three minutes so they can merge. I don't like DAST running for hours with none of that incremental scanning. But just in general, and I'm not talking about me, a large number of my customers who've made their own decisions about tooling have given up on DAST for a lot of reasons. There you go, Chris. I like it when we agree.

Chris Romeo:

Well, I said it. I said it a couple of months ago: DAST

Jim Manico:

is dead.

Chris Romeo:

And I stepped into it, because some of the DAST vendors came after me and wanted to argue with me

Jim Manico:

Good.

Chris Romeo:

about the viability of the technology. And I'm like, I don't even have to argue with you. I have evidence in my own career of trying to use these tools, like you said, against modern applications. They just don't provide any value. Like, I don't need to know what the DNS records were for the thing you scanned; that's the top finding coming out of these things. So yeah, it's definitely a dead technology. But I'm with you. Jeff has brought me over to the IAST thinking as well. I've kind of come on to that as a thing that I think adds a lot of value to the world. And so, yeah, it's good

Jim Manico:

It's last on my list. I'm gonna ramp up SAST first. I'm gonna ramp up SCA second. Maybe I'll do IAST third, if there's a good use case for it. Hey Chris, I'll go one further. Can I talk some more smack? You ready?

Chris Romeo:

Let's

Jim Manico:

Software composition analysis tools. You ready? They're all going away. Snyk is dead as a company, and all that software composition analysis is, is a feature of SAST. That's where it's gonna go. The whole software composition analysis industry, the whole SBOM industry, the whole tooling industry in that world is going away, and it's just gonna be a feature of static analysis. So we're gonna end up with no SCA, it's just a SAST feature. No DAST, it's too slow. SAST is gonna rule security assessment, and it's gonna get a lot better. We see vendors like ShiftLeft and others making a lot of innovation there, and we're gonna be scanning code to do the majority of our security assessment, then do some container scanning and other secondary things. But SAST is gonna rule the world when it comes to application security.

Izar Tarandach:

So jim,

Matt Coles:

Static code analysis is a slow operation today.

Jim Manico:

No

Izar Tarandach:

No, no, no, no, no.

Jim Manico:

Wrong, Matthew. Look at the next generation. You gotta look at the old tools, like Checkmarx, I'm a big fan of these folks: you do the initial scan, it's extremely slow, but then you do the incremental scan, it's lightning fast. You take a tool like Semgrep from r2c: it was built as a semantic grep engine, and I can scan millions of lines of code in under two minutes. So you gotta look at the modern tooling in the SAST world and pick the right vendor. You do have DevOps speed: CodeQL, built into GitHub, is lightning fast. The old tools in their initial scan mode are slow, but new tools, or incremental mode, do work at DevOps speed. And that, Matthew, is why I love SAST so much, 'cause of that speed and the increased fidelity I've seen over the years.

Izar Tarandach:

So Jim, two things. First of all, big parenthesis here: I will not say the name of the company, because I work for it. But the way you're talking, I have some tools to show you. So catch me, catch me outside. But, uh, no, it's not.

Jim Manico:

No, they're bleeding cash.

Izar Tarandach:

I'm

Jim Manico:

in their practices for doing sales. They're building a tool that I can rewrite in two days, and they're an $8 billion company hanging off an easy-to-build tool. SAST is hard. SCA is not. They're all gonna be features of SAST. I predict it. Watch.

Izar Tarandach:

just gonna tell you that I have some integrations that you're gonna love. When it comes to data, I am like a dog. Okay.

Jim Manico:

Convince me.

Izar Tarandach:

The next thing. So you've been talking about tools, you've been talking about SAST, how important it is, and all that stuff. And of course, nowadays one cannot go for too long without saying anything about AI and all that good stuff. But now developers are working with all kinds of looking-over-their-shoulder coding assistants and whatnot, which are themselves AI-trained. And I'm starting to see a cycle here where AI writes code and AI checks code, and there is a developer somewhere running around asking, what did I write? What did the AI write? And who's checking this? And how much can I trust this? How much do you trust code that comes from an AI system these days?

Jim Manico:

I need to go back. I'm not done with my Snyk rant. I wanna say one more thing about Snyk, and it's gonna be nice. If I had to buy a tool today, that's where I would go. Their enterprise glue is the best of any tool out there. The people that work for the company are exceptional, great, brilliant professionals, and I would buy them today. I'm just talking about the future, and that's why Snyk put out a SAST engine that's really innovative. So I have a lot of respect for the company, a great deal. I would buy them in a heartbeat; I recommend 'em all the time. I'm just predicting that out in the future it's gonna be a hard industry to stay in. It's gonna merge into SAST. That's all I'm trying to say. Alright, I'm done with that. I'm done.

Izar Tarandach:

Now go to the AI part.

Jim Manico:

AI? All of a sudden, tools like Black Duck are more important, right? Black Duck is an old-school licensing engine, an old-school software composition analysis tool from back in the day. So the tools that can look at segments of code and check for licensing are suddenly really important with the advent of AI. Because as a developer, I will use AI all day long, for a million purposes. I have paid ChatGPT with multiple unique plugins, some that I built myself, that I use for my work, to do research and similar. And to be a developer and not use AI... excuse me, I got the corona from the conference, I'm sorry. But to be a developer and not leverage AI, you're gonna be way behind the eight ball real fast. It's tripled to quadrupled my productivity, and I use it for a lot of reasons. Hey, gimme the initial stubbed-out code for this need, done. Hey, here's some code I'm working on, I hope my boss doesn't mind this, please analyze it, what can I do better? And the answers I get... I'm not agreeing with everything AI says, I'm not just blindly using it, but it's like a copilot, and I use my own discernment and review before I push this stuff live. To use AI as a developer is necessary, and if you don't, your productivity is gonna be one third to one fourth of your peers', and you're out. So not only is AI important, it's now mandatory for development if you want to be efficient. We have to be discerning, and you need tools in place to make sure you're not busting out licensing issues, that you're not breaking licensing by stealing code from other teams. And there are tools to assist with that, and they should be in place. How's that, Izar?

Izar Tarandach:

I'm coming here from a point where, like the paranoid in the room, I'm thinking that something here sounds fishy. And you, as the code security expert, explain to me how this loop works: AI talking to AI, AI checking AI, and the fact that those AIs are trained on code that is not necessarily known to be safe and secure. So where do I get some assurance here that what's coming out is good enough? Especially if developers start leaning so much on the AI that, as with anything else in life that gets automated and assisted, their own vigilance goes down, because they start trusting the thing more.

Jim Manico:

First and foremost, I take a big chunk of code, throw it into AI, and say: do a static security check of this code. And I get results comparable to professional tools. So AI has security awareness, if you ask the right questions. If you say, stub me out some code, it'll stub you out some code. But if you say, stub me out some code securely, it will do that. And you need to use discernment. First of all, I'm reviewing any code that I generate with AI. And second, before I push it live, Izar, I'm doing static analysis with multiple tools. By the way, I'm doing software composition analysis with great tools like Snyk. This message is brought to you by Snyk marketing. I also have some custom rules that I'm running, I'm doing container scanning, and I'm using other cloud security services. So I'm not just generating code with AI and pushing it live, that's nonsense. I have multiple layers of licensing and security checking along the way. But if I have that in place, the whole DevOps security lifecycle, Izar, I can whip out some code fast, like Greased Lightning, right?
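Jim's layered approach (human review, then SAST, SCA, and container scanning before anything ships) can be sketched as a simple release gate. The layer names and finding counts below are hypothetical stand-ins, not output from any real tool:

```python
def release_gate(checks):
    """Return (ok, blockers): code ships only if every layer passes.

    `checks` maps a layer name to the number of unresolved findings
    that layer reported for this change set.
    """
    blockers = [name for name, findings in checks.items() if findings > 0]
    return (len(blockers) == 0, blockers)

# Hypothetical results for one AI-generated change set.
checks = {
    "human_review": 0,
    "sast": 2,            # e.g. two injection findings still open
    "sca": 0,
    "container_scan": 0,
}
ok, blockers = release_gate(checks)
```

Here the open SAST findings block the release, which is the point of Jim's "multiple layers" argument: AI-generated code gets no special pass through the pipeline.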

Chris Romeo:

Here's the other side of the coin, I guess, when I think about AI. I'm in the same boat as you: I think about it as enhancing. It's not replacing, I don't care what anybody says. Maybe 10 or 20 years from now I'll just be able to tell it, write me a full web application that does this, this, and that, and I can deploy it in five seconds or whatever. But right now it's an enhancement; it makes you a 10% to 50% better developer. But then I think about training data. Jim, you've taught secure coding for a long time. The Stack Overflow problem has always been fun, because people would go to Stack Overflow, copy a snippet of code, and paste it in. And scientists went and studied Stack Overflow code and figured out that there was a certain number of vulnerabilities per example, or whatever; Stack Overflow code was found to be not very secure. Now, if somebody's training an AI based on Stack Overflow, and you could argue there is probably not a better source of total lines of code on Earth than what exists in Stack Overflow's database, but if you're training the AI to code insecurely with Stack Overflow code, how does that not get reflected right back into it? And I get your discernment, but not everybody's got your discernment. Not everybody's got your knowledge and experience to be able to wrestle with and understand that something bad is coming out of the AI. It's...

Jim Manico:

not that big of a deal. I get it out of the AI, I get it working, and then I do security scanning and fix my vulnerabilities if they're still there. But if you're using AI without security review, you're screwed, in a bad way. So the answer is: do proper DevOps-style security review with proper tooling. Review the code you get out of AI before you push it live, and make sure when you're asking AI questions, you ask for security. Stub me out this and that with really good security in mind, and it will use that part of the model, if it's even in the model. To answer the question seriously, just try it. Say, give me a basic Ruby on Rails app for a web app that does chat. Then ask the same question but say, do it with extremely rigorous security, and you'll get different answers. So it's about asking your AI engine the right questions. The marketplace, Chris, will eventually become the different models. So my prediction is I'll be able to buy a model of really good application security fixes across all different languages and ask that engine much more accurate questions, eventually. Okay, here's an even bigger prediction: static analysis is going away. I can just use AI to do it. So static analysis, ten years out, will be gone. We'll just ask AI. AI will be looking over my shoulder saying, Jim, you did that wrong, or, fix it for me, or whatnot. I mean, AI already is really good static analysis as-is, if you ask the right questions.

Izar Tarandach:

So, unbelievably, we do have people watching us on LinkedIn, and that's only because of Jim. And if any of you watching would like to ask any questions, please feel free. Just put them in there, and we'll try to sneak them in somehow.

Matt Coles:

While we're waiting for that, I do have a question for you. I'm gonna switch gears a little bit here if I can. As you know, there's been a flurry of activity coming out of the US White House, and CISA and NIST and others. And there's a big push recently around memory-safe languages, switching to memory safety in languages, as part of secure by design, secure by default. What are your thoughts on that? Is it worth it? Should we move there? Is that the place to go?

Jim Manico:

I've been a Java programmer since the nineties, so for the most part I only use memory-safe languages, right? They prevent common vulnerabilities. They reduce exploitation. And what else? It's actually a simpler development process: I'm not doing memory management, so it's way more cost-efficient, I don't have to do memory fixes. They're usually part of modern software ecosystems, and they're more reliable overall. So this is a great thing. But when I hear this being said, I'm like, yeah, I made that call about 25 years ago, so I'm not sure why they're talking about it now. They're probably thinking more of the world of thick-client development, C/C++-type development.

Matt Coles:

IoT. IoT device development,

Jim Manico:

I'm sorry.

Matt Coles:

IoT device development, embedded development,

Jim Manico:

Embedded, yeah. And this is why, although it's not my world, the people that teach for me, for whom it is their world, they tell me to push Rust. They say, get away from C/C++, move that kind of development into the Rust world, and a lot of the memory problems you see in C and C++ largely go away. I don't know if that's true, but I think it's extremely reasonable to push towards Rust, to use memory-safe languages and stop doing manual memory management. And I made that call about 25 years ago. One more thing...

Chris Romeo:

All right, we got somebody.

Jim Manico:

back to

Chris Romeo:

So

Jim Manico:

AI, real...

Chris Romeo:

go

Jim Manico:

real quick. You know why I like AI, Chris? Because it lets me build gangster rap about my friends. That's what I use AI for, mostly. Verse one, you ready, Chris? Real quick. Chris Romeo on the mic, security pro, dropping knowledge everywhere that he go. From the boardroom to the streets, he's the one to beat. When it comes to AppSec, he brings the heat. Now: Chris Romeo, Romeo, security's Romeo. He's guarding the gates, never moving slow. From the east to the west, he's the best, no contest. Chris Romeo, Romeo, he's Romeo. One more: with the hacker's mind and the teacher's soul, he's patching up systems and making them whole. From SQL injection to buffer overflow, Chris is the name that the hackers know. So that's my favorite use of AI.

Chris Romeo:

There we go. I'm honored, I'm honored. So, Tony Quadros had a good question here that I wanna get your take on, 'cause we talked a lot about SAST and AI and how these things are gonna come together. So Tony says: what about auto-remediation for SAST findings? Is that legit? What are your thoughts on that?

Jim Manico:

Yeah, but you need the right model. I had this conversation at DEF CON with a few vendors. There are a couple of vendors out there who've been doing scanning and security testing at giant scale. Think Edgescan, think the WhiteHats of the world, those who do security as a service, and they have a decade of data. Edgescan's my favorite right now. Basically, think about the millions of vulnerabilities they discovered: developers tried to fix them, and then they went and reassessed that the fix was proper. So if I can pull a ten-year model of all the software security fixes that worked, I'm gonna be able to have a really good remediation engine. But I have to be very careful how I train that engine. I have to look at each vulnerability and each fix and be really clear that it's a proper fix to add to the model. The data is out there. Human beings, I think, will need to sort through the data to fill a model. But once that's done, I'm predicting a year or two out, remediation engines are gonna light up the industry. We'll have SAST and SCA, which are gonna merge, I believe, and remediation-engine suggestions are gonna be a really big thing in about a year or two. So I'm with you, Tony. I think AI remediation, with proper data sets and manual curation of those data sets, will change the industry when it comes to auto-remediation. Go build it. Someone's gotta build it.
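The curation step Jim describes, keeping only fixes that a human reassessment confirmed actually closed the vulnerability, might look roughly like this; the record fields here are invented for illustration:

```python
def curate_training_fixes(records):
    """Keep only fixes confirmed by reassessment.

    Each record is a dict with hypothetical fields: 'vuln_id',
    'fix_diff', and 'reassessment', where reassessment is either
    'fixed' (fix verified) or 'still_vulnerable' (fix failed).
    Only verified fixes should reach a remediation model.
    """
    return [r for r in records if r["reassessment"] == "fixed"]

# Hypothetical history from a security-as-a-service archive.
history = [
    {"vuln_id": 1, "fix_diff": "...", "reassessment": "fixed"},
    {"vuln_id": 2, "fix_diff": "...", "reassessment": "still_vulnerable"},
    {"vuln_id": 3, "fix_diff": "...", "reassessment": "fixed"},
]
training_set = curate_training_fixes(history)
```

The filter is trivial, but it captures Jim's caution: a remediation model trained on unverified "fixes" would happily learn patterns that never actually closed the hole.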

Chris Romeo:

I just saw a good question pop in. Let me read it. This might be... I'll let Jim take it first, but this might be a Matt and Izar question. What considerations should we have when performing threat modeling for applications that use artificial intelligence?

Jim Manico:

That's a really good question. I'm actually just building my AI security class now, so gimme that question one more time. I'm gonna light this up.

Chris Romeo:

So what considerations should we have when performing threat modeling for applications that use artificial intelligence?

Jim Manico:

That's a really good question. I'm gonna get help on this question, because I want to be really detailed in my answer, and so...

Chris Romeo:

So wait, so hold on. So you're, so Skynet is basically providing you the answer to this

Jim Manico:

no,

Chris Romeo:

about

Jim Manico:

No, it's just ChatGPT-4's data model with the web crawling plugin and a few other technical plugins that I wrote to gimme good answers. Data poisoning. Model inversion. Adversarial attacks. Model stealing. Model explainability. Data privacy. Infrastructure security. Supply chain threats, since all these different tools are relatively new. Bias and fairness in how you're training your model. Robustness and generalization. Feedback loops, where AI models influence the data they later consume: consider potentially dangerous feedback loops. Resource exhaustion, because AI takes a lot of horsepower. Reproducibility. Model drift, as it learns over time in the wrong direction. Simple access control. Regulatory and compliance, that's coming up. So there you go. When it comes to threat modeling AI, there's a lot of good information already out on this topic, specific to the different AI engines. I'll pick one: model stealing. A threat actor might be able to replicate a proprietary model by using the public API, asking a lot of questions, and extracting parts of the model for their own use. Gotta be careful about that. Or model inversion: attackers might attempt to reconstruct the training data by querying the AI model, revealing sensitive data they shouldn't be revealing. Data poisoning: we saw this in some of Microsoft's early AI, where a bunch of racists began to train the AI with a lot of really horrible ideas, and the AI engine itself became extraordinarily racist, so they shut it down. And I've seen other AI engines where someone plugged in, my life is difficult, here are the challenges I'm facing, and the AI engine said, you should kill yourself. That's dangerous to human health, and there's liability for that company. So there's a lot of really rich information out there already on how to threat model AI-based software.
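To make the model stealing example concrete, one common mitigation is to budget how many queries any single client can extract from a public model API. Here is a toy sketch of the idea; real defenses also analyze query patterns and distribution shifts, not just raw counts:

```python
from collections import Counter

class QueryBudget:
    """Toy defense against model stealing: cap how many predictions
    any one client can pull from a public model API."""

    def __init__(self, limit):
        self.limit = limit
        self.counts = Counter()  # queries served per client

    def allow(self, client_id):
        # Refuse once the client has exhausted its budget.
        if self.counts[client_id] >= self.limit:
            return False
        self.counts[client_id] += 1
        return True

budget = QueryBudget(limit=3)
# A scraper hammering the API gets cut off after the cap.
results = [budget.allow("scraper") for _ in range(5)]
```

A per-client cap raises the cost of replicating the model through the API, which is exactly the extraction path Jim warns about.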

Chris Romeo:

Yeah. And before Izar and Matt give us their thoughts on this, I just want to put a plug out there for the OWASP Top 10 for LLM project. Steve Wilson and the rest of the team: that is a case study in how to build an OWASP project at top speed with 500 volunteers. There's a conference talk to be found in that. But that's something I look to as a source of threats when I'm trying to understand the issues in dealing with LLMs. There's a good example there. Matt and Izar, you got anything you want to add on the threat modeling AI side?

Izar Tarandach:

So first of all, yeah, thanks.

Matt Coles:

I was just gonna say briefly, actually you've covered everything I was gonna cover. The OWASP project is definitely a go-to reference. I put the links in the chat, folks; if you're not familiar, you can go ahead and take a look. You know, it's the same activity, right? Threat modeling is threat modeling; you're just looking at a different set of threats. So continue with your work. Sorry, Izar, go ahead.

Izar Tarandach:

So I just wanna second everything that Chris said about the OWASP Top 10 for LLMs. But the one thing where I think I step away from what a lot of people are saying about threat modeling LLMs and AI is that I'm not focusing so much on the system that's running the LLM and the AI. I'm more worried about how that thing is interfacing with a lot of other things. Those models are becoming interfaces, and in some cases they are getting command authority over a number of other systems. We are taking these things that are not completely well understood, that are not completely predictable, and we are giving them privileges and powers beyond what it might be advisable for them to have right now. So I'm putting my focus more on how those things connect with the real world, and on whatever comes down the pipeline from the results they generate. Well, actually, in this specific case, prompt injection would be a very important thing to look at. But I'm not as worried about all the poisoning of models or stealing of models; I'm not focusing on the model itself, basically because I can't understand the math, it's way over my head. The only math that I try to understand is calculus, and even that's hard sometimes. So I'm just trying to say these are, again, parts of a big continuum, and I'm putting my eyes on what comes down the pipeline, not on the model.

Chris Romeo:

So almost the trust boundary. You're thinking about the trust boundary around the AI artifacts,

Jim Manico:

LLM, I

Chris Romeo:

are coming into it. So yeah, that's a good thing to consider. So we're almost out of time here. We've cleared all the questions, I...

Matt Coles:

there was

Jim Manico:

Quick note,

Matt Coles:

uh, around

Jim Manico:

Just a really quick note. The OWASP Top 10 for LLM, I think it's good, but it's really basic; there's not a lot of detail there. I wanna recommend a resource that Gary McGraw has put out. He got a team of PhDs together; it's called the Berryville Institute of Machine Learning. This is PhD-backed research, PhD-level articles, and some of 'em are hard to read. So I think the OWASP Top 10 for LLM is a good place to start, but like any top 10, you read it once and let it go. For deeper research, you'll find the Berryville Institute of Machine Learning and a couple of other think tanks that are diving deep. The OWASP Top 10 for LLM is surface; it's a good way to start. This is berryvilleiml.com; it's run by Gary McGraw, who is a PhD, with a team of PhDs. I like that. And I needed the OWASP Top 10 to get me started, and now I'm reading all of his articles and getting a way, way deeper perspective to help me be a better professor, you know? If I'm gonna be in front of students, I need more than the top 10 to be legit.

Izar Tarandach:

Right. But Jim, one call-out: the amazing work that Gary and

Jim Manico:

mind boggling.

Izar Tarandach:

team are doing goes deeper into the model side, into building models and protecting models. The OWASP one, I think, focuses a bit more on how you get to use this thing securely. So while I agree with you that the difference in depth is amazing, I think that they work side by side, and not...

Jim Manico:

No, I don't think so, not side by side. I think OWASP Top 10 first, 'cause it's very little detail, and then put it aside and focus on the more PhD-level papers. 'Cause the OWASP Top 10 for LLM is really basic, and I need details to be a good professor. So not side by side; one, then the other, is my take.

Izar Tarandach:

Oh, sorry, as a professor. Okay, I missed that qualifier. Okay, cool.

Matt Coles:

So there were... actually, there are now two questions on the list for us. I think the first one is probably a Jim-targeted question, and it's a two-parter, from Max. He was asking about, part A, showing the ROI of threat modeling, and part B, correlating results of threat modeling with, say, SAST and other activities throughout the lifecycle. So maybe for the first part you could tackle the ROI of threat modeling. Like, how do you demonstrate that? And I know we've got just a few minutes

Jim Manico:

left here... I mean, ROI of threat modeling. I don't do threat modeling consulting, so that's not even an interesting question to me, but I know that for those of you who do threat modeling as part of your job, it's a lot more important. So again, I'm getting assistance here: define the costs, including ongoing costs. Quantify the benefits: does it actually prevent security incidents, reduce remediation cost, improve security posture? And how do you even study that? I bet the cost of studying whether your threat modeling was useful is like the cost of the threat modeling itself. I don't have a good answer to that. But Matthew, Izar, and Chris, I bet you have a better answer than I'd have about proving the ROI of threat modeling. Any thoughts from you three?

Matt Coles:

Well in the

Izar Tarandach:

We should have an episode on that.

Matt Coles:

What was that, sir?

Izar Tarandach:

We should have an episode just

Matt Coles:

We should. I'll just call out that in the Manifesto we do call out, of course, the fundamental ROI of threat modeling: that you get meaningful and valuable results out of the activity, right? So we wanna focus on the results that we get for the level of effort put in. And as we talked about at the very beginning of this episode, you highlighted, rightly so, that you don't want your entire development staff part of that activity; that doesn't maximize value. What maximizes value is focusing on the hard things, the things that are unique to the system or things you're innovating on. And getting everyone together at the beginning of a project, so everyone's on the same page as to what they're gonna do moving forward, and then focusing on the differences beyond that. That's where you maximize your value outcome from doing threat modeling, without necessarily directly measuring that value.

Jim Manico:

And Matthew, really well said. I'm gonna steal that; I'm really impressed. ROI, though, is the total benefits minus the total costs, divided by the total costs. And the problem is that measuring the ROI of a threat modeling session is itself mammothly expensive. So other than what Matthew said, I don't have a good answer.
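Jim's formula is easy to pin down in code; the dollar figures here are made up purely for illustration:

```python
def roi(total_benefits, total_costs):
    """ROI as Jim states it: (benefits - costs) / costs."""
    return (total_benefits - total_costs) / total_costs

# Hypothetical numbers: a threat modeling program that cost 20k
# and prevented an estimated 50k of later rework.
example = roi(50_000, 20_000)  # 1.5, i.e. a 150% return
```

The formula is the trivial part; as Jim notes, the hard and expensive part is producing a defensible number for `total_benefits` in the first place.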

Chris Romeo:

Yeah, I mean, I think you gotta look at the number of issues that come out of it. That's something you can metric, and I've seen that be successful before. Let's measure the number of things that were detected in threat modeling sessions, because then it gives us some type of... we can all agree, it doesn't matter what form you use, and I don't care whether IBM came up with it or not: it's gonna cost more to fix something in production

Jim Manico:

I agree.

Chris Romeo:

than it does early in the process, right? And so there is a return on investment in finding issues before they get to production. So I can use that as a soft ROI to say: okay, we found five issues during this threat model that would've cost us five X to fix as rework six weeks or two months or two years down the road. That's kind of my general approach. I'd love to provide more data for it, but you kinda have to get into the individual company to...

Jim Manico:

Let me add one more thing to that thought, Chris. If I use threat modeling and I've discovered SQL injection in a pattern, that's not a good use of threat modeling, 'cause I can just catch SQL injection in static analysis with really good accuracy right now. And the question was relating to static analysis. Threat modeling should be identifying things that static analysis can't find, or can't find well, to really measure ROI.

Chris Romeo:

Yeah, I mean, you're talking business logic, right? There's not a SAST alive that can find business logic flaws right now.

Jim Manico:

Static analysis is useless at access control, because that's just a business domain. Static analysis is useless at access control, which is the number one thing on the OWASP Top 10: broken access control. It's a business domain; everyone has their own policy. So threat modeling complicated access control systems, I think, is a really good use of time.
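The SQL injection contrast Jim draws is worth seeing concretely: string-built queries are a mechanical pattern SAST flags reliably, while parameterized queries keep attacker input out of the SQL text. A minimal sqlite3 sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # String concatenation: the injectable pattern SAST flags reliably.
    return conn.execute(
        "SELECT role FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver keeps data out of the SQL text.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# The classic payload dumps every row through the unsafe version...
leaked = find_user_unsafe("' OR '1'='1")
# ...but is treated as a literal, non-matching name by the safe one.
safe = find_user_safe("' OR '1'='1")
```

Access control, by contrast, encodes each organization's own policy, so there is no universal pattern for a scanner to match; that is exactly where Jim says threat modeling earns its time.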

Izar Tarandach:

Yeah. On the subject of ROI, I'm just going to add one personal thing here. I have what I call question four-A. Question four of the Four Questions is: did we do a good enough job? To me, question four-A is asking the participants: would you do this again? If they are able to come back to me and say, this had value for me and I will do it again, I just proved the ROI of it.

Jim Manico:

And if you bring pizza to the meeting, that will go up.

Matt Coles:

Yes, proven. Proven over the

Chris Romeo:

That's allowed. That's allowed. Alright, well folks, we're coming towards the top of the hour here. Jim, I got one lightning-round question for you, 'cause I haven't interviewed you in a couple of years and I haven't heard your take on this. It might be kind of a hot take, but that's okay; we've had plenty of those throughout the episode. Where do you stand on the whole shift-left thing?

Jim Manico:

I can debate either side. I can debate shift left, but John Steven is giving me a lot of reasons to wanna shift right, actually: let developers crank and deal with it later. So I see emerging research and emerging intelligent discussion on both sides of that. I'll say this: I generally like the idea of shift left, but I'm not tied to it. There's a lot of good research and processes that don't believe in that and are still successful. So I think it's one modality that can be really good for application security, but I'm not religious about it. There are other ways to go about things successfully. There you go.

Chris Romeo:

That's a great answer, and a great way of describing it. I think I fall into the same category: we can shift left, but we can also shift right. So I like where you landed there. So folks, we're gonna wrap up this episode. Thanks to Jim for being a part of this and joining us on the Security Table. This will be available as a recording, both in podcast form and on our YouTube channel, so people can go back and listen again or share it with other folks. You can also find Jim... Jim didn't even mention it, but Manicode Security is what Jim does in his day job, as well as advising lots of other startups out there. So check him out from that perspective. Follow him on Twitter, find him on LinkedIn. He's a wealth of knowledge, and I always enjoy any opportunity I get to interact with you, Jim, and learn from you.

Jim Manico:

It is my pleasure. I'm a big fan, Chris and Izar and Matthew. Thank you for having me on the show. I had a great time.

Matt Coles:

Excellent. Thank you.
