The Security Table
The Security Table is four cybersecurity industry veterans from diverse backgrounds discussing how to build secure software and all the issues that arise!
Debating the Priority and Value of Memory Safety
Chris, Izar, and Matt tackle the first point of the recent White House report, "Back to the Building Blocks: a Path toward Secure and Measurable Software." They discuss the importance of memory safety in software development, particularly in the context of critical infrastructure. They also explore what memory safety means, citing examples like the dangers of using C over safer alternatives such as Java, Rust, or Go.
The debate covers the effectiveness of government recommendations on software development practices, the role of memory safety in preventing security vulnerabilities, and the potential impact on industry sectors reliant on low-level programming languages like C and C++. The dialogue highlights different perspectives on the intersection of government policy, software development, and cybersecurity, providing valuable insights into the challenges and importance of adopting memory-safe programming practices.
Helpful Links:
BACK TO THE BUILDING BLOCKS: A PATH TOWARD SECURE AND MEASURABLE SOFTWARE - https://www.whitehouse.gov/wp-content/uploads/2024/02/Final-ONCD-Technical-Report.pdf
Dance Your PhD 2024 winner, WELI, Kangaroo Time: https://youtu.be/RoSYO3fApEc
FOLLOW OUR SOCIAL MEDIA:
➜Twitter: @SecTablePodcast
➜LinkedIn: The Security Table Podcast
➜YouTube: The Security Table YouTube Channel
Thanks for Listening!
Hey folks, welcome to another episode of The Security Table. This is Chris Romeo. I'm joined by my friends Izar Tarandach and Matt Coles. The crowd goes wild. Our one fan in Australia goes wild. Whoever you are out there, we're just sending a shout out
Matt Coles:And that guy at the gym. Who? And that guy at the gym.
Chris Romeo:Yeah, the guy at the gym
Izar Tarandach:The guy at the gym, yeah, don't forget the guy at the gym.
Chris Romeo:yeah. We're really moving though, we've got, you know, two fans now, so.
Izar Tarandach:But hey, interesting thing, interesting, I learned about Australia this, uh, this week. So apparently there is a, uh, a contest every year, Dance Your PhD. So, like, people who get their doctorates, they're supposed to make a video clip of them doing some kind of dance about their PhD. And this year's winner, this year's winner is a guy from Australia who actually worked on finding out that every kangaroo has a different personality, and that leads their social behavior and whatnot. And the clip, I swear you have to see this thing. So go look for
Chris Romeo:send it, we'll put it in the show notes here so that people can tune in to this
Matt Coles:I kind of wanna, I want, I kind of wanna see what somebody who did a PhD in cyber, in cybersecurity would look like. Jumping through holes. Uh, attacking, attacking people.
Chris Romeo:No, they'd be laying on the ground crying, that's, that would be their
Izar Tarandach:I think it's called the Swan Lake. And it has the one where all the swans die in the end or something like that?
Chris Romeo:Hey, dude, spoiler alert! I haven't seen Swan Lake yet!
Matt Coles:or, or Nutcracker or the Nutcracker.
Chris Romeo:Uh,
Izar Tarandach:cracker!
Chris Romeo:I never, I'm never willing to go watch that just based on the title. I haven't really researched it, but I'm
Izar Tarandach:Way too personal.
Chris Romeo:what that, what actually that entails. Well, we should probably talk about something in the realm of cybersecurity. Here comes the awkward transition: Back to the Building Blocks, a Path Toward Secure and Measurable Software. So this is a, uh, document that was released, uh, around the end of February, 2024. And it's a statement of support for software measurability, which I don't really know what that means, and memory safety, which I feel like I know what that means. And so let's, let's explore this topic a little bit and see what, uh, we might have to say. That's good measurability right there. So, we can deal with the easy one first. I think we can all agree that there are more risks and more threats on the internet each and every day. And, blah blah blah, we can skip over any introductory material. And we can even skip over the history lesson where they talked about the Morris worm. I remember that one like it was yesterday. Because I was in middle school when that happened, I don't remember it at all. Slammer worm, I do remember that one, 2003. I was working incident response at the time. And that was a, that was a challenging couple of days, let's put it that way. And then even Heartbleed, which we all lived through. They said Heartbleed in 2014? No way.
Izar Tarandach:Oh, no way.
Chris Romeo:that? Fact checking. Heartbleed was way before 2014, wasn't it? Before
Izar Tarandach:feel like it was like, yesterday. Let me see, I was at EMC, so yeah, that sounds like it.
Chris Romeo:Yeah, maybe it's right. I don't know. All right, let's talk about memory safe programming languages. So that's one of their, their guidance things that they're providing for us here, that, um, we should, we should use memory safe programming languages. And I'm just looking for a definition. Unfortunately, they didn't give one of what a memory safe programming language is. Uh, but when you guys think memory safe programming languages, Matt, you have a definition. You always have a definition for me when I'm in peril here in the middle of an episode.
Matt Coles:Yeah, so, so memory safe, memory safety in general is, uh, a part of a computing system that has certain guarantees when it comes to memory management and buffers. And so memory safe languages are things that enforce boundaries on, uh, on buffers. Such that, for example, let's take an example, a good example here is C versus C++, I'll say. I'll go out on a limb here: C is a memory unsafe language. Uh, I probably could use Java as the safe language too, but I'd rather use C++ for now. It's not, it's close, close enough for this purpose. So C is, well, for the most part, so C is, is memory unsafe. When you allocate a, when you allocate a buffer in C, you get a pointer, and that pointer points to memory, and you're expected to know how big that memory is, and what your bounds are, and how to do pointer math, right? And when you write strings to it, there are string functions, like gets(), that will just arbitrarily put data into, into that buffer, and it will overflow, if you allow it. As opposed to something like, uh, the standard template library for C++, which has a string function, or a string object, which, which we call a C++ string, which knows how big the memory buffer is and puts boundaries in your way, uh, in theory, right? In our way. You can still break that, of course, because you can ultimately get to the C string underneath, but, but the, the, the library itself gives you guardrails. Right? We like to use that term guardrails, uh, so that you, as a developer, don't have to worry about it and don't fall into the trap of, um, putting too much data into a buffer that doesn't have memory for it. And so the theory is that a memory safe language would have a natural construct, meaning it would have a type, say, of string, that was automatically doing that bounds checking for you, you wouldn't have to use a library function like in C++, and the compiler, or the framework, or interpreter, or whatever, would handle all of that for you, and take care of that, so you can avoid memory exceptions, memory faults, and the whatnot. So that's, that's the general gist of a memory safe language.
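To make the contrast Matt describes concrete, here is a minimal C++ sketch: a fixed-size C-style buffer that nothing in the language stops you from overrunning, versus a std::string that owns its memory and tracks its own length. The buffer size and input text are made up for illustration.

```cpp
#include <cstring>
#include <iostream>
#include <string>

int main() {
  const char *input = "a string considerably longer than sixteen bytes";

  // Memory-unsafe, C-style: the array is 16 bytes and nothing checks the
  // length of what gets copied into it.
  char fixed_buf[16];
  // strcpy(fixed_buf, input);  // classic overflow: writes past the buffer
  // gets(fixed_buf);           // the poster child; removed from modern C/C++

  // Bounded copy: safer, but the programmer still has to get the size right.
  std::strncpy(fixed_buf, input, sizeof(fixed_buf) - 1);
  fixed_buf[sizeof(fixed_buf) - 1] = '\0';

  // The "guardrails" case: std::string knows its own size and grows as
  // needed, so the same input cannot overflow it.
  std::string safe_buf = input;
  std::cout << "std::string holds " << safe_buf.size() << " bytes safely\n";

  // The escape hatch Matt mentions: you can still reach the raw C string
  // underneath and lose the guarantees again.
  const char *raw = safe_buf.c_str();
  (void)raw;
  return 0;
}
```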
Chris Romeo:So what are those, what are those, what do we consider memory safe languages today then
Matt Coles:Yeah, so the NSA, the NSA has a list, I think, that they've been working on. Um, the big ones, I think, are, are, um, the big ones, with some caveats, I believe, are things like Rust, Go, Java, Python, Ruby, uh, and, and like I said, C++ can include memory safety. Again, these are all with caveats, because many of them still will allow for direct memory access and have certain things that can allow the violation of memory safety. But some of them have better boundaries, better guardrails than others.
Chris Romeo:And I, and I think I think something we can, we can draw a parallel to as we're facing these memory safety issues that just came to mind is when I think about SQL injection. Right? We've known about SQL injection for a long time, and we've only recently gotten to the point where frameworks are doing good things to make SQL injection harder to add to your code. It's hard to add a SQL injection function, but there are ways, even with ORMs, Object Relational Managers, to
Matt Coles:Object relational mappings.
Chris Romeo:mappings, I
Matt Coles:Mark Mappers or something like that. Ha ha
Chris Romeo:thank you. Thank you. I always want to give it its own, its own special name that just, that I call it. But the point is you can still execute raw SQL. Most ORMs have that capability. So it's like you have, but you have to choose to do it. And so it's not, I wouldn't say that using ORMs is secure by default. They're almost secure by default. But once again, you have that, you have that need to use the functionality, that sometimes you have to execute raw SQL because the ORM is just too slow to make the request.
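The escape hatch Chris describes comes down to the difference between building SQL out of strings and binding values through a parameterized statement, which is what ORMs do for you under the hood. A small sketch using the SQLite C API from C++ (not any particular ORM; the users table and column names are invented for the example):

```cpp
#include <sqlite3.h>
#include <cstdio>
#include <string>

// Sketch only: contrasts string-built SQL (the "raw query" escape hatch)
// with a parameterized statement. Table and column names are hypothetical.
void lookup_user(sqlite3 *db, const std::string &name) {
  // Injectable: attacker-controlled 'name' becomes part of the SQL text,
  // so a value like "x' OR '1'='1" changes the query itself.
  std::string raw = "SELECT id FROM users WHERE name = '" + name + "';";
  // sqlite3_exec(db, raw.c_str(), nullptr, nullptr, nullptr);  // don't

  // Parameterized: the value is bound to the placeholder and is never
  // parsed as SQL.
  sqlite3_stmt *stmt = nullptr;
  sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?;", -1,
                     &stmt, nullptr);
  sqlite3_bind_text(stmt, 1, name.c_str(), -1, SQLITE_TRANSIENT);
  while (sqlite3_step(stmt) == SQLITE_ROW) {
    std::printf("id = %d\n", sqlite3_column_int(stmt, 0));
  }
  sqlite3_finalize(stmt);
}
```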
Matt Coles:Right. And the same with memory safety, right? The whole reason why C is still a popular language is because you need to be able to do memory, you know, memory manipulation. You know, say if you're on an embedded processor, for instance, and you have limited function capability, limited memory and processing power, you may need to do that manipulation yourself. The memory safety, I don't know the statistics on this, but just thinking about how languages work, I imagine memory safe languages add overhead.
Izar Tarandach:Okay,
Chris Romeo:They have
Izar Tarandach:parenthesis here. Can I call bullshit on this whole thing?
Chris Romeo:Beep! Oh, oh, I forgot to hit the beep, sorry.
Izar Tarandach:Because, listen,
Matt Coles:you can.
Izar Tarandach:I, yeah, I, I was born inside of C, right? I came to the world of computing with a C compiler in my hand. And back in those dark ages, there was already Libsafe, Valgrind, Mem God, I forgot already how many different ways there were to go and attack the memory problem. Purify came later and became other things that we use today. But everybody knew that that was like the thing. It wasn't even the skeleton in the closet. It was the corpse sitting by your side as you wrote code. Right? So people knew all that.
Matt Coles:I still have a copy of the Ten Commandments of C programming, and one of them is you'll not poke
Izar Tarandach:play with the NULL pointer! Yeah, but my point here is exactly what Matt said. All these languages, that they are like memory safe, and yes, as languages they are memory safe, but many of them let you play with the byte code that they create themselves, as part of the language. An obscure part, but they let you do that. Then you went to C++ and you said, well, there are libraries that make it memory safe. Yes, there are, but C also has libraries that make it
Chris Romeo:Yeah, but people didn't ever use them.
Izar Tarandach:No, no, wait, wait, wait, wait. But then I go back to your, to your example of the, the ORM, and when you need to go and do the, the raw query. These memory safe, uh, uh, languages, most of them, and I think that's fair to say that today, still, sometimes they are too slow for what they need to do. And that's when people get smart and they go one level lower, and lo and behold, memory problems are there too. Not to mention that the whole framework that they're built on, the compilers, interpreters, whatnot, at some point it's written in C or C++, so as those things keep being developed, the problems are going to keep appearing. I don't remember which CVE it is, but there was a buffer overflow in a Java interpreter not, not, not a long time ago,
Chris Romeo:But that's infrastructure, like, you can, you can separate that into infrastructure used to build the software versus the final product that
Izar Tarandach:But wait, wait, the point, the point that I was trying to make is, is this really important, with all the other stuff that we have many levels above the memory corruption? Is this really where we need the White House putting their fingers in?
Matt Coles:so, on that front. So let's just unpack this just a little bit. So, first off. Should the White House be getting in the middle of what, what often is a religious warfare among, among developers about what language to use,
Izar Tarandach:It won't be the first time they get in the middle of religious warfare, so they have the expertise.
Matt Coles:So, so, but I mean, should, should the, should the government be in it? So I guess on the one hand, this is a recommendation, not a requirement, but it's a strong recommendation to do something different, right? Here, industry, we recognize there's a problem. Do something different. Here's a way to do that. Is that, is that the right thing to do? That's, that's number one, right? Maybe let's answer
Izar Tarandach:Here's a bucket,
Matt Coles:Well,
Chris Romeo:if, I mean, what do you, it'd be one thing if we could disagree with the guidance that they're giving as, as multi decade security professionals, all of us sitting here, it's one thing, but there, there is, there, there's really not a lot to disagree with here. Like, I mean, do we, outside of the political issue of, should they be in this lane? Do we fundamentally agree or disagree with the guidance they're giving, which is, which is move towards memory safe languages for new stuff,
Matt Coles:so I'll just point out that they stopped a little short. Right? They said use memory safe languages when you can, and then provide an alternative if you can't, right? Moving to memory safe hardware and something called formal methods. Now, memory safe hardware, right, is taking what the software could do and enforcing it on chip, right? Such that you have a controller, physical controller, hardware controller, that handles memory,
Chris Romeo:which we have some of that, right? Like there's, there, there's been some innovation in the last 20 years with like, uh, the no execute bit in CPUs, right? You can mark parts of memory as no execute so that technically you could have a hardware protection. And so I
Matt Coles:you don't have ASLR, you don't have ASLR in hardware, I don't believe.
Chris Romeo:Yeah, but the NX is like a processor instruction though. Like it's, it, it, you can, you can carve out areas of memory that you specifically say cannot, you can't
Matt Coles:Your heap memory, your heap memory more
Chris Romeo:And so, but like things like that are
Izar Tarandach:You, you mark them in software, wait, you mark them in software, but you check them in hardware. The problem is that they
Chris Romeo:Hardware enforces, you set the policy in software and hardware enforces the policy
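A rough sketch of that split on a POSIX system: software states the policy through mmap/mprotect page permissions, and the MMU with the CPU's NX/XD bit is what actually refuses to execute a data page. The buffer contents here are just a stand-in for illustration.

```cpp
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
  // Software sets the policy: ask for a page that is readable and writable
  // but not executable.
  const size_t page = static_cast<size_t>(sysconf(_SC_PAGESIZE));
  void *buf = mmap(nullptr, page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (buf == MAP_FAILED) return 1;

  // Imagine attacker-supplied bytes landing here (0xC3 is an x86 'ret').
  std::memset(buf, 0xC3, page);

  // Tighten the policy: data only, never code. The hardware no-execute bit
  // in the page tables is what enforces this; jumping into the page now
  // faults instead of running the bytes.
  mprotect(buf, page, PROT_READ);

  // reinterpret_cast<void (*)()>(buf)();  // would crash with SIGSEGV

  std::printf("page is now readable but not writable or executable\n");
  munmap(buf, page);
  return 0;
}
```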
Izar Tarandach:So the moment that you've set the policy in software, in theory, you can intercept, intercept and change the policy before it reaches the, the hardware where it's checked. So it's, it's not 100 percent
Chris Romeo:But at that, I
Izar Tarandach:best of my memory.
Chris Romeo:at that point, the, you know, the, the, the figure from the horror movie is already inside the house.
Izar Tarandach:So that's exactly my point. Here, when, when
Matt Coles:call is coming from inside
Chris Romeo:Yeah, the call's coming from inside the house. That's what I'm saying. If you're, if you already have that level of access to an executable, to a running process in memory, I don't need to worry about the policy. I'm already inside. And so, but my point is that protection is at the gate. So if you've hurdled the gate, there's, there's nothing else. I mean, then, then we got lots of other problems. There's lots of other ways you can hurdle the gate.
Izar Tarandach:but, but my problem, my problem is different. You, you, you look at the depth of the thing. I'm trying to look at the breadth. Perhaps, apart from Matt, do you know many people in your day to day interaction with developers that write C or C++? No,
Chris Romeo:Apart from Matt. You said apart from Matt.
Izar Tarandach:I know what he works with.
Chris Romeo:Yeah.
Matt Coles:Yeah.
Chris Romeo:I know. I know. I do not. Now, granted, I spent 11 years of my career at Cisco. And in those days I would say, yes, I know
Izar Tarandach:Yeah, totally. Yeah, of course.
Chris Romeo:that.
Izar Tarandach:But again, it's the same thing with the people that Matt works with, right? They're writing close to the metal. Last I know, what was it? I don't remember the last time I saw a chip that could interpret Java, but I know that such a thing exists.
Matt Coles:Uh, they exist on smart cards
Chris Romeo:Yeah, true.
Izar Tarandach:we know how smart those cards are. But my point is, when I'm trying to educate developers in stuff that happens closer to where users interact with systems, uh, in threat modeling, in, uh, uh, interactions between systems, all that good stuff, I can count on the fingers of one short-fingered hand the number of people who would even consider memory problems.
Matt Coles:because you're in a high level language that doesn't have, that naturally addresses
Izar Tarandach:high level world.
Chris Romeo:Yeah. But that's the whole point. That's the whole point of this is why not just give developers those, those newer, newer style high level languages that already provide the memory protection
Izar Tarandach:We already did. We are not even teaching people C anymore. We are already in this world where 90 whatever percent of developers already live in this world where they
Chris Romeo:Well, we live in a web world. We live in a web world, right? Most development is being written in JavaScript these days, using various frameworks. Like, we're not, we're not interacting with metal. Some people are, but the, the, the vast majority of the 30 million developers on earth are not writing C or C++.
Matt Coles:And so this actually brings up an interesting, sorry, so this actually brings up an interesting point. So there was a, there was an article that somebody wrote in response. Uh, they posted on Hackaday, uh, where they talked about this being a red herring in security. And for that exact point that Izar raised: if you look at all the CVEs that are getting created, and you look at the weaknesses that those CVEs are coming from, a very, very, very, very small percentage are memory safety problems. They are mostly broken authentication, broken authorization, web related issues, SQL injection. Yeah, well, OWASP top 10 or CWE top 25, with the exception of the buffer related items. There are still those, because they do happen. And when they happen, they are really severe,
Chris Romeo:Yeah.
Matt Coles:right?
Izar Tarandach:let's revert that question. Will this solve any of the OWASP top 10 problems?
Matt Coles:probably not the OWASP top 10, but certainly the top 20, CWE top 25.
Chris Romeo:Well, I mean, there are, you can write web applications in Go and Rust. People
Izar Tarandach:Yeah, but when was the last time that you saw someone say, hmm, I'm going to write a web application: int main(char **argv)? It doesn't happen anymore.
Chris Romeo:It's possible, but
Matt Coles:Well, but, but, but also, but also what, I mean, when was the last time you saw memory safety or memory corruption as a, as a problem in a web application?
Izar Tarandach:I can't remember. Oh, I was skipping this one for the whole podcast. Okay, so
Matt Coles:So,
Izar Tarandach:No, seriously, I can't remember.
Matt Coles:so there was a quote, there was a quote in the White House article that they, or sorry, there was a quote in the NSA, there was a follow up article from the NSA talking about memory safe languages, like what are memory safe languages, and they, they provided some, some statistics that Microsoft and Google were reporting that memory safety was something like 70 percent of their issues. Now, those are not translating to CVEs. Those are translating to, obviously, if you run static code analysis across a code base, you're going to get a ton of stuff that hopefully never makes it out into the light of day. So is this addressing publicly reported vulnerabilities, or is this reducing developer effort and workload by moving to memory safety?
Izar Tarandach:I would say that it's reducing possible future CVEs. Again, I posit that there are exactly three people in the whole country who read this thing and went, Whoa, that makes sense. We should stop using C and C++.
Chris Romeo:Nah, I disagree. I'm going to disagree violently. So,
Matt Coles:Okay.
Chris Romeo:not violently, but just, I'm just going to disagree. So, but when you think about CISA and the White House, what are the, what are the categories of technology that they're most concerned with? Is it web applications? Is that what keeps them up at night? No, it's critical infrastructure and it's ICS,
Izar Tarandach:take me there,
Chris Romeo:but it's
Matt Coles:ICS and OT,
Chris Romeo:what is ICS, what, what is most of the code for ICS developed in, the products that are running our power grids,
Matt Coles:and assembly,
Chris Romeo:they're written in C and C++ today. And so,
Matt Coles:assembly
Izar Tarandach:Which, by the way, is memory safe.
Chris Romeo:Okay, but then also think about this, okay? If you're driving a Tesla or any automobile, what is the bulk of the code that's running the safety systems in a car built, you know, that was, that you bought this year? Is that, you're not writing that safety code in, in Java. Heck no, I don't
Izar Tarandach:just explain,
Chris Romeo:I'm hit my
Izar Tarandach:just explain why I'm not driving a Tesla.
Chris Romeo:Yeah, but I hit my, if you wrote all my safety software in Java, it would be so slow. I hit the brakes and nothing would happen for 10 seconds while I plow into a tree. And then it pops up a message that says you got to upload your J or update your JD, J, whatever the JRE
Matt Coles:Java, Java has Java has real time, has a real time extension, right? I mean,
Chris Romeo:Would you put your safety, we could run an experiment where we run a car controlled by Java with real time enabled driving towards you and we'll hit the brakes at a certain point and you can decide whether you want to. Test that, uh,
Matt Coles:I mean seriously, it is primarily going to be, I would imagine, I don't know for sure, but I would imagine it's a lot of C and C++ code, right? But it could also be Go, it could also be Haskell, it could also be Rust. We don't know, but something that's performant
Chris Romeo:but I think we got to get out of the web world. I think, I think we're kind of, we're putting web blinders on for a second. And I don't think that's what they're, I don't think that's what they're focused in on is the
Matt Coles:Yeah,
Chris Romeo:They're focused on the things that they care about the most.
Izar Tarandach:Rust in the Linux kernel. I don't know a lot about it, about the details. It's been a
Matt Coles:It's, it's recently introduced.
Izar Tarandach:but it's recently
Matt Coles:Yep.
Izar Tarandach:and apparently it's making a big splash. Apparently
Matt Coles:Good. In a good way, right?
Izar Tarandach:yeah, yeah, yeah. Apparently you can even write drivers in it. So that issue of, like, memory mapped and shared and all that good stuff, and things that you could overflow all over the kernel, uh, might go away. And I hear that people are very, very enthusiastic about it. So, that is a move that, first, I wholeheartedly support, even not knowing all the details, but the little I know already makes sense. But, uh, did we need a White House memorandum to do that, or did we just need a bunch of smart people who decided this thing in the kernel and said, hey, this is a good, good thing to do, let's do it?
Chris Romeo:I can't believe I'm arguing on behalf of the White House at this point. Like, what's happened to my life where I'm, like, taking the side of, I mean, but it's, it's a, it's about just spreading the word at this point. Right? Like, we've had so many of the same problems that have existed. This class of bugs is what took out, what the Morris worm was, right?
Izar Tarandach:So next week we're going to, to have a memorandum saying don't write raw SQL queries?
Chris Romeo:That'd be nice. I'd, I'd read it. I'd be happy about it.
Izar Tarandach:No, I mean, where do you go from here? Are we You see where I'm going? Like, is the White House now competing with OWASP?
Matt Coles:Somebody's smoke alarm going off.
Chris Romeo:that's mine. Keep going.
Izar Tarandach:Uh, do you need us to call 911 for you?
Chris Romeo:Now I have somebody upstairs who's working in the kitchen who I hope is on top of it. So.
Izar Tarandach:I think somebody heard your comment about the White House. But For our
Chris Romeo:And we're clear. We've
Izar Tarandach:we love
Chris Romeo:level. Safety level zero has been reachieved in the house here.
Matt Coles:So, so Izar, are you saying, so I guess, are you saying on the point of should the White House, should the White House or the government in general be driving this behavior? Sure.
Izar Tarandach:No, no, no, no. I'm cool with them driving this behavior. I'm I'm just asking for is this the right behavior to be driving? Like, shouldn't we go way, way, way, way, way above?
Matt Coles:well, so, I mean, if you take, if you take Chris's view, which I agree with, right, that their focus is critical infrastructure and military systems and, you know, national defense and national security, the water supply and the electrical grid and these sorts of things, and these are all written in low level languages today, right? They have to be performant, real time performance, correct, you know, fully functional, and fail safe and fail resistant, or fault tolerant, right? And so that's their focus primarily, right? And in that regard, they're putting a position out there, and others can take advantage of that information as the technology has become better. Right? If anything, the government has always been in a position of driving innovation and R&D that trickles down to the commercial sector,
Chris Romeo:If anything, the government used to be more involved in the security side in the seventies, as
Matt Coles:With DARPA and NIST
Chris Romeo:far as pushing the envelope, that's where a lot of guidance really came from in the early days of our industry. We don't, we don't often look back on that and reflect on it, but that's where the bulk of our guidance came from.
Matt Coles:right? That and partnerships with universities,
Chris Romeo:Yeah, true.
Izar Tarandach:let's take a peek at that for a second. Let's, let's say that the, the reason behind this, this move here was infrastructure. You guys know that the life cycle of infrastructure is way, way, way longer
Chris Romeo:This is a 50 year
Izar Tarandach:normal software. So, okay, so we're going to, we're going to reap the benefits of this in 20 years, while for the next 20 years we still live with infrastructure that we know is completely
Chris Romeo:I got, I got a little hung up on that too, but yeah, but, but I mean, so like, that's kind of a,
Izar Tarandach:and I'm willing to, I'm willing to bet, I'm willing to bet that the new versions of hardware and software that are coming onto the market now, that are candidates to substitute the infrastructure, the broken infrastructure, that's in place, already come with enough guardrails and safeguards to be in a much better space than, oh my god, somebody's sending me a packet, uh, a ping of death.
Chris Romeo:yeah. I mean, but at this point, like, we can't look at it and go, oh, you know what? We can't take the Eeyore approach, where, oh, everything's terrible and there's no solution, so let's just, let's just sit here and do nothing, right? We got to do something. Does it solve the problem today? No, this doesn't solve the problem today. It doesn't solve the problem five years from now. The hope is that by 10 years from now, this shift in policy, as far as what languages people are using, is causing products to make their way towards memory safety. And maybe you draw a line at some point in the future, like 10 years from now, and say, 10 years from now, we're going to replace all of this crap. So if you don't have new products that do this now, okay, maybe that's what should have been in here. Maybe, maybe
Izar Tarandach:I would feel much better if they said, you guys have five, five years to rip everything out and put everything that's memory safe in place. Then I would sleep much better.
Chris Romeo:who's going to write the check for trillions of dollars to pay for the, the replacement of all of our critical infrastructure technology stacks.
Izar Tarandach:Don't you have a printer at home?
Chris Romeo:You print money. Yeah,
Matt Coles:don't, I don't want the treasury department showing up at my door.
Izar Tarandach:Again?
Chris Romeo:they already show up enough. Um, yeah, so I mean that, but, but I, I would say we don't, in my mind, we don't not do something just because it's going to take a long time. Like we got to get this ball moving in the right direction,
Izar Tarandach:that, that, that's true. But what, what, what, okay. As a mitigation, what makes more sense? To go back to the infrastructure, the broken infrastructure that's in place today, look at that black box that controls the flow of water from the reservoir, and ask yourself, hmm, does that thing really need to be publicly accessible? Perhaps I can use some other network that, uh, perhaps most people can't get into,
Chris Romeo:most of that
Izar Tarandach:or rewrite it from zero.
Chris Romeo:It's not publicly accessible today at this
Matt Coles:boy, we hope
Izar Tarandach:Then what's the problem?
Chris Romeo:I mean, attackers, I, attackers have been known to enter systems and move laterally and make their way through, especially nation state attackers that are highly
Izar Tarandach:If it's laterally, then it's accessible at some point.
Chris Romeo:Well, everything's accessible at some point. That's
Izar Tarandach:no, you could say, you could say the internal, the internal threat, and I would wholeheartedly agree with you, right? But if
Chris Romeo:Which, uh,
Matt Coles:for, uh, for centrifuges is
Chris Romeo:yeah, yeah, it was the
Izar Tarandach:Oh, Stuxnet!
Matt Coles:Yeah.
Chris Romeo:Stuxnet. There was, but it was malware that, that broke the air gap on a USB, 'cause somebody made a bad decision and plugged something in. And so, like, we know as security professionals, there is no such thing as secure. There is no, it doesn't matter, unless you, unless you disconnect it from the internet and bury it in your backyard. Without Wi-Fi and power, like, you bury it in the backyard with no internet connection, then you can tell me that thing is secure. But other than
Matt Coles:but also,
Izar Tarandach:it's
Matt Coles:but also is
Izar Tarandach:going to unbury it and get it, but the point is that
Matt Coles:but is there your, yeah, go ahead. No, you finished your thought. Are you finished?
Izar Tarandach:that would be addressed internally by the government. It's between them and their providers. This is out in the public for everybody to consume. Sorry,
Matt Coles:I, I, I, no, no, you're making, you're making an interesting point. I think that one thing to consider is this, of course, should not be read in a vacuum, right? Put this with the, with the executive order, you know, when it was 14508 or whatever that, whatever the number was, right? And, and things like CISA's secure by design, secure by default guidance, which, you know, they're all starting to synergize now. But they're up here, but they're meant to be looked at as a comprehensive part, right? And so, memory safety is one of those building blocks that you add when you... So you do threat modeling, and you figure out what your attack surface is, you make sure you have authentication, and then when you get to the core critical function that's
Izar Tarandach:You put on a parachute and you dive all the way down to memory safety? That's what you do?
Matt Coles:So, so what you're suggesting is that the White House maybe jumped into the pit before they, uh, addressed the ladder?
Izar Tarandach:in a threat model do you catch yourself saying, oh wait, you guys are sending this big, uh, a string? Are you sure that you are checking for length? Because, you know, at some point there might be a buffer. When was the last time that you raised a buffer overflow in the threat model?
Matt Coles:I do.
Izar Tarandach:don't count. I told you that you don't count,
Chris Romeo:design sits above the threat model, right? Threat modeling. So this is, this is one of the things I've seen people, I'm not saying you're struggling with it, but I've seen the industry struggle with this. And that is that secure by design, they equate secure by design and threat modeling, that they're the same thing. And I say threat modeling is a, is a vehicle, is a supporting mechanism of Secure by Design, but it is not, you cannot say, I do threat modeling, so I am Secure by Design. Remember, like, choosing a language is an architectural choice, which should happen at a certain, certain degree of threat model. If you're talking, if we're going into the room, the three of us, with an engineering team, and they're building a brand new product from scratch, we are going to ask that question in some of the early meetings, before we even draw a picture. We're going to be asking about what language direction you think you're going in. Oh, we're going to write this in C,
Matt Coles:Well, even if, even if we're doing threat modeling, even if we're doing threat modeling, I always ask, what language is this written in, because it can change across a, across a system, right?
Izar Tarandach:But not, not only that, but when we go into that, we are exactly looking at eliminating the class of vulnerabilities or flaws that's called memory management, right? Yeah. And lo and behold, we've been doing this for years and years and years without a memo from the White House.
Chris Romeo:So you're not
Izar Tarandach:incorporated this into what
Chris Romeo:not arguing with the memory safety. You're just saying you don't think we need a, you didn't, we didn't need a memo to draw our attention to it.
Izar Tarandach:Yeah, I'm just saying, hey, we are up here in security by design, all of a sudden now you're talking to me about memory management, and what's the next one that I can look forward to?
Matt Coles:Well, so the counter argument,
Izar Tarandach:But, uh,
Matt Coles:Oh my god, let's go back to Tempest from 50 years ago? But anyway,
Izar Tarandach:Which by the way, actually, if you, if you could look at it, you could look at the latest work they've been doing about, uh, EMF from CPUs and stuff, and even the parallel memory reading, we're back at Tempest. So yeah,
Matt Coles:I don't disagree with you, yeah, I, I mean, the, the argument back to you might be, who, who in the industry would have, would have launched this, this, um, this discussion around memory safety, and why hasn't it happened before, right? And because it, but it, but it has, on a per, on a per, like, the individual companies have
Izar Tarandach:no, that's why we have so many tools that deal with this stuff.
Chris Romeo:Yeah, but we've, I mean, there's been conversations about memory safety. This isn't the first time anybody's kind of raised their hand and said we should use memory safe languages. This has been happening for the last couple of years as Go and Rust have matured and people have started to say this is a good alternative.
Izar Tarandach:am I
Matt Coles:in the IETF, if you look at IETF and ISO, do you see, do you see calls for memory safety?
Chris Romeo:What would IETF, what would either of those organizations
Matt Coles:IETF at low level protocols, if you're dealing with actual packets and buffers and like doing, you know, pulling bits off of the wire.
Chris Romeo:but they don't prescribe. They, they, they don't, they don't prescribe how you build implementations in an IETF standard. They,
Matt Coles:But they
Chris Romeo:it, what it has to do, well that's
Izar Tarandach:they could, but they don't,
Chris Romeo:that's a whole
Izar Tarandach:wait, wait, but even, even the move that we had way back when that we had XPC and RPC, and it would create these nifty stubs and pass data from one side to another. That was still open to memory, uh, to memory issues, but then came Protobuf, Protobuf.
Matt Coles:Protobuf.
Izar Tarandach:And everybody jumped into that, into gRPC, and they have implementations for C and for C++. So that thing is there already. Use it, use the right functions, use the ORM, and you, you got shielded, right?
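A tiny sketch of the generated-stub idea Izar is pointing at, using protobuf from C++. The Ping message and its payload field are a hypothetical schema invented for the example; the point is that the generated setters and the parser manage buffer sizes for you, which is the "shielded" part.

```cpp
// Hypothetical schema:  message Ping { string payload = 1; }
#include "ping.pb.h"   // header generated by protoc (assumed name)
#include <cstdio>
#include <string>

int main() {
  Ping ping;
  // The generated setter copies into a length-tracked string field owned by
  // the message; there is no fixed-size buffer for the caller to overrun.
  ping.set_payload(std::string(100000, 'A'));

  std::string wire;
  ping.SerializeToString(&wire);   // length-delimited, bounds-aware encoding

  Ping parsed;
  // The parser checks field lengths against the input before copying,
  // instead of trusting whatever came off the wire.
  if (parsed.ParseFromString(wire)) {
    std::printf("round-tripped %zu payload bytes\n", parsed.payload().size());
  }
  return 0;
}
```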
Chris Romeo:All right. We gotta, we have to wrap this conversation up for today. It's been, uh, it's been feisty, which has been good. So hopefully our audience has enjoyed, uh, our back and forth debate. Uh, but this is a great, it's a great issue. It's, it's going to be fun to watch how this progresses over time. Thanks for listening to The Security Table.