The Security Table

Open Source Puppies and Beer

Chris Romeo, Season 2, Episode 1

Chris, Izar, and Matt dig into the complexities of open-source component usage, vulnerability patches, civic responsibility, and licensing in this Security Table roundtable. Sparked by a LinkedIn post from Bob Lord, Senior Technical Advisor at CISA, they discuss whether software companies have a civic duty to distribute fixes for vulnerabilities they discover in open-source components. They also examine whether every third-party component needs its own threat model, and consider the implications of certain licenses for security patches. This is a discussion anyone who uses open-source components in their code needs to have. Listen in and engage as we learn and think through this important issue together!


Links:

Bob Lord’s post about Open Source Responsibility:
https://www.linkedin.com/posts/lordbob_just-a-quick-thought-on-open-source-if-you-activity-7146137722095558657-z_RI

FOLLOW OUR SOCIAL MEDIA:

➜Twitter: @SecTablePodcast
➜LinkedIn: The Security Table Podcast
➜YouTube: The Security Table YouTube Channel

Thanks for Listening!

Izar Tarandach:

I can give you opera...

Chris Romeo:

Opera is something that's required on the Security Table. What a way to start an episode! Welcome, folks, to the Security Table for our first episode of 2024. My name is Chris Romeo, joined as always by Izar Tarandach and Matt Coles. I need to think of better nicknames for you guys, but it was the holiday season. And you have one, Izar; well, that's not really a nickname, that's kind of an actual name. I'll work on that with ChatGPT before the next episode. What we want to dive into is a LinkedIn post. Bob Lord is a guy who works at CISA, his title is Senior Technical Advisor, and he wrote this interesting note about a week before we recorded this, right before New Year's. I'll read it to you verbatim to set you guys up, so you can be prepared to react to it. Here's what Bob writes, and I quote: "Just a quick thought on open source. If you are a software company and you use open source components, you own the QA (quality assurance) and security ramifications for using that component. You didn't write it, but you still own the customer security outcomes. If you find a vulnerability and create a fix, it is your civic responsibility to ensure all users of that open source library enjoy the added security." Civic responsibility. That sounds big. That sounds big. All right, Matt, I can see some pondering in your eyes here. What's your first reaction to this statement?

Matt Coles:

So there's a bit to unpack here. Let's take each part. I mean, Bob is suggesting, I think, more than just that last line. The last line is the important part, but there's more to it, right? If you use an open source component, you own the QA, the quality assurance, and the security ramifications of using that component. That is true, right? If you build software and you use code, whether you're writing that code or you're inheriting that code, the collection of all that code and the thing it creates is that company's responsibility. That makes sense, right? Then: if you find a vulnerability and create a fix for it. The question here is, when you find a vulnerability, should you report it upstream? We can talk about that, reporting upstream into open source projects. But should you report it, or should you try to fix it? And the key part that Bob is calling out here is: if you're using an open source component and you fix it for yourself, should you distribute it to the community? Should you push it back upstream so that all users of that component can make use of the fix that you created, the fix you're now going to be benefiting from as a software vendor? And this gets into...

Chris Romeo:

Bob calls that a civic responsibility.

Matt Coles:

Civic responsibility, yeah. So this becomes a question, I think, at least in corporate worlds, of IP creation and sharing, right? So there are some legal implications, potentially. There's certainly more to it. Whether it's a civic responsibility or not, I guess this goes back to responsible disclosure practices, and to what the nature of open source components is: who's responsible for making fixes, and for distributing those fixes to users?

Chris Romeo:

So Izar, I want to get your take first, and then I want to back us up and dive deeper into the first section, the QA and security ramifications. But I want to get your initial take on this first.

Izar Tarandach:

I wish I had a pope hat to wear here, because it's so easy to pontificate, right? But first of all, a shout-out to CISA, where you said Bob Lord works. Hey, I love you guys' work. And we even wrote letters to you guys. But, uh, great job there, great...

Matt Coles:

No, no, not hate, not hate mail.

Izar Tarandach:

No, no, definitely not. All constructive. But, yeah. So, I would say that there's no way not to agree with the first part, which we're going to come back to, the one where if you use it, then, hey, you own the QA and security. Though I don't think even that's a given. But the second part goes away from the technical, it goes away even from the, what was it? The civic responsibility. It just screams the ethics of it, right? And the IP generation, of course, brings us, as the comments did, straight back to: hey, what license are you operating under? But to me, the main point here is, the moment you put civic responsibility in there, the first question that comes up in this context is: you and what army? So,

Matt Coles:

Who's "you" in this case? Who's "you" in your...

Izar Tarandach:

The person who's telling me that I have a civic responsibility. So, yeah, I fixed it. Why do you think that there's a civic contract here that means I have to go and offer the fix to other people? Especially if you take into consideration the costs involved going forward. And as an open source maintainer, and I think Matt is also going to agree with me, sometimes you get some really, really good patches. Sometimes you get some patches that are less good, right?

Chris Romeo:

You just have to say thank you. "Thank you for this input," and then you, like, delete it.

Izar Tarandach:

Indeed, and I could go on for hours on that. Frankly, my experience has been mostly positive, by a huge margin. I learn from every patch that PyTM gets. But my point is: okay, so you fixed it. So now you exercise your civic responsibility and you push that up. What do you do tomorrow, when another user comes and says that your patch borked their edge case? You stop what you're doing now and you fix your patch? Because now you also own that patch.

Chris Romeo:

Yeah, it's like a commitment, right? It's a commitment forward. It's more than a patch.

Izar Tarandach:

There was a comment here by,

Matt Coles:

Well, so while you're looking up that, that

Izar Tarandach:

by Michael Scovetta, sorry, Michael

Matt Coles:

go ahead,

Izar Tarandach:

Scovetta, saying: open source software provides tremendous value, but it's only free as in free puppies.

Chris Romeo:

Yeah.

Izar Tarandach:

So we've left free beer behind now,

Chris Romeo:

Yeah, it's... wait, I thought it was the other thing. I thought it was... Yeah, it's free as in puppies, not free as in beer. Yeah,

Izar Tarandach:

No, free as in beer, not free as in speech.

Matt Coles:

Right, free from cost versus free from responsibility.

Izar Tarandach:

Yeah, but the free puppies here,

Chris Romeo:

Yeah, hidden cost. Which is like the puppy, right?

Izar Tarandach:

and this thing got me viscerally, because first, I love puppies, who doesn't? And if you don't, I don't love you. But frankly, I mean, it's a puppy: you brought it in, you're taking care of it. And it's going to be with you, I hope, for a long, long time. It's going to give you joy, but you have to feed it and clean up after it and take care of it, right? So that, to me, in some cases, may easily trump the civic responsibility,

Chris Romeo:

Yeah, I

Izar Tarandach:

which is a strong argument.

Chris Romeo:

I want to come back to that. Let's unpack the first section of this, because I know you guys are both like, yeah, yeah, yeah, this is good, this is fine. I don't know that this is as clean as we're making it sound. And so,

Matt Coles:

We will do that.

Chris Romeo:

I love that. What are you guys, the puppy patrol here? I made a noise, like I whistled... All right, so if you're a software company and use open source components, you own the QA and security ramifications for using that component. You own the QA, the quality assurance. In reality, that statement sounds like something that should be easy to agree with. But think about the reality of how people use open source: you own the QA. So that means I gotta write unit tests? I gotta do regression testing?

Matt Coles:

I would be careful not to read too much into that. You own the usage, right? It is up to you as the software integrator. In this case, you're integrating code, right? Integrating somebody else's code.

Chris Romeo:

Okay.

Matt Coles:

You should have a development life cycle that includes QA for the thing that you're building, and any components that you include should be covered. Like code coverage, right? It should include the code paths that come in from that open source component. In that respect, there's no differentiation, I guess. It's part of the whole system, you test the whole system, and if there are side effects that come out of that, you should hopefully be in a position to discover them. If you embed a web server, you know, Tomcat or Apache or something like that, and it has extra features that you keep enabled in your product, that's on you as the software vendor, right? You should close those off...

Chris Romeo:

I own, so wait, I own the vulnerabilities that exist in that software,

Izar Tarandach:

No, wait, wait, wait, wait. We're talking QA now.

Chris Romeo:

No, no, no, I'm trying to, I'm trying to walk you into a path. I'm, I'm, I'm leading you down a path that I'm going to, I'm going

Izar Tarandach:

Look, it's simpler than that

Chris Romeo:

to flip the table at the end. But.

Izar Tarandach:

It's simpler than that, okay? When you're bringing in something that's open source, basically you are bringing in a promise. They promise you that the thing they wrote does A, B, C, D, and E. It's your responsibility to make sure, by using QA, that it does what you need, no more, no less. And that includes the security clause. It's your responsibility to make sure that you're not bringing in risk by incorporating that component.

Matt Coles:

You inherit what comes along with the component. And by the way, the promise, this is where licensing, I think, actually is critically important. If you pull in a component and it has the MIT license, or the two-clause or three-clause BSD license, it's basically: use it as is. You're free to use it as it comes, and there's no warranty implied. That promise is: as a component developer, we believe it works, but it's on you to figure out whether it works for you. As opposed to another license that may say: we validate, we warranty its function, or its fitness for purpose, or whatever language you want to use; if there's a problem, come back to us and we'll fix it for you. That's a different promise.

Izar Tarandach:

sorry, Matt nails it when he says that. When you inherit the thing, people forget that inheritances have taxes on them too.

Chris Romeo:

Hmm.

Izar Tarandach:

So you're

Matt Coles:

In some states.

Izar Tarandach:

Yeah, you're not only bringing that thing in and running around free and carefree saying, hey, I just found the component that does everything I wanted. No, you have that tax you have to pay. To what Matt is saying, you have to pay attention to downstream. What does that thing do? You have to pay attention to how long the thing is going to be around, right? So there's a tax to be paid when you

Chris Romeo:

Let me ask another revealing question. It'll be a leading question, too. Watch: this first one will be the lead, and then we'll come back around. So, does either of you use open source components in anything that you do?

Izar Tarandach:

In the last five minutes?

Matt Coles:

Yeah, I use open source applications or components, yes.

Chris Romeo:

In things that you've built in the last month, do you use open source components?

Izar Tarandach:

constantly.

Matt Coles:

Yes.

Chris Romeo:

Okay. So, have you created threat models for each of those components?

Izar Tarandach:

In my mind. Yeah. My mind is a very threat model place.

Chris Romeo:

But that would be the security ramifications for using that component. So technically, we should have a threat model for every third-party component

Matt Coles:

whoa!

Chris Romeo:

inside of...

Izar Tarandach:

No, no, no, no, no.

Matt Coles:

Timeout, timeout.

Chris Romeo:

Security ramifications for using that component! I'm reading it off the page!

Izar Tarandach:

No, no!

Matt Coles:

That's not... So, as a person who uses a third-party component (let's say open source, or generically anything that comes from a third party, whether it's a desktop application, a software library, or fragments of code), you certainly should be threat modeling it for your usage. That does not mean you're threat modeling its entirety, right? And it doesn't necessarily mean it's a formal threat model. So for instance, let's say I use an open source password manager. I know it has certain security properties, and I'm going to put that into my personal threat model. Now, I don't have that written down anywhere, but I have an understanding of what its security posture is and how it impacts me. A similar process may happen for a software application, but I'm not writing commercial software applications. If I was a commercial developer, or if I was doing this for PyTM, for instance, as another open source tool that was inheriting open source components, then it should be factored into the threat model for PyTM or for the commercial application.

Chris Romeo:

So, let's do a little thought experiment. Go ahead, go ahead, make your point.

Izar Tarandach:

No, no, no, You you, you go No, no, you go.

Chris Romeo:

So, the way he wrote this: "if you're a software company." Let's just pretend we're not threat modeling superstars, right? Because we kind of have an unfair advantage, since we've studied this thing. You guys have written what I think is currently the book on the topic, right? A software developer at a software company does not have the superpower of threat modeling. Maybe they do, but they don't have the knowledge and experience that

Izar Tarandach:

We want them to

Chris Romeo:

We want them to have, yes. But, I guess, how does that change the conversation about how they use a component and consider the security ramifications of that component?

Izar Tarandach:

That's a wonderful question, a wonderful thought experiment. And I'm happy that I shut up and let you introduce it, because now I can go exactly where I was going before. Let's respond to your experiment with the quintessential open source component that everybody uses: crypto. As an aside, if you are doing your own crypto, you shouldn't, unless you are a cryptographer. So, a developer looks at the crypto component that they decided to use, okay? Their threat model assumes from the beginning that whatever algorithm is in there, the implementation is good, the implementation is well done. They don't have, and I wouldn't have, the mathematical acumen to look into the code and decide whether it's a sound implementation of a crypto algorithm or not. But in their threat model, they are going to ask: okay, what happens with the keys? Where do I store them? What's the size of the key? What's the cipher? What's the mode of the cipher? And so on and so forth. But then comes the one question that every developer should be able to put in their mind, even if they don't know all this stuff: what happens if that black box that I am unable to understand, to measure, to check the goodness of, what happens if that black box is broken? So they should be able to design their systems with what even has a formal name, crypto agility, where you should be able to just snap it out, put another one in, get exactly the same behavior, and move on.

Chris Romeo:

We're talking about trust. You're talking about,

Izar Tarandach:

Not, not,

Chris Romeo:

an element of trust that the developer has for a crypto library, because they don't have the ability to test it themselves and analyze it. Which I don't either; I couldn't analyze a crypto library.

Izar Tarandach:

Not quite, not quite. I'm thinking in terms of: what if you had a car, and you have absolutely no mechanical ability whatsoever, and one bright day you're driving and you start hearing clunk, clunk, clunk. You don't know what it is. You drive to a mechanic, they open the thing up, take out the engine, put in another engine, and you happily go on your way. It might not be the cheapest solution, but it's the one that's going to solve whatever that clunk, clunk, clunk was, and it's not going to give you problems down the road. So the open source responsibility of the developer here is to assume that things could go south, and to be ready to snap the thing out and put something else in.

Chris Romeo:

But there's a trust, there's a trust element to this, in

Izar Tarandach:

But that trust is assumed, because nobody is... The licenses and the authors, they're not promising you "trust me." They're promising you: hey, I think that for the things I tested, for the itch I decided to scratch, this piece of code solves it. But that's it.

Matt Coles:

By the way, it's in bringing that component in that those assumptions get made. We talk about this all the time in threat modeling: you make certain assumptions. Hopefully you write them down, hopefully you validate them, and hopefully you're prepared if they become invalidated. But you make certain assumptions based on the nature of the component, right? If the component comes from a questionable source, you know, a single developer; if you use a crypto algorithm that's implemented by a single developer in a questionable country, perhaps, there may be some other issues there. You may make certain assumptions about the fitness of those algorithms, and have mitigations in place, or at least considerations in place for what to do if things go south, right? So there's a risk discussion, presumably, that should happen. There are assumptions that are decided during the threat modeling exercise, if you're doing threat modeling, or they're implied: I got this component, I'm gonna use it, I hope nothing bad happens. I'm making a lot of implied assumptions there, right?

Izar Tarandach:

Now, on the side of those assumptions and on the side of trust, since we already went that way: there is, I think it's the OpenSSF Scorecard project, that does try to give a measure of... perhaps trust is the wrong word, but

Chris Romeo:

more assurance. It's not trust, it's assurance. Because they're

Izar Tarandach:

Right, but I think that the same way severity got swapped for risk, in this specific case assurance becomes trust.

Matt Coles:

Assurance enables trust, right?

Izar Tarandach:

Yes. Yes. That's what I

Chris Romeo:

I grew up learning about assurance in the late nineties, evaluating products using government standards and such. So for me, assurance is evidence and proof that lets me get more warm and fuzzy that security and privacy have been handled correctly. It's not foolproof. It's,

Matt Coles:

It's not foolproof. Assurance drives and enables trust, and the more assurance you have, the better trust you can have,

Izar Tarandach:

Yeah. Yeah. Yeah.

Chris Romeo:

The better you can sleep at night saying, yeah, I trust this thing. Why do you trust it? Well, because I've seen their threat model. I've seen the SDL they apply to it. I've seen their test cases, and I think they're doing everything they possibly could do. Okay, that's assurance, and that breeds trust. Then I can say: okay, they're doing everything they can to make this component as high-quality, as secure, and as privacy-conscious as possible.

Matt Coles:

Izar, you have a smile on your face. What's your

Izar Tarandach:

No, I'm thinking here to myself: we are treating trust as a transitive value when we don't really have a trusted root of trust. I mean, for example, the Scorecard. We are just trusting the OpenSSF and what they decided to measure in the Scorecard,

Matt Coles:

It's turtles all the way down if you go there, Izar.

Izar Tarandach:

it is, it is, but I'm looking for the biggest, I'm looking for the biggest turtle, the one on top of the elephants.

Matt Coles:

I guess what you're bringing up is actually an interesting point, and we're straying off topic from Bob's original blurb, but I think it's an important one. How do you gain trust from assurance if the process of assurance is not itself trustworthy?

Izar Tarandach:

So let me bring Bob back into the conversation. How do you have that trust, knowing that just anybody can submit a security patch? And I say security patch, as distinct from a regular patch, because a security patch probably has a bigger risk associated with it than any other patch, because it's touching something that is a security condition, a security check, a security case.

Matt Coles:

So I...

Chris Romeo:

If you're going to go there, though, just real quick: you have to separate the open source universe into projects that Matt created over a weekend and uploaded to some source repository somewhere, that people are downloading, versus OpenSSL. None of the three of us could get a security patch through OpenSSL's process at this point, because we're not trusted members of the team, we're not,

Izar Tarandach:

oh,

Matt Coles:

Oh, no, no, no, we need to... that's an important area, but I think we're barking up the wrong tree there, right? We do not have the ability to commit directly to mainline on OpenSSL, 100%. But we could probably get a patch into OpenSSL. We could either do it by tricking a maintainer, or by having a good patch that a maintainer reviews, because they have a reputable process for reviewing commits, right? Or patches, right? And so, yeah, go ahead, Izar. I was going to say, there's a precondition here. Somebody finds a vulnerability, and for the moment let's assume it's a legitimate vulnerability, and decides to create a fix. Versus: finds a vulnerability, reports it, and waits for a fix to get created. So first off, somebody finds a vulnerability and reports it, or decides to try and fix it themselves and then push the patch upstream, right? And now it comes down to the project, to what their practices are for how those patches make it to mainline, to commits, out through the release train. You know, if you look at OpenSSL, or at FreeBSD, which I'll highlight because they have really good documentation on their website: they have dedicated maintainers who have been validated and vetted. They have tooling, a system, and a pipeline in place with all the right checks and balances, you know, automated QA runs and regression testing and the whole works. And that establishes trust, 'cause you have assurance in the process that allows you to establish trust in whatever patch gets pushed

Chris Romeo:

not at the front door,

Matt Coles:

throughout the, throughout the pipeline, throughout the

Chris Romeo:

where you're trying to get your PR in, submitting your PR as, let's call it, a nothing burger in

Matt Coles:

Yeah, as an untrusted individual, but somebody's going to go review

Chris Romeo:

But my point is, in a project like OpenSSL, which has been funded now... I mean, I remember the days when there were two guys who worked on OpenSSL in their free time, and everybody built their products on top of it. So

Izar Tarandach:

So that means that at some point OpenSSL was a weekend project.

Chris Romeo:

It is, but now it's become... it's got investment, it's got full-time employees. My whole point

Izar Tarandach:

because it was foundational.

Chris Romeo:

Yes. But my whole point here is, Matt, if you try to submit a PR to fix a vulnerability, they're going to scrutinize the crap out of it, because you

Matt Coles:

And they should.

Chris Romeo:

haven't... you're not a known quantity. My point is, I don't think you can sneak a PR past them easily.

Matt Coles:

Good.

Izar Tarandach:

So here's the thing. Why? Because, as Matt said, they have these huge, tested, well-known, respected processes exactly for that threat model, exactly for "what if somebody tries to sneak a patch in here." And that's what makes it different from PyTM, which is developed on weekends. That's the only thing that makes it different, by the way. But the point here is that, if we are honest about it, the processes in place to get a patch in, especially a security patch, in OpenSSL, FreeBSD, Linux, things that we work with every day and that are foundational, right? They are much, much better than in many companies I have seen. In a company, you could get that patch in; many times I have seen places that don't have that in their internal trust model. They could end up with code that, who wrote this? I don't know who wrote it, it just showed up one day, right? But as you said, on OpenSSL, that would be much harder. But then comes the question, and here we have to go to the source of all wisdom, which is xkcd. Remember the drawing it had, with the blocks, and right there at the bottom, somebody in Nebraska tirelessly working on that little block that everybody relies on, right?

Chris Romeo:

Yep.

Izar Tarandach:

So let's go back to Bob and the civic responsibility. Where does that civic responsibility emanate from? Does it

Chris Romeo:

What is civic responsibility? Because when I think of that, I remember only ever hearing it two times in my life: my civic responsibility was to vote, and to sign up for the... what do they call the draft here in the US?

Izar Tarandach:

The reserve service,

Chris Romeo:

Selective Service, which I don't even know if they do anymore. I'm so old.

Matt Coles:

I think that they do.

Chris Romeo:

That, those are the

Matt Coles:

we're old enough, we don't,

Izar Tarandach:

Until 55. But you see, you touched exactly on the two points that basically define civic responsibility, right? You are participating in the social contract that makes this community you live in a community. So you are committed to participating in how it is led and how it defends itself. Without those two things, you probably don't have a community, you have something approaching anarchy.

Chris Romeo:

Mm hmm.

Izar Tarandach:

So, the civic responsibility of somebody participating, if we translate that to somebody participating in the open source ecosystem. Let's forget for a second about the voting, because I suppose you could say that bringing in a component is a form of voting. You are saying: I'm participating in this thing that's called OpenSSL.

Chris Romeo:

Mm hmm.

Izar Tarandach:

But then comes the second part, the helping to defend it. By adding a security patch to something that you decided to adopt, for use, not for development, for use, do you have a social responsibility to defend everybody else who made the same decision?

Matt Coles:

So can, can I answer that with a question?

Izar Tarandach:

Please do.

Matt Coles:

I'd like to look at examples sometimes as models for answering other people's questions, because sometimes it's good to make those correlations. So let me ask you. You were saying: if you make a patch for a vulnerability, should you distribute it back to the community, help the community defend against the thing you found and fixed. And I would ask, bringing that up just a level: which side do you fall on when you find a vulnerability? Do you sit on it as a zero-day, or do you report it responsibly upstream? Ignore for the moment whether you're going to fix it, because you may not have the technical capability to fix it.

Chris Romeo:

I guess

Matt Coles:

But should you report it?

Chris Romeo:

I was assuming we were talking on the side of good in

Matt Coles:

I would hope so.

Izar Tarandach:

Okay, so let's keep on the side of good. Bug bounty. You just found a problem in a third-party component as a participant in a bug bounty for a product. It's not a bug bounty for that component.

Chris Romeo:

You're getting paid. Like, you're gonna register the vulnerability first, and then maybe you should go make a fix after it or whatever. But

Izar Tarandach:

But now, if they have the civic responsibility of pushing a patch for that, you just killed the N other bug bounties that you could run to and get paid for the same vulnerability.

Chris Romeo:

Yeah, I mean, that's a whole other... We can have a whole episode on bug

Izar Tarandach:

No, no, no, it's the same question. Because think: if those guys have a responsibility to patch and solve and distribute, then by pointing out the thing that gets you money, you're basically killing the golden goose. So you have to report to everybody at the same time. And now you have a race to the top of who creates a patch and gets it in. So now you are DoSing the open source component, because they are going to get 20, 30, 100 patches

Chris Romeo:

Hold on, hold on. I want to take what you just said, and I want to throw a curveball here. What if people had bug bounties for open source fixes? Not for breaking, but for building. What if you had a world where you could get paid $2,500 to fix a critical vulnerability in OpenSSL? And you were somehow vetted, like the bug bounty platforms have that vetting, where you can say: I want only people who are vetted, real people, and we know they're not criminals or whatever. Like, why don't we

Izar Tarandach:

Isn't there something like that? Aren't there some projects that do get funding for other people to patch them?

Matt Coles:

Yeah, there used to be. I don't know if they still exist, but I

Izar Tarandach:

I don't know.

Matt Coles:

one or two. I remember a few years ago.

Izar Tarandach:

I would love to see it, as Chris seems to be implying, at a grander scale. Somebody with the resources to help those developers absorb those patches, and test them, and do all the good stuff that OpenSSL does. As an infrastructure, I would love to see that.

Chris Romeo:

I mean, this is back to the breaker-versus-builder challenge that I've been yelling about for years and years now. Like, our focus as an industry is all on breaking stuff, and bug bounties just proliferate that. It's about: how do I break it? Now imagine if you had to submit a vulnerability and a fix as part of the bug bounty. So you can't just break it, you actually have to fix it as well, so that the company could

Izar Tarandach:

Oh,

Chris Romeo:

then take that and apply it, right? Like, for an open source component, but,

Matt Coles:

Yeah. I mean, I think that would be an insurmountable challenge for a lot of people, but, you know, it may improve the quality of findings, right? Because you'd have to really understand the code, or really understand the architecture, to be able to support something like that. But I think that could get pretty challenging. Is it sufficient to drop a threat model? Like, here's a new design, here's how I would fix it from a design standpoint if it's a design flaw, versus actually writing code and putting a PR in place, right? So there are probably some challenges there. But I want to call out one thing really important. Sorry, Izar, I know you have

Izar Tarandach:

No, no, no, no, go for it.

Matt Coles:

But what Bob is then also calling out as civic responsibility is: you make it for altruism. Is altruism the word I want to use? It's where you contribute back to the community at large because it's the right thing to do, not because you get something from it. You get

Izar Tarandach:

And you just got a puppy.

Matt Coles:

well you get benefit from it, but you

Izar Tarandach:

No, you got a puppy. You just got a puppy.

Chris Romeo:

Well, you get the benefit because you made the fix. Like, you're improving your own security because you use the component,

Matt Coles:

But you're also distributing it to others, without expecting anything in return from them. I

Chris Romeo:

We can't go any further until we address the licensing issue, right? Because

Izar Tarandach:

No, no, wait, wait, wait, wait, wait, before we go further than that

Chris Romeo:

they can't do this.

Izar Tarandach:

Before we go further than that: the builder-versus-breaker thing, two points. First, do you really think we have that number of people who are able both to find and to fix stuff? And second, can we have a whole episode on that, please?

Chris Romeo:

Yeah. I'm going to save an answer to that for a future episode, because I think that's somewhere we can dive

Izar Tarandach:

Okay. License.

Chris Romeo:

We've got to deal with licenses here, because one of the challenges is: this all sounds great. If you find a vulnerability and create a fix, it is your civic responsibility to ensure all users of that open source library enjoy the added security. But what about the license? And Matt, you've already touched on some of the license challenges that exist here. Aren't there licenses that open up Pandora's box if I do submit a security patch? I don't know. It seems like there's a licensing challenge here, where some licenses are just not going to be friendly to my company fixing a security issue. Okay. That

Matt Coles:

is a really important statement. There are some licenses where, if you make a fix, if you make a change, you need to distribute it back upstream, right? GPL, I think, is the big one here: if you make a fix, you have to distribute the code change, per the license. If you're under BSD and you make a code change, BSD doesn't care, right? And the MIT license, I think, doesn't care that you make a code change: use it as is, free to modify in any way, shape, or form. So there are some licenses that require you to push upstream, and some that don't care whether you push upstream. And then there are some that are copyleft, and some other license types in there, I think, that actually expose more of your IP than just the change that you make. I'm not an expert in licenses; we should find a lawyer who is. And that's, yeah.

Chris Romeo:

The one I was concerned about was the exposure. Izar, go.

Izar Tarandach:

Two things. Well, I'm very "two" today. Push upstream or make it available. That's one.

Matt Coles:

Fair enough. Yes. Yeah. Yep.

Izar Tarandach:

and second, because those are different.

Matt Coles:

They are different, yeah. Sorry, I used the wrong term there. What,

Izar Tarandach:

Are you guys seeing this? Look at this! Ha ha ha! I just went like this with my finger, and... what, what the

Chris Romeo:

You're using a Mac? It's a Mac feature. I'll tell you how to turn it off later, after we finish.

Izar Tarandach:

I HATE MACS!

Chris Romeo:

I love it. There's,

Izar Tarandach:

I went like this with my finger.

Chris Romeo:

There are various things with one of the new releases of macOS where gestures get translated into

Izar Tarandach:

I'll give it a gesture soon.

Chris Romeo:

Let the record show that Izar is now doing a gesture that is not resulting in any type of, uh, balloons flying around

Izar Tarandach:

Balloons, yeah. Anything coming out of this? Anything? Anything? No? Okay.

Chris Romeo:

I feel...

Matt Coles:

Do, Do,

Chris Romeo:

yeah, I feel like you had a point

Matt Coles:

and see.

Chris Romeo:

that. There was a point there somewhere. You said two things. The first one was the upstream versus

Matt Coles:

versus

Izar Tarandach:

make it available.

Matt Coles:

right?

Chris Romeo:

what was the second one?

Izar Tarandach:

I forget. I got ballooned out of it.

Chris Romeo:

Uh, that's all right.

Izar Tarandach:

This is ridiculous.

Chris Romeo:

All right. It'll come back to you once we start talking about something else. But I think it sounds like we're all agreeing, though: the licenses could be a challenge,

Matt Coles:

Could

Chris Romeo:

for, with this

Izar Tarandach:

No, no, no, now I remember, now I remember. The IP thing. Yeah, the IP. If you are putting IP into patches that you would send upstream or make available, something is very wrong.

Matt Coles:

Sorry. When you create a patch, that is your IP, that is intellectual property. You have taken your brainpower and written it into code.

Izar Tarandach:

Yes, because it's a line that one of your developers wrote. But is it IP?

Matt Coles:

Does it count as IP? Is it trade secret versus not? Yeah,

Izar Tarandach:

matters? Yeah, right?

Chris Romeo:

So there's secret versus not, but there's also valuable versus not.

Izar Tarandach:

yes, things that matter and

Chris Romeo:

IP.

Izar Tarandach:

What I'm saying is, anything that you would write into a patch that you send upstream should, by its own nature, be something that you could print out and put on a billboard in the middle of the street,

Chris Romeo:

I guess... is it a competitive advantage to have a patch to a vulnerability that you created yourself?

Matt Coles:

I would

Izar Tarandach:

only if you're in the business of exploiting other people.

Chris Romeo:

Which... I almost said

Matt Coles:

Well, or if you want a product that works. Let's say for a moment that that patch closes a hole that causes a permanent denial of service, and your product will work and everyone else's won't, right? I mean, that's a competitive advantage,

Chris Romeo:

Which would not comply with Bob's statement here, though. Like, you would not be being a good member of the community

Izar Tarandach:

are not exercising your civic responsibility.

Matt Coles:

right?

Chris Romeo:

Yeah, I mean, I don't know. I think that's a good place to leave this one for today. I want to reiterate something Izar said at the beginning: we love CISA, we love what they're doing. So we're not attacking this statement. We're just trying to unpack it, understand it, think about it, and ultimately help more people hear what Bob put into this post, because we thought it was a really powerful statement, and there was a lot to unpack there. As you saw, it took us 40 minutes to unpack it. So, once again, thanks, Bob, for putting that statement out there, and thanks for all the things you're doing at CISA. We're definitely big fans of secure by design, secure by default, and all the work that's going on there. So, thanks, folks, for tuning in to another episode of the Security Table, and stay tuned for whatever the heck we decide to talk about next. Ha ha ha

Izar Tarandach:

for Izar and more Mac wonderment.

Chris Romeo:

ha!
