Brief summary
As the infamous SolarWinds attack showed, it's no longer sufficient just to write secure code: you need to understand the security risks throughout your entire software supply chain, whether that's compilers, containers or the tools used to manage deployment pipelines.
Full transcript
Mike Mason: Hello, and welcome to the Thoughtworks Technology podcast. I'm your host, Mike Mason. Today's podcast is going to be about securing the software supply chain. I'm joined today by two guests, Mike Ensor, who is a solution architect at Google, and Jim Gumbley, who is a tech principal at Thoughtworks with a focus on cybersecurity. Hi to both of you, and can I get you to just introduce yourselves briefly, Mike?
Mike Ensor: Sure. Hey, thanks for letting me come in and chat today. My name is Mike, I work at Google. I work as a solution architect, which allows me to go in and talk to a lot of different companies in a lot of different situations, and then try to capture that and turn it back around into advice for many of our other customers at the same time. The most recent thing I've been working on here has been tackling the supply chain and how we secure it, especially given some of the more recent breaches and news that we've seen.
Mike M: Jim.
Jim Gumbley: Hey, Mike, and Mike, thanks for having me. Yes, I'm Jim, I'm a tech principal at Thoughtworks, London office. My focus is very strongly on cybersecurity. I work with our clients that have higher-end risk profiles, healthcare, finance, government, that kind of thing, to secure software, really. That's my focus.
Mike M: I'm going to throw it back to Mike because you said you've been working on this securing the software supply chain. What is the software supply chain, and what do we mean by securing it?
Mike E: Sure. Basically, earlier this year, I published a paper that talked about the supply chain and where we have vulnerabilities in it. The supply chain is really about whether, when you go to build software, you have trust across the entire path, from the moment you start coding all the way through to the moment you actually produce something and push it out to wherever it gets consumed, whether that's production, another customer, or another program.
Securing that supply chain is all about how you manage and govern that trust, how you ensure provenance and know where your software comes from, what your software is doing, what vulnerabilities you have, and then being able to share that both upstream and downstream.
Mike M: What's interesting about the software supply chain is people think a lot about the code that we write, but that's only a small part of the supply chain. We've seen some high-profile stuff in the news recently, like the big SolarWinds hack. I think it was labeled as a supply chain hack because it wasn't directly about the code, it was more about the path from the intended code all the way through to production systems.
Mike E: Yes, we have to think about the supply chain as not just the software you write, although that is a good portion of it. That's where I spend a lot of my time: coaching and helping developers and engineers understand aspects of security as applied to the software. In that particular case, there were vulnerabilities in the CI/CD pipeline, in the tooling that actually does the build processes. If you think of the entire continuum of the supply chain, you have vulnerable points pretty much all the way through. It's not just the software, it's the tools that we use to build. It's the tools that are used to build the build tools, et cetera.
Mike M: The tools that we use to build the build tools, this is getting to the, "Hey, I want to run GCC on an old machine, and it has to bootstrap itself five times from whatever C compiler was originally on there." I'm not sure I'd thought of it in that wide a sense. Jim, is this something that you run into with clients as well, the need to secure more than just the codebase?
Jim: Yes, absolutely. It's really multifaceted. I think the word supply chain makes you think it's one thing and you can maybe tick a few boxes and it'll go away, but it's a big hairy monster, really. There are a couple of reasons. The software is updated: not only are we pulling loads of dependencies in from left, right, and center, but we're also updating our software, we're making changes to it. We need to make sure that the changes are all trusted as well. You can think of it like a tree: for every change that goes to production, what all goes into that change, the number of different things that you're trusting, from laptops to your suppliers. There's a classic XKCD comic.
I don't know whether folks remember it, it's called "Dependency". It shows that all modern software is actually dependent on a library that has been maintained by some random guy in Nebraska since 2003. Yes, it's a huge attack surface.
Mike E: It's funny that you mention that. I had a recent conversation with some folks at GitHub. They went through and did some data analysis on the top 1,000 open-source projects. Just take a guess: how many people do you think have contributed to the top 1,000 open-source projects? What does that tree look like? It turned out to be 76,000 people. Do you trust all 76,000 people who were in that dependency tree?
You've got dependencies, you have transitive dependencies, your dependencies have their own transitive dependencies, and you keep following that down through those projects. There are a lot of people involved in the development of all of those. That's where we have to start. When you think about the supply chain, it backs all the way up to that point. Another aspect to think about is where was this built? Mike, you mentioned the compiler case: "Hey, maybe I have to go in and change GCC."
Way back, I don't know, it was somewhere early on, the original writer of a compiler actually put in code that changed the compiler itself. That was one of the earliest examples of a supply chain hack. Is your compiler even secure? You have to think about that. You can get a little tin-foil-hat about this and go a bit too crazy; you're going to have to place your trust somewhere. But when we think supply chain, it can go all the way down to the compiler, or even to the firmware that a compiled application runs upon.
Mike M: What are the most common, or, I don't know, where do you start? For example, thinking about open-source libraries, I remember there was something fairly recently where someone did a deliberate name collision: they had somehow figured out internal library names that people were using at various organizations, Apple, Netflix, that kind of thing. They created public repository versions of those with a slightly higher version number.
What it did was it actually tricked people's internal build systems into downloading that public, unofficial version of the thing with the same name. I think these were not actually malicious versions, it was just a proof of concept kind of thing. Presumably, there's something there around looking at your artifact repositories and where you're pulling libraries from. Is that the first place to start, or where do you start on this?
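To make that dependency-confusion scenario concrete, here's a minimal sketch of the kind of check a team could run. The internal package names are hypothetical, and PyPI is used only as an example of a public index:

```python
# A minimal sketch of a dependency-confusion check: for each internal package
# name, ask the public index whether a package with the same name exists there.
# The internal package names below are hypothetical.
import urllib.request
import urllib.error

INTERNAL_PACKAGES = ["acme-billing-client", "acme-feature-flags"]  # hypothetical names

def exists_on_public_index(name: str) -> bool:
    """Return True if the public index (PyPI here) already serves this name."""
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

for pkg in INTERNAL_PACKAGES:
    if exists_on_public_index(pkg):
        print(f"WARNING: '{pkg}' also exists on the public index; a resolver that "
              "prefers the higher public version number could be hijacked.")
```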
Mike E: I would say that, obviously, taking a moment to look at your dependencies and where you're pulling from, and starting to look at the governance-- I'm sorry, the provenance of your own dependencies, that's a great place to start. It's not the easiest thing to solve; it's probably one of the more difficult areas. At Google, we've been pushing this open-source concept called SLSA, S-L-S-A.
Basically, the idea is that when you go to use something, you need to validate and verify that it's signed, that you can verify a signature on the particular artifact you're working with. That's the first level. It goes much deeper into how you share that later on when you get to levels two and three, but at its most basic, it's looking at a signature. Back in the day, and it's still true today, the whole Maven Java dependency ecosystem was one of my first introductions to the idea that there is security in the supply chain, because I had this crazy idea that I just wanted to publish a Java library to Maven Central so I could use it everywhere.
I could put it on my resume and make myself look cool. It was 2010 or 2012 or something. I ran into the first hurdle: "Well, you don't have a signature." I was like, "I don't know what that signature is." That started me down the path of understanding asymmetric keys and all the rest of it, right? Java had some of this ecosystem built in, but a lot of other programming languages' package management tools don't necessarily have digital signatures. Then there's the validation: how do you even obtain the key, and publish that key out, so you can verify those signatures?
That's where a lot of the industry really needs to get to: can we somehow collectively put together a system where you can publish your public keys, and I can do validation against them in my supply chain before I actually start to use things? It becomes a big snowball. To answer your first question: yes. First, go in and make sure you know where you're pulling from.
If you can limit the repositories that you're working with, either via a proxy or just very directly saying, "Here's my allowed list of repositories to pull from," that is at least a very good start.
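As a rough illustration of those two starting points, an allow-list of repositories plus signature verification, here's a minimal sketch. The hosts, file names, and the assumption that a detached GPG signature is published alongside the artifact are all illustrative:

```python
# A minimal sketch: only pull from explicitly allowed repositories, and verify a
# detached signature before trusting an artifact. Hosts and file names are hypothetical.
import subprocess
from urllib.parse import urlparse

ALLOWED_REPOSITORIES = {"repo.maven.apache.org", "artifacts.internal.example.com"}

def repository_allowed(artifact_url: str) -> bool:
    """Accept dependencies only from hosts on the allow-list."""
    return urlparse(artifact_url).hostname in ALLOWED_REPOSITORIES

def signature_valid(artifact_path: str, signature_path: str) -> bool:
    """Verify a detached signature (e.g. the .asc file published alongside a
    Maven artifact) against keys already imported into the local keyring."""
    result = subprocess.run(["gpg", "--verify", signature_path, artifact_path],
                            capture_output=True)
    return result.returncode == 0

url = "https://repo.maven.apache.org/maven2/org/example/lib/1.2.3/lib-1.2.3.jar"
if repository_allowed(url) and signature_valid("lib-1.2.3.jar", "lib-1.2.3.jar.asc"):
    print("dependency accepted")
else:
    print("dependency rejected")
```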
Jim: I've got an insight to add to that, Mike, which is: limit your dependencies. The fewer dependencies you've got, the better. It's not always possible, but there's the classic one, I think it was in Ruby or Node, there was left-pad.
Mike E: It was npm, JavaScript, yes. I jokingly went and built right-pad right after that, but that's a whole other story.
Mike M: Tell me about this. I don’t understand the reference. What was this one?
Jim: This wasn't even a malicious attack. This was an accident, but there was a library that did something very, very simple, essentially padding a string. It was depended on by, I think, tens or hundreds of thousands of other libraries. The developer just made an error, basically, and effectively took the internet out for the day.
Mike E: They pulled the library off of the public repo and tens of thousands of systems went down because of that, because it was such a small thing, I think it was 20 or 30 lines of code. It shows how critical that one dependency just happened to be.
Jim: SLSA I think is great, actually, and I think it's commendable that Google are putting some shape into that space. It's fantastic. My observation is that it's not a simple problem to solve. It's a little bit like setting up a social network: you can build a social network, but you need everyone to be on it for it to be a valuable thing. SLSA is an empty social network at the moment; the non-trivial problem is getting adoption across the millions of dependencies that are out there.
Mike E: I had a discussion about this just the other day with the product managers working specifically on this. It's more that you first capture your own organization: get yourselves to SLSA Level 1 within your own organization alone. When you have dependencies, you're digitally signing them, you're validating them in your own CI/CD toolsets or your build process, et cetera. Once you have your own level of established trust that you can pass around and validate, then it starts becoming, how do you share that with other groups?
I brought up the idea that if you were to have some future state where you can publish your keys out so you can validate everybody else's signatures on their containers and so on, that sounds really awesome. I think what it really comes down to is that it quickly becomes a honeypot and a target for malicious attacks: if all the keys are in one spot, well, if I can get in there, I can update a key.
Now I can sign my malicious code, push it in, and it'll look as though it came from somewhere else. That kind of world-wide network doesn't exist yet. It might be really challenging for us to get to that point, but we might end up seeing much more of, I would say, little consortiums of groups that produce these and then share the trust amongst themselves.
Mike M: I know you talked about policies around the dependencies that you can have and the artifact repositories you're looking at, that kind of thing. In the past, I've been on the receiving end of architecture groups who were approving all of the libraries that we used, and it was this slow and onerous, not very fun process. Are we in danger of having the same thing from the security side? At least it's justified from a security perspective, but is this relatively streamlined, or is this adding steps to what we do?
Mike E: I was going to make that comment earlier as well. I feel like when you get into limiting or having an approval board for the particular dependencies you work with, it really slows down development. Quite honestly, what it really does is just piss off developers: "Why can't I use whatever I want?" A lot of that comes down to education. That was one of the points I wanted to make in this conversation: security shouldn't be presented as rigid, strict processes that are put in place.
It should be: let's educate people on why we need to do these things, so that you can start doing them yourself as you understand them. When you look at these libraries, the education should be more: do you know where this library is coming from? Have you validated it? Have you looked at it? Do you know how frequently it's being updated? You can go through this very simple checklist as a developer, and if those look good, then you try to bring it in. Then, if you need to, you can start putting policies around that, where you work with your artifact repository to say, "If I don't see an update in the last year, it's automatically rejected," and obviously digital signatures and things like that would be great.
You can start putting some limits around that. I don't think it's inappropriate or a bad idea to have a proxy for your dependency management tools as well. You can reach out to the general public and pull down from the public repositories into your proxy, but now, on your proxy alone, you limit your development to only work with that proxy. That at least gives you a point where you can go in and stop something, should you need to, and at least identify it.
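For a sense of what that kind of automatic rejection policy could look like in practice, here's a minimal sketch. The allow-list, the one-year threshold, and the idea that release metadata comes from your artifact proxy's API are all assumptions for illustration:

```python
# A minimal sketch of a dependency policy enforced at the artifact proxy:
# reject anything not on the allow-list or without a release in the last year.
# Package names and thresholds are hypothetical; fetching the release metadata
# (e.g. from the proxy's API) is assumed to happen elsewhere.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=365)
ALLOWED_PACKAGES = {"requests", "flask"}  # hypothetical allow-list

def dependency_allowed(name: str, latest_release: datetime) -> tuple[bool, str]:
    if name not in ALLOWED_PACKAGES:
        return False, f"{name}: not on the approved dependency list"
    if datetime.now(timezone.utc) - latest_release > MAX_AGE:
        return False, f"{name}: no release in the last year, automatically rejected"
    return True, f"{name}: ok"

ok, reason = dependency_allowed("requests", datetime(2020, 1, 1, tzinfo=timezone.utc))
print(ok, reason)
```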
That proxy also gives security an opportunity to say, "Let me focus on those dependencies in there." Really focus your CVE tools, your scanning tools, on that, and get your provenance and so on from that central location. I do remember back in the day when every single time we went to use a library, it had to be validated by a security team. I know this is a generalization, but a lot of the time, security organizations are not in touch or in tune with software development.
They look at it more as a series of checklists that they want to go through, "It doesn't have this, it doesn't have that," but they don't understand the context in which you're developing the software. They're more likely to say, "Well, it's not on this list. It's not 5.3.0.2. You can't use it," and then throw it out without understanding how and where it is being used.
Mike M: Well, of course, that's one of the big shifts in software in general over the last decade or so, the shift to continuous delivery and releasing much more frequently. That's part of the problem with a traditional approach to security: it simply can't keep up with a team saying, "I'm going to deploy this thing five times a day," because the reason the team can deploy something five times a day is that they have all of this automation around it, including testing that the thing is still working. Presumably, security techniques also need to move into the realm of automation. Is that reasonable?
Mike E: Absolutely. I would say the key underlying thing, which you didn't quite mention, is that the CI/CD pipeline you're building that allows you to push 5 times a day, or 10 or 20 or whatever, works because you have inherent trust in that system. You continually keep refining your continuous delivery process. Every time there's an error, a bug, something that doesn't work right, you go back, you refine it, and you fix it. You're effectively trusting that pipeline more and more.
What happens when security is not included? Then you get to the end and that trust runs out. Security is not a part of the process, therefore you have to do the hard reset back to the beginning if there's any type of vulnerability in it. Obviously, anybody who has touched that code up to that point in time is going to get frustrated. Bringing security in as early as possible and making it part of that software development and decision-making throughout bakes that trust into your pipeline, or into your delivery process. I shouldn't say pipeline, because I believe that we should now start using multiple pipelines, but that's another statement.
Mike M: Well, I think, Mike, you've definitely made the point that the pipeline is something that needs to be secured in and of itself, right? We've just talked about quite a heavy problem, which is how do you make sure your dependencies are up to snuff, and all the developers who work on them, all the build systems they use, all the laptops they use, and the rest of it, which is intractable. If your team has got a build server, then you can implement access control on it, and that's just table stakes.
You need to be doing the basics of securing your pipeline because otherwise, that really is what an attacker is going to take advantage of. Commonly, pipelines have got permission to do things in production. It's an obvious point of attack. Actually locking down your pipeline becomes really quite important.
Mike E: Actually, I have a couple of points there. One, you're absolutely correct, and I can share a story in a moment that speaks to this. But I also believe that there's more than one pipeline that should exist within a system. You can have a general, basic pipeline that builds your artifact. If I think continuous delivery, my goal on the continuous integration side is getting to an artifact that I can release at some point and give a version.
Then the deployment side is taking that version, with some context, and deploying it and using it somewhere. Those are two separate actions, even though they're often automated together. The first portion can run in one type of pipeline, whereas the deployment can be separate. You can give different login credentials, or different visibility and role access, to those different types of pipelines. Then even within that, there's a gradient: if it's going to development, I can give more access; if it's going to production, I give less, et cetera.
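As a rough sketch of that separation, the shape would be something like the following, where the registry host, cluster contexts, and version scheme are all hypothetical and each function is assumed to run under its own narrowly scoped credential:

```python
# A minimal sketch of keeping build and deploy as two separate pipelines.
# The build side only needs permission to push to the artifact registry; the
# deploy side only needs permission to change its target environment.
# Registry host, image name, and environments are hypothetical.
import subprocess

REGISTRY = "registry.internal.example.com/app"

def build_pipeline(commit_sha: str) -> str:
    """Continuous integration: produce and publish a versioned release candidate."""
    version = f"1.4.0-{commit_sha[:7]}"
    image = f"{REGISTRY}:{version}"
    subprocess.run(["docker", "build", "-t", image, "."], check=True)
    subprocess.run(["docker", "push", image], check=True)
    return version

def deploy_pipeline(version: str, environment: str) -> None:
    """Deployment: take an existing version plus its target context and roll it out."""
    subprocess.run(
        ["kubectl", "--context", environment, "set", "image",
         "deployment/app", f"app={REGISTRY}:{version}"],
        check=True,
    )

# e.g. the build pipeline runs on every merge; the deploy pipeline is a separate
# job with different credentials per environment:
# version = build_pipeline("4f2a9c1d")
# deploy_pipeline(version, "development")
```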
The other thing to think about is that a lot of these pipeline tools now use containers. Have you validated and built your own containers for your build tooling, rather than just using open-source versions of everything from Docker Hub or somewhere like that? It's very easy for someone, whether malicious or not, to inadvertently push something to a public repo and eventually have it get picked up. Maybe it's in Alpine or BusyBox or whatever a lot of developers use in the background for that CI/CD pipeline.
You have to make sure that those are actually coming from where you want them to, so you don't inadvertently add a malicious actor into your pipeline without realizing it. The third part I was thinking of is that since everything is code, and pipelines are code, it's a great way to automate our process and get better repeatability and reusability. At the same time, since it's actually code, you can run software tools against your pipeline code itself: are you validating your pipeline?
Does your pipeline fit a specific format? Does it have certain types of tags? Does it have code analysis, et cetera? It's code, just like our infrastructure is code. Anything that's code should run through this exact same type of process we're talking about: you want to validate that it is what it says it is, and have it complete a pipeline itself, with some trust at the end of it.
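Here's a minimal sketch of that pipeline-as-code validation idea, assuming a GitLab-style CI file, the PyYAML library, and hypothetical trusted registry names:

```python
# A minimal sketch of validating pipeline code: scan a CI definition for the
# container images it uses and reject any that aren't pulled from a trusted
# registry. Assumes PyYAML is installed; the trusted registries are hypothetical.
import yaml

TRUSTED_REGISTRIES = ("registry.internal.example.com/", "gcr.io/my-team/")

def collect_images(node):
    """Recursively yield every 'image:' value found in the parsed pipeline."""
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "image" and isinstance(value, str):
                yield value
            else:
                yield from collect_images(value)
    elif isinstance(node, list):
        for item in node:
            yield from collect_images(item)

with open(".gitlab-ci.yml") as fh:
    pipeline = yaml.safe_load(fh)

for image in collect_images(pipeline):
    if not image.startswith(TRUSTED_REGISTRIES):
        raise SystemExit(f"Untrusted base image in pipeline: {image}")
print("all pipeline images come from trusted registries")
```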
Mike M: Jim, you work with a lot of high threat organizations. You mentioned healthcare and government and stuff like that. Is this thing the same for those organizations? Obviously, you can't tell me everything but are there extra things that those folks are doing, or is it the same as we would do cross-industry?
Jim: I agree. Every situation is different, but the value of intellectual property, financial flows, personal data, it's targeted. You only have to read the news and read some of the technical accounts of what malicious actors are doing. It's really, really targeted stuff. I was reading around SolarWinds, how the attackers were listening for MSBuild processes and then poking into them, finding a Visual Studio solution, looking for a particular line of code in order to inject their software at a particular place.
What I hear from talking to folks in different industries is that some of the recent breaches are making organizations across the board realize that things that were assumed to be nation-state capabilities several years ago are now things that businesses, particularly multinational businesses and businesses that safeguard large amounts of sensitive personal data, need to be considering. Absolutely need to be considering.
Mike M: Let's talk about that for a minute. We've mentioned SolarWinds a couple of times. I presume people will be roughly familiar with that. The noise level on this has gone up quite a lot recently, as you were alluding to, Jim. SolarWinds was the big, I don't know, claimed-to-be-Russia hack of US infrastructure. On the one hand, the headline was "bumbling intern leaves password blank on Jenkins server" or something, but Jim, you were saying there was actually a lot more to it than that.
I think recently we saw a US pipeline go down due to ransomware, and this was a big deal: people were left cold and without fuel to heat their homes. Clearly, this is ramping up in terms of importance. There was even a White House executive order on it. Mike, can you tell me more about that executive order and the background there?
Mike E: The executive order: we can say Google was one of the contributors, we helped consult on it, and obviously there were many others who did as well. Really, what it says is that if you have a contract with the US government, then you need to follow a series of rules that are now being put in place in order to try to prevent some of those SolarWinds-style attacks.
The SLSA stuff that we've been talking about, a lot of that has been baked into that executive order to really push anybody who works on governmental software to understand the provenance of what they ship, and then actually start applying digital signatures to the artifacts they deploy, so we can validate that they came from where they claim to have come from. I think SolarWinds was the one that hit the news just because it was large, but it's not like it was the very first one ever. There have been supply chain hacks that have been pretty pervasive.
I was going to mention a small personal story that explains why I'm so interested in going down this path of making this secure. I cannot confirm nor deny that maybe, one Friday at 3:00 AM, a phone call came in saying, "Hey, the entire system's down. Can you quickly fix it?" My ask was, "Hey, I need production access to a database." They're like, "Well, we can't give you that." I just changed the CI/CD pipeline to print out the production password to the database, logged in, fixed the problem, took out my change, and went back to bed.
If I had that level of control back in the day when I was doing these things, that's the kind of thing we have to be protecting against as well. Part of the supply chain process, just as Jim was saying, is giving the right, appropriate access to your pipelines, but also validating your pipelines to make sure that they're doing what they need to do and protecting their output, in terms of blocking out sensitive information. Those are very, very easy things. That was the mid-2000s when I did that. Supply chain issues like this have been happening for 30, 40 years. It's just now starting to get attention.
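One small, concrete version of that "protect the output" idea is masking known secret values before they can reach a build log. A minimal sketch, with hypothetical environment variable names for the secrets the CI system injects:

```python
# A minimal sketch of redacting known secrets from a pipeline's log output.
# Pipe a build step's output through this filter; the variable names are hypothetical.
import os
import sys

SECRET_VARS = ["DB_PASSWORD", "API_TOKEN"]  # secrets injected by the CI system

def redact(line: str) -> str:
    """Replace any known secret value appearing in the line with a mask."""
    for var in SECRET_VARS:
        value = os.environ.get(var)
        if value:
            line = line.replace(value, "[MASKED]")
    return line

for raw_line in sys.stdin:
    sys.stdout.write(redact(raw_line))
```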
I do think that somewhere around 2015, 2016 is when the DevSecOps movement started, the idea that maybe we'll start bringing security back in. I used to say around that time that there were two entities in the world that would stop a software development organization from pushing out a change. It's either legal saying, "Hey, we can't do X, it's illegal," or it's security saying, "Well, we can't do that because it's going to compromise something." Those are the only two. They held a big stick at the very end of all of this.
Hundreds of developers putting all this stuff together, and at the very end of the day, one person says, "No, that can't go." That was very, very frustrating. The DevSecOps movement was starting to say, "Let's bring those decision-makers back in." If I think of the progression from CI to CI/CD to DevSecOps, the only thing left is to start bringing legal into these decisions very, very early as well. It'll be a while before that happens.
Mike M: Funnily enough, throughout my career, I'd always experienced security as being the gatekeeper after the fact, people with batons who'd come and tell you no. I worked with one client where their security expert was much less black and white than that and was actually more like a lawyer. Rather than saying, "You can't do that," they'd say, "If you do this, this is the risk. We need to have a conversation about risk and decide whether we want to take this on or not."
Mike E: That's great. At the end of the day, security is still just a business decision. You're still coming and saying, "Do you want to secure this?" Well, it's a business. If the cost outweighs the risk, maybe you don't make that investment. Compliance is still just a business decision. Sure, if you don't adhere to it, you might go out of business, but it's a business decision you have to make. If we think about the risk rather than a yes or no, what is the risk, the cost, and the reward of making that decision to use or not use or improve or whatever, it's still a business decision.
Jim: I think it's interesting to hear Mike talk about legal, actually, because one change we've seen in the UK, and in Europe generally, is data protection legislation, the GDPR. That's changed behavior with clients, particularly the clients in London that I'm close to, because there's real liability. It has made that pivot to being risk-based. You can't prevent every bad outcome, but these decisions are increasingly made on a risk basis, which can only be a good thing if you ask me.
Mike M: Well, let's talk about that liability thing. I remember reading a lot from Bruce Schneier, who's one of the industry luminaries on this. He publishes his Crypto-Gram newsletter with interesting security stories in it, and he's long maintained the position that industry is only going to actually start securing things when there's liability. If there's liability, maybe you need to get insurance for it, and if you need to get insurance for something, maybe you need to actually show that you're doing something about it in the first place. Certainly, your comment about GDPR and liability seems to indicate that that is helping people move in the right direction.
Mike E: I completely agree. Right now, for a lot of these organizations, the largest liability risk is to their brand, unless they hit a specific compliance rule. We do need to start increasing those compliance rules and getting some real liability in place, whether that's laws that come in or, as you mentioned, maybe it becomes easier to be sued for these things and therefore insurance drives it.
Either way, at some point, we've got to start making that push. I truly believe that 99% of the people out there doing software development are not malicious, and we often don't wear the hat of a malicious attacker or a bad actor in the process. We just don't think about it. Supply chain attacks, I think, have become more prevalent now because attackers started thinking about this 5, 10 years ago and really started trying to exploit those holes, because we as developers don't think about them.
Part of bringing the supply chain up and really talking about it now is that it's our responsibility as developers to start thinking about security from day one. From the moment you actually put your hands on the keyboard, think about how somebody might exploit what you just wrote, or how they might use it to a different advantage or in a different way. That will help you think about how you develop it, and that should help de-risk it. I really don't think most developers think that way.
It's only more recently, with stuff like this podcast and all the news that's going out, that there's been a reminder to all of us as developers that we also play a major role in that security process and can influence it significantly. Then, to all the security folks who are out there, the CSOs and whatnot: use your power to come in and influence and coach and teach and train and change the behaviors of the development teams.
You're going to get much better outcomes for your supply chain that way, rather than saying, "Here's a baton; if you get to the end, I'm just going to whack you." That's not what I want. Get in there as early as possible and coach and teach and change those behaviors, so it doesn't become a really restrictive development organization, which is how you start losing developers. A lot of this is education and behavior change, and don't punish people if there are security vulnerabilities.
Teach; go back and treat this just as any other Agile software development process out there. Collect your feedback, write your tests to make sure you're not making that same mistake again. Do blameless post-mortems, get the fixes out, and keep going down the road.
Jim: I agree with all of that. I think that incentivization is a difficult challenge with security. Any organization will have pressure to deliver value, and often that's a business outcome rather than a security control. At the end of the day, I think the threat landscape is changing; you just have to read the newspapers, or whatever the internet equivalent of that is, to observe that. I think there's an intrinsic motivation with developers, maybe I'm being a bit idealistic, but I think developers want to get it right at the end of the day.
I think the more we can make it clear what good looks like, because security is part of that, the more we can make it clear what the most important things to do are, what has the most impact in terms of securing software, the more developers will do it. I can see that happening in the industry, actually, which gives me some hope.
Mike E: That gets to one of the things I most tried to stress in my paper, which is that you want to give early, fast feedback. You want policies in every single tool that you use. If you target the source code itself, think about this as a process in which you're building: source code management, linting, all of that happens at the source code level. Then you talk about creating the binary, releasing the binary, et cetera, and as you go further up the chain, changes and fixes become more and more expensive to make.
Just like any continuous delivery process, when you bring security in, get it in as early as possible. In fact, I would love to see every toolset have some sort of policy-as-code capability where you can have thresholds, allow lists, automatic rejections, and so on, controlled by an outside group or another entity within the organization that owns those rule sets. It would be great for every tool to have that, give that feedback, and preferably give that feedback with some sort of remediation steps.
If my linting failed, it shouldn't just say, "Hey, you checked in source code that didn't look good," or whatever. It should give some sort of notification: "This line, this thing, this policy; go read the policy if you need to, but here's what you need to do." At Google, that's how a lot of our software and our processes work: when something fails, it gives you very, very specific context about where to go to read the policy, why it failed, and what some remediation steps are.
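Here's a minimal sketch of what that policy-as-code feedback could look like in a pipeline step. The severity threshold, policy URL, and scan-result format are all hypothetical; the CVE shown is just the well-known Log4Shell example:

```python
# A minimal sketch of policy as code with remediation-oriented feedback:
# a centrally owned ruleset that a pipeline step evaluates, producing failure
# messages that point at the policy and at a concrete fix.
POLICY = {
    "max_cve_severity": 7.0,  # fail on anything rated High or above
    "policy_url": "https://wiki.example.com/security/policies/dependencies",  # hypothetical
}

def check_scan_results(findings: list[dict]) -> list[str]:
    """Return a remediation message for every finding that breaks policy."""
    failures = []
    for finding in findings:
        if finding["severity"] >= POLICY["max_cve_severity"]:
            failures.append(
                f"{finding['package']} {finding['cve']} (severity {finding['severity']}) "
                f"exceeds the allowed threshold. Upgrade to {finding['fixed_in']} or later. "
                f"Policy: {POLICY['policy_url']}"
            )
    return failures

failures = check_scan_results(
    [{"package": "log4j-core", "cve": "CVE-2021-44228", "severity": 10.0, "fixed_in": "2.17.1"}]
)
for message in failures:
    print(message)
if failures:
    raise SystemExit(1)  # fail the pipeline step, as early as possible
```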
In our pipelines, we should be doing that almost immediately. Go work with the CSOs and the security organizations to ask what is required at each of those different stages, and let's fail as early as possible. Then give that context to the developer and have them go fix it without having to wait too long. I know I can look at my own code from three months ago and think, "Man, who wrote this?" Then I quickly look at the log and it's, "Oh, that was me, this is horrible." Sometimes I don't have the same context even eight hours after writing something.
Sometimes you have to really jump back in and get your head wrapped around what you were doing and thinking at the time. That early feedback gives you the best opportunity to make the change as quickly and as efficiently as possible.
Jim: I think, from our intros, all three of us were part of that DevOps movement. God, how long ago was it now? 15 years ago, and the rest of it, really. One of the really exciting bits there was the tools that came in; some of those tools, we'd just never seen anything like them at the time. I think that movement has started a little bit in the security space, but the tools are not mature yet. DevOps wasn't just infrastructure people suddenly learning how to code, it was software developers getting into the infrastructure as well. I'm sure there are a lot of people out there, maybe software developers, who think security is boring, or something like that.
I can see that movement happening. I would just encourage people to get involved and build those tools. Some of the ideas that are going to solve these challenges haven't even been thought of yet. I just really want to double down on that call to action.
Mike E: Completely agree. I saw a really large push in about 2015, 2016 for a lot of these open-source tools, things like Gauntlt and BDD-Security and a lot of those. Then a couple of years later, they kind of just died, and we haven't seen a resurrection of them yet. There have been a few other tools just starting to come out. I agree, it's a call to action to all of us as developers. Remember, we all have the responsibility to make sure things are secure.
If you've got something that doesn't work the way you want it to, or there isn't a tool that fills the gap you need, go out and build it, share it, and get other people to contribute to it. My only ask is, again, make sure you add policy as code to those toolsets so that we can configure them and change them in a pipeline, or inherit them.
Mike M: Well, great. I'd like to thank Mike Ensor and Jim Gumbley for being my guest on the podcast today. Mike, where can people find you on the internet? We'll put some links in the show notes but if you can let people know.
Mike E: Sure. You can find me on LinkedIn as MikeEnsor, all one word. I'm also on GitLab; that's where most of my contributing goes. I'm a big GitLab fan, so you can go find me there and find any of the projects that I work on. Just Mike-Ensor, and, of course, Twitter, I think. You can find me on those. [crosstalk]
Mike M: Excellent. How about you, Jim?
Jim: I'm on Twitter, though I suppose my sense of humor is sort of take it or leave it. I'll also plug my article on Martin Fowler's website if people want to read about threat modeling. I put a bit of energy during lockdown into writing it. Please take a look.
Mike M: Mike, Jim, thanks very much. Thanks for listening, and we'll catch you on the next episode of the Thoughtworks Technology podcast.