
May 26, 2021 • RBS

Categories: Videos

Dan Nurmi, Co-founder and CTO at Anchore, joins Jake Kouns, CEO and CISO at Risk Based Security to talk about containers, how shift-left practices can improve the DevSecOps process and defend against supply chain attacks, as well as President Biden’s latest cybersecurity executive order.

Being relatively new to the industry, containers have proven valuable to developers, allowing them to execute complex applications regardless of the environment. Their ability to enable reproducible builds and unique digests allows for unparalleled flexibility during application development. However, those in the security industry understand that containers add a new dimension of risk to an organization, especially when teams are unfamiliar with the technology.

How can security teams make better risk decisions when it comes to containers? For that answer, it helps to understand what they are and how they fit into the CI/CD process. Join us for this informative episode of The Right Security where we dive headfirst into this emerging technology.

The Right Security

In The Right Security, join leaders and veterans in the security industry, as we tackle the biggest issues impacting organizations today.

Check out The Right Security series on YouTube, and subscribe to the Risk Based Security channel to see new episodes in your feed.

Show Notes

0:40 – Welcome and speaker introductions
1:46 – What the heck is a container?
5:25 – What is Docker and are there other types of containers?
7:11 – Container orchestration systems and what they do
9:34 – The CI/CD process and how it works with containers
11:26 – DevSecOps and containers
14:24 – The security benefits of shift-left
19:12 – How shift-left can defend against supply chain attacks
22:40 – How to start implementing shift-left practices
25:55 – Executive Order on Improving the Nation’s Cybersecurity
28:28 – Are containers “tech-only”?
30:44 – Anchore’s story
34:47 – Anchore Enterprise 3.0
38:44 – How to start a risk-based vulnerability management program
40:33 – Closing thoughts

Further Reading

Containers and Supply Chain Security – Episode Transcript

Jake Kouns: Today I’m joined by Dan Nurmi. Dan is the co-founder and the CTO of Anchore. His primary experience includes research and applied work in enterprise infrastructure, containers, cloud computing, operating systems and high-performance scientific computing systems. He is passionate about large-scale distributed systems – from wires to whitepapers (he’ll tell us about that).

In the old days you might also have found him participating in Capture the Flag tournaments, and you might even have spotted him at DEF CON – don’t know if he wants to talk about that. You might even find him surfing and singing. Dan, welcome to the show, my friend.

Dan Nurmi: Thank you very much, Jake, it’s great to be here. Thanks for having me.

Jake Kouns: Perfect, well today I want to spend some time talking with you about containers and supply chain security. When we first started working together in 2019, it was great to meet you and your team. It felt good that you understood the importance of vulnerabilities and having complete vulnerability data, and it has been great working with Anchore on VulnDB. I do want to start today with some level setting, if you don’t mind. My thought is I will throw some words and concepts at you, and for our viewers, you could give us some level setting/education on how you view these things. First off, I’m going to ask: what the heck is a container, and is that the same as an image?

Dan Nurmi: Yeah, that is actually a great question. These are important concepts that relate to a lot of what is going on in modern development processes. To your question, “what is a container”: in the earlier days of open source and software development, say 10–15 years ago, a lot of the focus was on taking your application source code and building it into something that can execute. That is an important step, taking source code to something that executes.

In the old days, that was basically it. We took source code, turned it into an executable, and ran it on some machine. That was kind of the end of the story. Then vendors came along that really took a look at the open source ecosystem and said, “there is a lot of source code and a lot of people need to execute this stuff”. But everybody was building it in different ways, so they started to actually package and curate large collections of software together. They put release numbers on them, and that is what became Linux distributions. That was really compelling for a long time.

Then in the 2010s or so, some really smart people started to observe a new kind of complexity – where it’s not just about OSS, but everything else that is required to execute and share a more complex environment. No longer were we running an application that was a single executable; new patterns like microservice-oriented systems, distributed systems, or complex things like Hadoop and complex databases weren’t about just a single package of OSS. It was more like an entire environment: configuration, binaries, all sorts of stuff that communicates. These folks came along and centered this around an OSS project called Docker, and said, “we can take all this and put it together into an executable environment”. This really changed the game – from download, install, and figure out how to configure and run, to just download and run, for a much more complex set of applications. That is what containers are.

They are an encapsulation of a bunch of software and configuration, ready to execute for, say, a developer in the same way it would execute in a production environment. They are portable; they are storage efficient. You get cool properties like reproducible builds and unique digests associated with them. They are super shareable – developers can share these complex applications with each other, and they’re easily composed together to make more complex applications. In a nutshell, that hopefully is a good overview of what containers are and why they are important.
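As a concrete sketch of the workflow described above: the shell script below packages a one-line app into a container image and runs it. The image name, base image, and file contents are all hypothetical, and the Docker steps are guarded so the script is a harmless no-op on machines without Docker installed.

```shell
#!/bin/sh
# Hypothetical example: encapsulate a tiny app plus its OS environment
# into a single container image with a unique content digest.
cat > Dockerfile <<'EOF'
FROM alpine:3.18
COPY app.sh /app.sh
CMD ["/bin/sh", "/app.sh"]
EOF
echo 'echo "hello from a container"' > app.sh

if command -v docker >/dev/null 2>&1; then
    docker build -t myapp:1.0 .   # build once; the image gets a unique digest
    docker run --rm myapp:1.0     # the same image runs identically on laptop or server
else
    echo "docker not installed; skipping build"
fi
```

The point of the sketch is the portability property Dan describes: everything the app needs travels inside the image, so “download, then run” replaces “download, install, configure, run”.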

Jake Kouns: That was really helpful to tell us about containers. But I’m going to ask some more questions; hopefully it doesn’t annoy you, but it will help us figure it out here. We know Docker is a company, but you hear people say Docker containers. Are there lots of different types of containers? What is Docker, and what should we know about it?

Dan Nurmi: Docker is a company, but it was also the name of the open source project that really propelled the notion of containers, and it was super important in launching the container idea and technologies. The terms are used interchangeably: some people say “Docker images” where they could absolutely say “container images”. That would mean the same thing. But when it comes to actual format standards, it’s an interesting point.

There aren’t really multiple container format standards of note. One of the most compelling reasons why I think containers took off is that there is a single way, and that is important for portability. You’ve got a laptop that can run a container, and your production environment might have an entire, really large-scale, complex way to execute container images to actually deploy applications. Both can run the same container image. That is a really important element of why it became so important and adopted.

Jake Kouns: I got a couple more for you. We hear a lot of times, Kubernetes, maybe Amazon ECS. What are people saying when they are saying these things and how do these relate to containers?

Dan Nurmi: Yeah, so those are the names of container deployment or orchestration systems. I do think it is an important distinction, and I think you referenced it in one of your first questions. People say container images; that’s pretty much where the software rests. You can think of it as an RPM or a Zip file: it’s data and an application bundled up into a file, or collection of files, that you can store in a registry or repository and download. You can copy them, put them on your hard drive, and so on.

Now, when it comes to actually running the application, containers are a little bit different from a normal application. You don’t really just run it; you run it through a container runtime system. That takes the image and makes an instance of that application or container image that is now running. That can open network ports, it has various resource constraints, and things like that. The way these container runtime systems work is you install a bunch of software on a bunch of VMs or bare-metal machines or whatever. The software most commonly used today to install that platform on top of your physical or VM resources is called Kubernetes. Now you can deploy containers and Kubernetes will orchestrate the execution of those containers. You can say, I wanna run a MySQL container over here and I wanna run Apache over there. I wanna run Hadoop too, and the orchestration system will take those incoming requests and decide where to run them optimally, get them all set up, stitch networks together and connect them to storage and all that infrastructure stuff. To your second question, terms like ECS and EKS are the Amazon Web Services names for their various container orchestration systems.
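To make “I want to run a MySQL container over here” concrete: in Kubernetes that request is usually expressed as a declarative manifest, which the orchestrator then schedules across the cluster. Below is a minimal, hypothetical Deployment manifest (written out by a shell heredoc; actually applying it requires a running cluster, so the `kubectl` step is shown only as a comment):

```shell
#!/bin/sh
# Hypothetical manifest asking Kubernetes to keep three MySQL replicas running.
cat > mysql-deploy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        ports:
        - containerPort: 3306
EOF
# kubectl apply -f mysql-deploy.yaml   # hands the request to the orchestrator
echo "manifest written"
```

The manifest states the desired end state (three replicas of this image); deciding where to run them, wiring networks, and restarting failed instances is the orchestrator’s job.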

Jake Kouns: In past shows, we talked a little about the CI/CD process, but we always have people asking for more education on this. When you think about the CI/CD process, how do containers come into play with this?

Dan Nurmi: Yeah! That’s a really interesting question and something I’m happy to talk about. That is something that is fundamental to where Anchore came from at inception. A CI/CD process – the acronym stands for continuous integration and continuous deployment. Essentially, these are pieces of software that help you take source code, and every time there is a new change – a developer commits a bug fix, a new feature, a security update, whatever – the system will wake up, see that there has been a change, and go through a procedure. That procedure is meant to take that change, build it, and perform automatic tests.

That’s where continuous integration comes in. Whenever there is a change, it kicks off and makes an executable application. The continuous deployment side of these systems will take that executable and potentially automatically deploy it into production. Organizations are putting a lot of time and effort into making this process completely automated, and if you get all this stuff right, you can have a situation where developers make bug fixes and within minutes, that bug fix is instantiated, running, and customer-facing. That is possible from a functional perspective; CI/CD is the tooling in between that makes it possible.
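The loop described above can be sketched in a few lines of shell. Everything here is hypothetical scaffolding – real CI systems such as Jenkins, GitLab CI, or GitHub Actions watch the repository and run the stages for you – but the build → test → deploy chain is the essential shape:

```shell
#!/bin/sh
# Toy sketch of a CI/CD pipeline. Each stage is a stand-in for real tooling.
build()     { echo "building commit $1"; }
run_tests() { echo "running automated tests"; }
deploy()    { echo "deploying to production"; }

# Triggered whenever a developer commits a change; any failing stage
# short-circuits the chain, so a broken build is never deployed.
on_commit() {
    build "$1" && run_tests && deploy
}

on_commit "abc1234"
```

The `&&` chaining is the whole idea in miniature: deployment happens automatically, but only when every earlier stage succeeds.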

Jake Kouns: I’m going to ask another similar question, because I want to dive a little deeper into this. I think you and I will both agree that containers are really important to evolve and have this modern DevSecOps development practice. So if you’re trying to explain to someone, like how they are using this a little deeper, or compare and contrast the old school way in how we did it versus the newer container school. Why should they care? Can you talk a little bit about that for us?

Dan Nurmi: Yeah, totally. That’s a really important point. How do containers play into this situation? CI/CD have been around for a very long time and they fundamentally do the same thing – they take source code and make it into an executable and then deploy. Turns out though, that the CI part, a lot of organizations have had that. That’s the part where we are doing continuous integration; making executables. But actually doing continuous deployment was kind of difficult. That goes back to the fundamentals of why containers came to be in the first place. 

It is really hard, in a complex environment, to take a bunch of executables that your CI is producing and deploy them automatically. It turns out it is not just about the executables. There’s so much stuff. We are not talking about just one application. You might have a team with thousands of applications and thousands of updates happening every hour, which all now have to interact and be compatible.

It puts a huge burden historically on sandbox and functional testing and trying to reconcile the fact that your production environment is not the same as the development environment. This is where containers have really moved the needle. Now that we are able to encapsulate an entire execution environment, everything the application needs is all inside this container image. We can achieve this actual notion of CI/CD even in the face of an organization that has thousands of applications that have to share the same resources. Their configurations don’t have to be compatible.

Container technology actually takes care of that by encapsulating the entire execution environment allowing you to run application “A” next to application “B” and they can have entirely different OS environments, configurations, and software dependencies. They don’t have to know about each other. That is where containers came into the CI/CD play. Instead of just building an app executable, modern shops are building container images and using that as the unit of deployment.

Jake Kouns: That makes a lot of sense and I think when people start to better understand that you can have lots of containers that you pulled from places or “layered” containers or however you want to describe it, you start to go “Wow, that is super powerful”. As a security person, you go “Wow, that’s a lot I don’t understand. I also don’t have much visibility into”. We’ll get into some of that. 

Let’s talk about shift-left. We’ve been hearing shift-left for a long time. Tell us what you think shift-left means and why that is such a big security benefit.

Dan Nurmi: First you know, the notion of left and right. Hopefully, if you heard our previous discussion so far, you can see this whole process lines up as a spectrum. Left hand side is like source code and the right side is Kubernetes or wherever applications are running. Even if you’re not talking about containers, there is a similar kind of spectrum. Left is source, right is running and CI/CD is right in the middle. It’s the thing connecting left and right. 

This notion of shift-left security is something that’s been around in the industry for a while, but it’s really just starting to come into sharp relief and focus now because of how much automation has been added. You get all the benefits that we’ve already talked about. Like you say, security teams start to take a look at what is going on, you have a lot of software environments, super rapid updates – your software surface area has just exploded. 

It is hard to keep track of which versions and builds of software are candidates for runtime, or are actually executing at runtime. A traditional approach, prior to this level of automation, was essentially runtime scanning: look at your network, watch for anomalies, see what is running, even perform security scans against static artifacts on runtime systems. That is still important, but with the amount of software coming through these automated CI/CD systems, a lot of the industry has observed it is very expensive. If we have a security incident at runtime, we know that it is expensive to remediate. We have to talk to customers; it can be a very public event. It is difficult, expensive, and complex to handle. A lot of us in the space started to ask the question, “What can we do before runtime to try to prevent as much as possible?”

Building on this momentum behind containers, we actually have more information now, before anything runs. These container images encapsulate entire environments – all the data files and supporting dependencies are bundled together before the software even executes. Imagine we have 100 things our runtime security checks to make sure the system is secure. Now imagine we can check 80 of those things right in the CI/CD pipeline. That is a huge win: a problem caught in CI/CD potentially never becomes customer-facing, and is far less time consuming, expensive, and dangerous. The idea of shift-left security is to evaluate every security control, policy, and practice that we have and see how far left into this development process we can push the checks, so we catch these things as early as possible.
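In practice, “pushing the check left” often means adding a scan stage to the CI pipeline that fails the build before anything is deployed. A minimal sketch, using Grype’s `--fail-on` severity threshold (the image name is hypothetical, and the scan is skipped gracefully when Grype isn’t installed):

```shell
#!/bin/sh
# Shift-left gate: scan the image during CI, before it can reach runtime.
IMAGE="myapp:1.0"   # hypothetical image built earlier in the pipeline

if command -v grype >/dev/null 2>&1; then
    # --fail-on makes grype exit non-zero at or above the given severity,
    # which fails this pipeline step and blocks the deploy stage.
    if grype "$IMAGE" --fail-on high; then
        echo "gate passed: no high-severity vulnerabilities"
    else
        echo "gate failed: fix vulnerabilities before deploy"
        exit 1
    fi
else
    echo "grype not installed; skipping scan"
fi
```

Because the same checks can also run at runtime, this is the “spreading it out” Dan describes later: the gate catches most issues early, while runtime scanning remains as a backstop.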

Jake Kouns: It’s funny, shift-left I believe sort of came up as a buzzword when we were talking about DevSecOps. But we have been using the wording of shift-left to talk about basic vulnerability management. You know, a lot of people still think of vulnerability scanning. We’ll say to them, “if you know there is a vulnerability in Apache 1.0 and you’re running it, why do you have to scan for this thing?” You can shift it left. I think people are starting to understand that now – compact the timeline, understand, and make those risk-based decisions quicker. I really appreciate you explaining that for us. We did a lot of definitions and talked about containers; maybe we have beaten that a little too much. Now let’s take it into application. Right now, as you know, the supply chain is HOT. We would argue it’s been hot this whole time, but now it’s getting more attention due to SolarWinds, etc. If I were to say to you, “Alright, I heard all this shift-left, heard all this container stuff – how do these new modern shift-left security practices work, and how do they defend against supply chain attacks?”

Dan Nurmi: This kind of comes down to realizing these supply chain attacks really have two important stages to them. I think this gets washed out in the security coverage, or the more public coverage, of these incidents. There is one stage where the attacker compromises the supplier – in the SolarWinds case, SolarWinds themselves. But it’s not so much an attack in itself; the malicious actor isn’t actually getting anything from SolarWinds themselves.

The second stage is the actual consumer. That is where malicious actors are getting their value. They’re going in and attacking the consumer by compromising the supplier. This idea of shift-left in our opinion, really affects the suppliers more than the consumers in this particular scenario. 

The supplier is the one actually generating a software artifact that the consumer is downloading and executing. It turns out that the consumer is doing everything they are supposed to do. They are getting the latest updates, getting security patches – exactly what they are supposed to do, but yet, they are still getting compromised.

The thing to look at is the supplier themselves. As software suppliers, we need to be putting our own shift-left security practices in place to make sure we are doing everything we can. It is not just about moving the security check, it’s about spreading it out. We do that security check multiple times in our CI/CD systems. Instead of just running applications at the end of our CI/CD system, we are delivering an application to the consumer. The same idea of shift-left when it comes to an internal application or production app applies to a software supplier. It’s just that instead of executing we are making software a consumer can download.

Jake Kouns: The other thing I think about a lot regarding supply chain attacks, we have been talking about this a lot at RBS – the systemic risk. I also think that I love the concepts of containers, but I also think about vulnerabilities. Let’s say there is one vulnerability in a container, and everyone says, “I’m going to use that” and it’s used everywhere. One vuln can lead to lots and lots of problems. It’s eye opening in that regard when thinking about systemic risk. 

Like you said, you really need to evaluate and understand the vendor and the products used. I think that is where we both agree. We are doing our ratings, you are doing your deeper analysis – I think we are going to see a lot of evolving things in this space. Another question for you! From time to time, people will see our interview series and say, “that sounds great, I love it, but wow, that seems really complicated and overwhelming.” What do you say to someone who feels that way? How do you recommend a company gets started in implementing these shift-left practices into their DevOps or whatever it may be. How do you take it from, “Wow, Dan seems super smart but I am going to go along the rest of my day”, to “I need to take this and apply it?”

Dan Nurmi: I think it’s clear that it’s incumbent on all software suppliers to say, “What can we do?” I’m a very pragmatic person when it comes to infrastructure and software development myself, so I ask these questions all the time. The good news is that containers in particular have these nice properties of being encapsulated, portable environments with very few formats. New tools, like the ones we’ve built and continue to build at Anchore, can be downloaded and tried very quickly.

Anchore itself has open source tools; one of them is called Syft, which you can download and run in a matter of seconds against a container image, and it will generate a software bill of materials (SBOM). That is a very valuable artifact that you can use for forensics. You might need to supply it to a consumer of your software; we’ve seen people upload SBOMs right alongside their deliverables in public places.

We have another tool called Grype, which is also open source and available to download very quickly. You can point it at a container image and get back a vulnerability report on your screen. To your question about how to use these tools, we work with a lot of community members and organizations that take these tools and instantly put them into their build pipelines. You can put these tools in place and start generating information right away. We of course have other open source tools, and Anchore’s main product will take that information – instead of putting the burden on your side to build and rationalize, we have products and services that consume it, with all sorts of policy and remediation features. The point is that there are tools you can bring in without modifying your infrastructure or changing your CI/CD processes; you can plug them in and start extracting information right away.
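A hypothetical first session with the two tools mentioned above might look like the following. Both invocations are standard Syft/Grype usage (`-o spdx-json` is one of several SBOM output formats Syft supports); the commands are guarded so the snippet degrades gracefully where the tools aren’t installed, and the image name is just an example.

```shell
#!/bin/sh
# Generate an SBOM with Syft, then scan the same image with Grype.
IMAGE="alpine:3.18"   # any image you build or consume

if command -v syft >/dev/null 2>&1; then
    # Write a software bill of materials for the image in SPDX JSON form.
    syft "$IMAGE" -o spdx-json > sbom.spdx.json
else
    echo "syft not installed"
fi

if command -v grype >/dev/null 2>&1; then
    # Print known vulnerabilities for the packages found in the image.
    grype "$IMAGE"
else
    echo "grype not installed"
fi
```

Dropping these two commands into an existing build pipeline is the low-friction starting point Dan describes: no infrastructure changes, just information extraction.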

Jake Kouns: Recently, there was a newly published executive order on all things cyber. What are your thoughts on it? And you brought up SBOMs – you had to do it! We’ve had all sorts of episodes on SBOMs and that sort of stuff, but any particular thoughts on the elevated emphasis on SBOMs in this order?

Dan Nurmi: This is an interesting one – it just came out last week, and when I was reading through the executive order, something struck me. I’ve been involved in companies that supply software to other enterprises or federal governments, and it’s pretty common for large organizations buying software to ask the supplier for something like, “Hey, we are going to use your software, but the contract says we need you to supply all the open source licenses that are included in your software’s dependencies.” Sometimes you’ll even see in a contract, “Hey, we need you to describe or point us at your internal security policy so we can do a review on our side.”

When I was reading this order, it was really interesting in the sense of being a formalization of that common industry practice. What I think is so interesting is that it mentions the need for the consumer – in this case the US federal government – to get practice and policy about development processes from all suppliers, but it also calls out specific information that will be required to be transmitted in the form of an SBOM. Because so many software suppliers have the US federal government as a common customer, I think this might actually move the needle towards an industry-standard, or at least industry-adopted, collection of processes – SBOMs and other information in standard forms that we might actually start to converge around. I think it’s a really exciting time.

Jake Kouns: Listening to all the “what is a container”, how does it apply, government approach, the technology perspective, I want to ask you, do you think containers apply or affect particular industries more than others? Is this just a tech software building thing, or do you see more adoption in other industries?

Dan Nurmi: The interesting thing about containers is that I think we are only at page two, or chapter two, of at least a ten-chapter story. Up until now, a lot of the tech industry has been involved and started to agree on containers, using this technology to achieve a high degree of automation and efficiency in the development process. But software is everywhere: there are companies that 10–15 years ago wouldn’t have had their own software developers, but now it is kind of rare to find any vertical that doesn’t have some sort of in-house or contracted developers writing in-house tools.

In addition, with containers, we are starting to see industries that might not be developing their own software but still have a CI/CD system for deploying software, and a great way to do that is using containers. We are starting to see containers used more and more as a delivery mechanism for software, not just in software development. I think that will only increase.

Jake Kouns: I agree. We have a lot of clients that don’t want to scan, or are worried about security tests that can impact production. You think about the medical and manufacturing industries. Being able to test a container in a safe manner and then deploy, I think we will see people start to understand the value. I appreciate it. I want to talk a little bit about Anchore here. You are a founder of the company, and it’s not the first company you’ve started, so that makes you a special kind of person who wants to go on another ride like that again. You have some open source roots, you mentioned these free offerings – tell me how it started and a little bit about the journey.

Dan Nurmi: I’m happy to. Around 2015 or so, myself and Saïd Ziouani, the other co-founder of Anchore – we had worked together in the past and came from large-scale open source infrastructure careers and interests – got together and looked at what was going on in the industry. No question, containers were on the upswing when it came to momentum.

In my open source experience, any good technology that is experiencing this kind of momentum is something you need to start thinking about carefully. As soon as a community builds up and developers get interested, you need to see whether there are going to be ramifications and impact on the global software infrastructure story. When we took a look at where container technology was at the time, we assumed and predicted that big companies would be able to handle the storage, the network – the orchestration kinds of ideas.

Google, we knew, was already doing a lot with containers and eventually came out with Kubernetes. But for us it was more about imagining: if containers were everywhere, what are the real challenges at that point? What looks different, other than just the infrastructure? It was this stuff that we have been talking about – everything rapid and automated, no humans, developers having ideas and those ideas manifesting in production. That vision of what the world was going to look like was very different from where we were in 2014–2015 without containers.

One of those big impediments we predicted in getting there for enterprises and regulated industries and governments was absolutely the security and compliance piece. So much software is being generated in their own unique combinations and ways, that is where we thought there were going to be significant challenges. New technologies, tools, and mechanisms were needed to attempt to meet that challenge. That was the start of the idea for Anchore technology. Back then we thought, we need to build a really serious data analysis system. 

We’re talking about a huge data problem, with lots of variance. There are a lot of things you need to do: security and compliance checks, best-practices enforcement. What does that look like? That’s where we started and that’s what we are still building today. We are building a system that can take all that information and churn through it; you specify your security and compliance policies in a simple way and then apply those policies to the dramatically larger set of constantly churning software being pumped through your CI/CD systems. In a nutshell, that is where Anchore came from, and that is what we are still helping our customers and users solve.

Jake Kouns: It’s great you have free offerings and Anchore Enterprise. I remember when we first started working together and looking at the vulnerability stuff, a lot of times we would say, “Look, we are collecting more vulns than anyone else” and people may not have wanted to believe it. But I think it was September 2019 when you took Anchore Enterprise and looked at six images, and the difference between what Anchore would find versus what VulnDB would find was stunning. Since 2019, the world has only continued to morph and change, and it has gotten worse with the huge amounts of data and with people trying to figure out what to fix, when, and how to prioritize. It’s a challenge. I’ve seen you guys do amazing things and I love it. You just released a new version about a month ago, and if I remember, one of the biggest things I liked about it was some of the remediation tracking – I think you call it actions – that allows you to say, “Great, there are a lot of vulns” and then figure out what to do. Any thoughts on that new version, or maybe on what you have coming next that you’re excited about?

Dan Nurmi: Thanks for bringing that up; we are really excited about version 3.0. The core of what Anchore Enterprise and our open source tools do is take data on one side, then scan applications and do various security, compliance, and best-practices enforcement. We have a good platform for that; we have been doing it for years. 3.0 is what we wanted to do as the next step.

We have been working with very large enterprise customers, and the federal government uses Anchore as well for many of their container efforts. Working with customers and observing, yes, the data is important, those scans are important – surfacing vulnerabilities, leaked credentials, and malware – and the reports being generated are very important. But we started to observe common patterns in what the security, DevOps, or developer person wants to do next with that information. We leverage the fact that with containers there is uniformity in how they are built and assembled, so we can take some of those security findings and not only report on them, but say, “Hey, here is the recommended next step on how to fix this”.

We can also help track and figure out where in the pipeline that next step should happen. That is a part of 3.0 in our Enterprise product and we are actually going a lot further in our future products. We are starting to take a really close look at every element of the CI/CD system, including a little bit of runtime and source code and seeing what our platform can inspect in terms of gathering data, providing an even richer remediation scenario and assessment tool. Not only can it be used in surfacing information but we are starting to break into things like forensics and doing a deep impact assessment – those kind of important security operations.

Jake Kouns: Sounds exciting. I know you guys also published a new whitepaper, we’ll throw a link into the show notes, it’s called Prevent Software Supply Chain Attacks for Cloud-Native Apps. I hope people learn some new stuff. I have one last question, it kind of pulls it all together, maybe some things we already covered, but maybe if you can concisely give some people advice. 

Anyone out there that is already struggling with too much work, too many vulnerabilities – and now we are saying there are containers to worry about and all this other stuff. What advice do you give someone in a security role at their organization to start with that risk-based vulnerability management program and make sure they are not missing out on this new technology?

Dan Nurmi: That is an important idea. I think about the magnitude of the problem: if you are a security person or engineer and you are starting to see containers and your developers doing stuff, it looks kind of nerve-wracking. There is so much data. But the realization is that, yes, there is a lot of software variance, but the way it is being packaged is something tools like Anchore and others can handle. There are good solutions, and they are not hard to put in place. You are immediately able to get information in a way you are familiar with. It is easier than you might think. There is good technology out there – it’s what we spend all day building here at Anchore. Take it, hook it up in your CI/CD, and you start getting vulnerability reports. You don’t even have to worry about the variance. The biggest thing I recommend: yes, this is a problem, but there is technology that can help address it.

Jake Kouns: Dan, thank you so much for your time today. This was extremely insightful.

Dan Nurmi: Thank you very much Jake. It was great to be here and talk to you.
