Sign up for our podcast on iTunes or subscribe to our feed with your favorite player.
In this podcast about IT security, hear Puppet founder Luke Kanies talk about how he built security into Puppet from the beginning; how his thinking on security has evolved; how locking down systems too tightly can actually make your IT infrastructure less secure; the security challenges of containers and the Internet of Things; and more.
Puppet engineer Chris Barker brings his experience to the table, talking about how customers actually do security, and why the data security model you use for machines isn't appropriate for humans.
It’s a great listen as Luke and Chris go back and forth on security and Puppet, and the evolution of Puppet's handling of security. Go on, enjoy it now! (You can also read the transcript below.)
Beth Cornils is a senior product manager at Puppet.
Transcript
Beth Cornils, Puppet senior product manager: Today I'm super excited for our security podcast. I have the honor of having Luke Kanies and Chris Barker here. So thank you both very much for taking the time to talk about security and Puppet. Let's kick it off with you, Luke: tell us a little bit about yourself, a little ditty.
Luke Kanies, Puppet founder: I'm the founder and CEO of Puppet, and I started Puppet 11 and a half years ago. Prior to that, I was a sysadmin by trade.
Chris Barker: I've been at Puppet for a little over four years now. Before that, I was kind of an IT consultant for hire, doing everything from small businesses to Fortune 500 companies. So I spent a lot of time either doing small business therapy or technical work.
Beth: All right. Excellent. The reason why I have both of you here is because this is a Thunderdome situation. You didn't know that when you walked into the room, and so good luck. No, it's really not. The idea is...
Luke Kanies: Well, there's three of us here, so that would be complicated.
Beth: It would be complicated...
Luke Kanies: Wait, is that...
Beth: ...but I'm on the other side of the desk.
Luke Kanies: Does that make you Tina Turner?
Beth: It does. It does. [Laughs]
Luke Kanies: Since I get to be Master in this situation, then I get to ride on Chris' shoulders.
Beth: The idea is just to talk about, Luke, from your perspective, why you built in security and how you built it in, and then, Chris, sort of the realistic side of what you see when you're going in and talking to customers. That's why I really have you both here. There's no fighting. I mean, you can fight, but that's up to you.
We're going to kick it off with you, Luke. When you first started Puppet, there wasn't the same level of concern as there is today from a security perspective. It was a known thing, but people kind of pushed it off to the sides. What did you do to include security in Puppet?
Luke Kanies: There's two ways to think about the security question, and one is "How secure is Puppet as a system? How secure is the platform itself?" And the second one is "To what extent can Puppet be used to make your infrastructure overall more secure?"
On the first side, everybody in security now knows that there's usually a tradeoff between how big of a pain you make it for the user and how secure you end up. You can make your users change passwords every week and have them be 400 characters long and all special characters, but you're going to end up with a less secure system, not a more secure system, if you do that.
So one of my early priorities in Puppet was "How do we end up with a hopefully highly secure system while asking that user to make the smallest tradeoff possible?" One example is, when I started, I wanted to use a security system that everybody already knew, so people pretty well understood X.509 certificates.
In particular, I was moving from CFEngine, where we only had symmetric pairs: it used SSH-style keys for both authentication and encryption. The problem with the SSH model is it's all mutual trust, so if you want to have somebody talk to somebody else, they have to have preshared their keys, and that key presharing was a huge pain in the CFEngine world, and that pain meant people did weird things to avoid it.
One of my goals was "How do I avoid that pain?" Well, X.509 fixes that, because you only have to trust one certificate authority, and from there, everybody else is fine.
There are still some other rules you have to figure out, because, in the world of SSH, presence of a key is kind of equivalent to "I trust them to talk to me," and in this world, you have to build other authorization layers on top of it.
But once I figured out, "Okay, this is my basic security. My authentication model is going to be these certificates," now I have to make the certificates themselves incredibly easy. At the time, we only had one CA tool that pretty much the whole world used, which was OpenSSL's CA.pl.
And even, I think, today, if you want to go use that, it requires you to fill out all this information that, in general, doesn't matter at all to anybody, and it doesn't make any sense. So I said, "Why don't I just build a CA that doesn't ask any questions at all?"
That was one of my goals: get to the point where you can go through the process of building an entirely secure, entirely trustable system without asking the user a single question about security. In fact, most people, unless they have a problem with their certificates, shouldn't even be exposed to the fact that they are using certificates or that there is a CA in there.
You end up learning it almost by osmosis, because you have to go through this "Hey, a machine would like to get into the network. Are you willing to sign its cert?" You learn that from there. That was a huge part of "How do I reduce the tradeoff? How do I make it so you're going to end up with a secure system, but don't have to make this big sacrifice to get there?" which again drives people to make insecure decisions, if you ask them to make a big sacrifice.
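To make the "CA that asks no questions" idea concrete, here is a minimal, purely illustrative sketch in Python using the `cryptography` library. It is not Puppet's implementation, and the names and lifetimes (like "example-ca" and the 365-day agent certificate) are made up for the example; it just shows the flow Luke describes: create a CA key and self-signed certificate once, then sign agent CSRs with the certname as the only meaningful input.

```python
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# One-time CA setup: a key and a self-signed certificate, with no prompts at all.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example-ca")])
now = datetime.datetime.utcnow()
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=5 * 365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

def sign_agent_csr(csr_pem: bytes) -> bytes:
    """Sign an agent's CSR; the certname in the CSR is the only input that matters."""
    csr = x509.load_pem_x509_csr(csr_pem)
    cert = (
        x509.CertificateBuilder()
        .subject_name(csr.subject)          # e.g. CN=web01.example.com
        .issuer_name(ca_cert.subject)
        .public_key(csr.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(ca_key, hashes.SHA256())
    )
    return cert.public_bytes(serialization.Encoding.PEM)
```

The only human decision left is the one Luke mentions next: whether to sign the request from a machine asking to join.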
The second thing was that my approach with Puppet was heavily influenced by tools like Titan, and there was another one.
I think it was Titan and SATAN were the two tools. These were tools that were, I think, originally developed for the Solaris world. I know Titan was a Solaris tool. It was essentially a huge collection of shell scripts that would entirely lock your machine down. I remember, in 2001 or so, I had a Solaris box that Nmap would report as "This might be a Linux machine," because I had done so much to the core configuration to make it more secure.
This idea of building on shared security practices, essentially removing as much as possible from the system and then locking down whatever was left as much as possible, was a really good idea.
One of the things I'd say I tried to do, but I don't think I did as well as I did the first one, was design in the ability to use Puppet to secure your system really well. I think the platform does that really well. What we haven't built is that shared practice.
It's not like you can download the "Hey, if you're running this version, here are the 10,000 or 20,000 checks to run or switches to toggle to make your system significantly more secure."
Luke Kanies: Those are the two areas that I thought a lot about in the buildup. Some of that we still have a lot more work to do on, and some of it, I think we've done pretty well, actually.
Beth: Okay. Excellent. Chris, can you talk a little bit about some of the things that you've seen in the field regarding security? I know, just based on some of the stuff that Luke just mentioned, I think it's interesting that we've already kind of thought about baking things in.
I think that helps with some of the disconnect between DevOps and security. But what are some of the things that you're seeing in real life?
Chris Barker, Puppet principal solutions engineer: The biggest areas I've seen are either kind of people using tools such as Puppet, I'd say, as a post-audit failure remediation task, where they've been given a list of "These are all the things I have to change to get back into a compliant state to pass that" and then kind of leaving it in that state, going forward, and Puppet just becomes a monitoring tool in some fashion.
Usually, it's kind of the first step into using Puppet. The other area I've seen, though, is, going back to Luke's point: if you make it harder for users to be secure, they're going to just be less secure. They're going to take the path of least resistance. The password rotation process pretty much means that everyone just has a password with a number in it that's incremented, and that password is then on a Post-it note underneath your keyboard, and that's pretty much how everybody does it.
So if you want to root a system, you just find the EA's keyboard and flip it upside down, and then, because that person is the EA, they have the president's email account and all those other wonderful things.
The thing is making it easier to do security. Even as DevOps started taking off, security was still pretty much the last thing people ever looked at, because it was so hard to do.
Trying to build a process and pipeline and tooling to make it easier for people to do security has always been something I've been watching, really without much success. I've seen a lot of larger vendors come out there with this idea of "Well, just let your developers go off and build a box, and then what we'll do is, we'll wrap it in a hard candy shell of..." It used to be "We'll put it behind a firewall," and now it's...
Luke Kanies: Firewalls are magic.
Chris Barker: Yeah. Now, with software-defined networking, we'll put the network driver on the machine, but we'll still just encapsulate all the traffic inside the box.
Luke Kanies: There was that period of time, too, where everyone got their own firewall, and it was like, "Oh, a firewall isn't the answer. Many, many firewalls is the answer."
Chris Barker: Well, that's pretty much what SDN is now: "Every box gets its own firewall that talks to all the other boxes." But it's still this situation where we don't actually know what the hell people are doing with the box or what the system state is or what the configuration is. We just know that these two boxes have to talk to each other and no one else, and we're going to try really hard to do that.
But that doesn't mean that they're actually going to monitor the traffic going into the box, which means, "Great. That web server can only talk to the database server." Someone just owned that web server, and you're still going to let it talk to that database server and retrieve all of the credit card numbers.
Chris Barker: Oh, yeah. It's totally okay.
Chris Barker: So I think that we've seen a lot more people moving towards having tools like Puppet deploying the workload, so they know what it's doing, and then using Puppet as part of the security implementation of it.
There's still some struggles with it, but that still is providing a lot more transparency than before. In some ways, the easiest part of adoption has been that the security team and the other people working together at least have a language to talk about all these things, whereas before, it was: the security guy had a Perl script that he ran on the box to audit it, and then...
Luke Kanies: If you were lucky.
Luke Kanies: It was often a Word doc.
Chris Barker: Yeah. Well, no. He had a Perl script he ran and then took the output and put that in a Word doc and then sent it to your manager, which you then got two weeks later and then you had to go fix by hand.
And then a month later he'd come back and run the Perl script again.
Luke Kanies: Where I was, literally, the security guy had a Word doc, and he went around typing the commands from the Word doc into a shell and then would take the output and turn that into a report.
Chris Barker: That's actually stuff I've put into demos for people: taking the command from the Word doc and putting it in a fact to then show that...
Luke Kanies: It's like magic.
Luke Kanies: Yeah. You brought up an area that I didn't bring up when I was talking about the design of Puppet, and that's the principle of least privilege, which is, essentially, that you want to give people as little privilege as possible.
Any machine in your network should have the absolute least amount of privilege. One of the big controversies, if that's the right way to put it, one of the disagreements about how CM tools should be built is: should you distribute all of your code to the edge nodes and let them do all of the work locally, or should you do what Puppet recommends, which is to have all your code stored centrally and then do all the work there? Many of our users don't do this, but it's what we recommend.
And one of the major benefits of putting the code centrally is that you're able to give much less privilege to the edge nodes. And your edge nodes, and by edge in this case I mean your web server, your database server, your mail server, all those things, are unquestionably the least secure systems in your network, because they're the systems that your customers are actually directly talking to. That means they are exposed to the outside world, which means, even if your firewall is working well, those machines are available to the Internet.
So if you're able to give them less privilege, that means, if somebody roots that box, they can't read all the code that exists. They can't read the code for how any other machine in the entire network is configured. But even better, think about how many systems your configuration management system has to have access to in order to correctly compile a complete configuration: everybody who has access to that code needs access to all those systems.
Now, in the centralized approach, you do either a single or a small pool of integrations: connections between your Puppet masters and all of these core systems, whether it's your auth systems or things like that. That's, I think, the most secure model.
In the distributed model, instead, every edge machine needs to have direct access to any data source it needs, not to apply its configuration, but to compile its configuration, just to figure out what the valid configuration is. So now you've got, essentially, end-to-end connections: every core service is accessed by every machine in your network, which means every machine has, frankly, a good chunk of privilege.
So anything you can do to avoid that is inherently dramatically more secure. Reducing the ability for one machine to read another machine's configuration, and then also making it so that the machines that have access to core services are isolated from the machines that the users have access to, gives you two additional walls of security, just in the architecture, which I think makes a pretty big difference.
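As an aside, here is a toy Python sketch, not Puppet's actual implementation, of the authorization rule that falls out of the centralized model Luke describes: compilation happens on the server, and an agent, identified by the certname on its verified client certificate, may only ever fetch the catalog compiled for itself. The function names and the `compile_catalog` callback are hypothetical.

```python
# Hypothetical server-side check: an agent may only request its own catalog.
# "certname" would come from the verified client certificate on the TLS connection.

def authorize_catalog_request(certname: str, requested_node: str) -> bool:
    """Allow a node to fetch only the catalog compiled for itself."""
    return certname == requested_node

def handle_catalog_request(certname: str, requested_node: str, compile_catalog):
    if not authorize_catalog_request(certname, requested_node):
        raise PermissionError(
            f"{certname} is not allowed to read the catalog for {requested_node}"
        )
    # Compilation happens centrally; the agent never sees the source code,
    # data lookups, or credentials used to build other nodes' catalogs.
    return compile_catalog(requested_node)
```

The point of the sketch is the asymmetry: the server holds all the code and data-source connections, while each edge node can only ask one narrow question about itself.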
Chris Barker: Yeah, and also, actually, in a situation like that, even when you're distributing to the edge nodes, those services do end up needing some sort of RBAC service or some sort of service broker.
But that becomes the idea of well, we're going to move the work out there, but now we have to have a governance system, and now all of a sudden the governance system has to extend out to every system as well. And the problem with that is, if the governance system has been extended out to every system, that is now the biggest, juiciest target in your infrastructure. That is literally the governance system that says, "Okay. This node can only talk to this other node," as long as this governance system says so. Well, that's your prime target now.
Because node A has to talk to it to figure out who it can talk to, well, then if you compromise node A, you now have a launch point and attack surface into the governance system. In some cases, there's the argument of "Where do I want to put my CPUs to work?" for the centralized versus decentralized model, but then the funny thing is the decentralized model still has to have a central authority configuration.
Luke Kanies: Right. And one of the things we've worked hard at over the course of the last decade or so is trusting the client machines even less.
Part of the core architecture of Puppet is: hey, when you send facts up, you're just sending data. You're not sending code. So we're not going to let an edge machine produce code that runs on an individual machine.
In fact, we're not even going to let the central server, generally, write code that runs locally. As part of your configuration package, when you get your catalog from the server, there's very little in there that we trust to run, and if there's an exec in there, we work pretty hard to make sure that's relatively secure, relatively straightforward. You're not going to hide information in an exec. Not to say you can't, but we work hard to make that complicated.
Well, over the years, we've tried to have even less trust of those facts on the server side. We want to make sure that you can't inject code into the server by sending data up from the client. Not to say that it's impossible; again, there's always some edge in there somewhere, but it's an area where I think we've invested a good chunk, and I think we're closer than we've ever been on that front.
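A hypothetical sketch of the "facts are data, not code" idea: before fact values from an agent are used anywhere on the server, they are validated against a conservative schema so a malicious value can't end up somewhere executable. Puppet's real handling is more involved; the rules and limits below (the name pattern, the length cap) are invented for illustration.

```python
import re

# Hypothetical, conservative rules: fact names are simple identifiers and
# values are plain strings/numbers/bools with a bounded length.
FACT_NAME_RE = re.compile(r"^[a-z0-9_]{1,64}$")
MAX_VALUE_LEN = 4096

def sanitize_facts(raw_facts: dict) -> dict:
    """Keep only facts that look like inert data; drop anything suspicious."""
    clean = {}
    for name, value in raw_facts.items():
        if not isinstance(name, str) or not FACT_NAME_RE.match(name):
            continue  # reject odd fact names outright
        if isinstance(value, (bool, int, float)):
            clean[name] = value
        elif isinstance(value, str) and len(value) <= MAX_VALUE_LEN:
            clean[name] = value  # stored and compared as data, never evaluated
        # nested structures, huge blobs, or other types are dropped here
    return clean

# The key property: nothing here ever executes a fact value, and downstream
# code treats facts as opaque data, not as templates or code.
```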
Beth: I'm relatively new in the security space, and as the product manager, I'm learning a whole bunch about different security products. One of the things that it seems like is that security is often bolted on. That's one of the things that I really like and appreciate about PE: you baked it in. I mean, we've talked about some of the benefits. Are there other benefits?
But then, what are some of the things that you might like to see change, or not that you would like to see change, but where you see the entire industry changing and how Puppet is following along, or where Puppet may already be? What would be some of those things in your experience, first you, Luke, and then you, Chris?
Luke Kanies: There's probably three or four areas I can think of off the top of my head where, I think, if you look up in two or three years, we'll be doing things differently, or I hope we'll be doing things differently.
One is we tend to encrypt in transit, but not at the destination, so the connections between two machines are entirely encrypted, but once you're on a machine, things tend not to be stored in an encrypted way. I think we can do better there. Because we've got these public and private keys everywhere, any package that a client sends to the server, they should be able to encrypt that package so that it's not just that you can't listen in on the connection, but that, actually, if you were to go on that client machine and find that file, it would actually still be encrypted there.
Now, you're still not perfect, because that machine has to have the ability to decrypt the file; otherwise, there's no point. But it does make it a little more... it's a bit of security by obscurity, because, if you know where the private key is and you know how public key cryptography works, then you can decrypt it. But it is actually more secure, and you don't get things like leakage from somebody accidentally catting a file to the screen or copying the wrong files around. Another aspect of encrypt-by-default is that, internally, there are a number of, essentially, secrets, and the team's working pretty hard on "How do we encrypt those secrets?"
Right now we try to hash them when we print them, but they aren't actually encrypted in the back end. How do we encrypt, essentially, in the short term, key secrets, and in the long term, quite possibly, any value?
Luke Kanies: Then you get to the point where, yes, the edges can all decrypt, but you essentially have to have those key pairs in place. We've already got the CAs. We've already got both key pairs on both ends. Everybody already has access to those keys.
The infrastructure's there. Now you've just got to build the code paths to make it all work. So that's another area where I think we can do better, and we've already been investing in better security. Once you've got that platform in place, you can essentially tell the system, "Here's a thing that I'm going to hand you in one part of the system, and I don't want anybody anywhere throughout the rest of the process to get access, other than this one individual machine." There's a ton of power in that. Going back to the credit cards, it means somebody having access to the database doesn't help them, right?
You can get a full copy of the database, and it's still entirely encrypted, because the only machine in the entire world that can decrypt that field is the individual node that that data was destined for, because it has the private key. So that makes a pretty big difference.
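A rough sketch of the "only the destination node can decrypt it" idea, again in Python with the `cryptography` library: the server encrypts a secret with the node's public key, so a stolen copy of the stored data yields only ciphertext, and only the holder of that node's private key can recover the value. This is a minimal illustration, not Puppet's mechanism; a real implementation would typically use hybrid encryption for payloads larger than an RSA block.

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

OAEP = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

def encrypt_for_node(node_public_key_pem: bytes, secret: bytes) -> bytes:
    """Encrypt a small secret so that only the target node can read it."""
    public_key = serialization.load_pem_public_key(node_public_key_pem)
    return public_key.encrypt(secret, OAEP)

def decrypt_on_node(node_private_key_pem: bytes, ciphertext: bytes) -> bytes:
    """Runs only on the destination node, which holds the matching private key."""
    private_key = serialization.load_pem_private_key(node_private_key_pem, password=None)
    return private_key.decrypt(ciphertext, OAEP)
```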
I think the other areas that are kind of foundationally complicated in a way that's only ever going to get better, but never going to become good, are essentially better trust models and authorization models.
When you talk about what has access to what, who has access to what, we've worked hard to build RBAC into the core architecture, but that's an area where we had some aspects of RBAC early on, but we didn't have anywhere near a sufficient system for what our customers need today. We've done a ton of investment over the years. The problem becomes there are so many capabilities and data types that you care about. If you try to matrix that against all the kinds of people and systems that could care about those, it's just foundationally complex, and so that's an area where you need to keep investing.
Again, the challenge is always: how do I keep this simple enough that, when I go visit a customer, they haven't just given up and either made it so everyone logs in with the same account, or put everyone in one big group where they all have admin access? And that's an area where I think we've got a lot of work to do to really improve, and especially to push the security models down into the data models and down into the access models. That, I think, is what makes it really hard.
Beth: I think what I'm hearing is that my roadmap is approved by the CEO, so that's awesome.
Luke Kanies: Well, doing something is approved.
Luke Kanies: I don't know about your specific roadmap.
Beth: My roadmap does include some of those things. No, but having worked in a SaaS environment where we had the second highest number of Social Security numbers and PII next to the Social Security Administration, I totally understand the importance of keeping that information secure.
Chris Barker: Yeah. I think the last point you touched on is an area that I'd really like to see us improve a lot more on, which is: we have access... I mean, really, we have the access controls down, but now it's the actual data model. There are some areas deep in Puppet internals where, once you get there, you just see everything. PuppetDB is still kind of at the point where, if I can talk to it and get facts, I can pull out the catalog. I can pull out the status.
And all of a sudden you realize there's a lot of data in there that you don't initially think of as being sensitive, until you realize that sensitive data just ends up in there because of the act of "This is a great tool. We're using it to automate our infrastructure. Let's just throw stuff in there, throw stuff in there, throw stuff in there. Oh, wait. We just put our backup software in there with the credentials we use, the shared password for all of our backups."
And then, all of a sudden, that means, going back to the "Well, I got the shared password for the backups from a web node. Now I'm going to do a restore of the database server and walk away"... I think those types of isolations are the things where, from a security standpoint, the thing I always look at is the lateral movement that people can do inside an organization, where people just don't realize that these privileged access paths exist.
I remember, at a previous company, I was doing kind of an audit of their Active Directory trust rules and found out that there was a front desk receptionist who had full domain access privileges for the entire AD environment, because they were trying to troubleshoot a printer problem.
And they ended up just granting that person... they just kind of kept going and put them in one group and another group, and then all of a sudden that spiraled outward: that group technically had domain admin access, because another person, troubleshooting another issue, had added that group to it, not realizing the impact of that change.
Chris Barker: So this person literally could have done anything they wanted in the organization. Luckily, they were just trying to print. Going back to also...
Luke Kanies: [Laughs] It's pretty insecure, actually.
Chris Barker: Printing is always a terrible thing.
Luke Kanies: When your printing description format is Turing complete, you've got some real problems.
Chris Barker: Yeah. But, no, I think that's really the area for us, in the product: finding better ways to isolate that data, and finding other systems, because so much of the design of, for example, PuppetDB was optimized for fast catalog compilation, distribution of facts, storage of facts, and being able to be that central data warehouse specifically for Puppet masters to do their job very quickly.
Then that turned into "We can use that for other information," and then realizing that there may be other data model services that would be more optimized for the human-facing integration into that. But all of a sudden, that means that, just because machines have to have access to this data doesn't mean that's the same model that's appropriate for humans, too.
Luke Kanies: Yeah. I think one of the things we have a unique ability to do better at is this: so much of security can be dramatically improved by taking things that the world already knows and making it so the people who are using your tools also know those things as a result.
A great example is, it's not that hard to get a list of vulnerable software. It's not that hard to matrix that list against the software that's installed in your system, but it turns out there aren't a lot of tools out there that do that really well...
Luke Kanies: effectively and efficiently. And then it's not that hard to connect that with remediation software that says, "Hey, make sure that's not the case anymore."
Luke Kanies: But closing that complete loop is not something that other people have done well before.
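A deliberately simplified sketch of the loop Luke is describing: take a vulnerability feed, matrix it against what's actually installed, and turn the findings into remediation actions. The feed format, package inventory, and version numbers here are hypothetical stand-ins for whatever data sources an organization actually uses, and the version parsing is intentionally naive.

```python
# Hypothetical data shapes:
#   vuln_feed: {package_name: first_fixed_version}
#   installed: {package_name: installed_version}  (gathered per node, e.g. as facts)

def parse_version(v: str) -> tuple:
    """Very naive version parser: '1.0.2' -> (1, 0, 2). Fine for a sketch only."""
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def find_vulnerable(installed: dict, vuln_feed: dict) -> list:
    """Return (package, installed_version, fixed_version) for anything out of date."""
    findings = []
    for pkg, fixed_in in vuln_feed.items():
        if pkg in installed and parse_version(installed[pkg]) < parse_version(fixed_in):
            findings.append((pkg, installed[pkg], fixed_in))
    return findings

def remediation_plan(findings: list) -> list:
    """Turn findings into declarative 'ensure this version' tasks for the CM tool."""
    return [{"package": pkg, "ensure": fixed} for pkg, _current, fixed in findings]

# Example with made-up data:
installed = {"openssl": "1.0.1", "bash": "4.3.30"}
vuln_feed = {"openssl": "1.0.2", "nginx": "1.9.5"}
print(remediation_plan(find_vulnerable(installed, vuln_feed)))
# -> [{'package': 'openssl', 'ensure': '1.0.2'}]
```

Closing the loop then means feeding that plan into the automation that applies and verifies the change, which is the part Luke says few tools do end to end.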
That's something that I think we're somewhat uniquely set up for... I don't know that you can be "somewhat unique," but we are well set up for it, and very few others are. Again, any case where you can add security into the system with the user not having to answer any questions, not noticing it's there until something would have leaked, but didn't, and they went, "Huh, I didn't realize that was encrypted"... any time you can do that, you've gotten a huge security win.
So that point about encrypting the data in PuppetDB, not just encrypting the connection, but encrypting the data there, encrypting the data on the client: there are all these places where we shouldn't even ask. It should just be like, "Duh. Of course it's encrypted. Of course we're going to do that. Of course this is how it's going to work." You should prenegotiate it. The client and server should just do all that for you. Of course, the problem with the negotiation is, as we've learned with SSL recently, if you can tell the server, "Oh, I'm sorry. I can't support that new fancy model," then it will just give you the old, less secure model, so you need to figure that out a little bit, too.
But I think there's a ton of opportunity to go through the system. This is where you cross over from Puppet's architectural security to the security of the applications and things that your customers install. If you can tell your customers, "Look, all these machines you have: here are the insecure systems they have. Here are the insecure package versions or system versions they have. Here are the remediation tasks to take." Because, again, the feeds all exist, but most customers don't directly connect those feeds into production.
Then, of course, the second step is, you need to make it ridiculously simple for your customers to test and employ any potential changes, because, ideally, you do a security fix, it doesn't change production behavior. But that's the ideal world, and we're talking about the real world, unfortunately, so you need to make that test cycle incredibly efficient and incredibly cheap.
Once you can connect it all together in a way where the customer just kind of... you put a light up that says, "Here are all the things I remediated last night, and I confirmed that they didn't have any behavior or performance problems," or "Hey, here are the things that I'm going to do, as long as you're okay with it. I'll do them tonight on the schedule that you've already set." It's essentially all to the point where the user, at worst, has to... you know how, once in a while, you look at Chrome, and there's the yellow hamburger menu instead of the green one, and you have to say, "Yes, I'd like to restart Chrome right now"? That should be the worst that your customer ever has to see in terms of doing all the security remediations.
That requires the whole system to be connected from end to end with backend data sources that tell you about the reality of vulnerabilities today. Again, most changes that happen on systems today are changes that were well-intentioned, even if they were probably stupid. Being able to differentiate really clearly: "Hey, there were 10 billion changes yesterday to your 50,000 production servers. There were 120 that we weren't expecting." [Laughs]
Luke Kanies: Pulling that 120 out of the thousands and thousands of intentional changes is fantastically complicated, but with the right automation in place, with the right systems in place, it becomes dramatically simpler.
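A small sketch of that "pull the 120 unexpected changes out of the billions of expected ones" idea: if the automation records what it intended to change, anything observed that is not in that set is what a human needs to look at. The record format here is made up for the example.

```python
# Hypothetical change records: (node, resource, property) tuples.
expected_changes = {
    ("web01", "Package[openssl]", "ensure"),
    ("web02", "Package[openssl]", "ensure"),
    ("db01", "Service[postgresql]", "ensure"),
}

observed_changes = {
    ("web01", "Package[openssl]", "ensure"),
    ("web02", "Package[openssl]", "ensure"),
    ("db01", "Service[postgresql]", "ensure"),
    ("web02", "User[backdoor]", "ensure"),   # nobody asked for this one
}

# Everything observed that the automation did not intend is the interesting set.
unexpected = observed_changes - expected_changes
for node, resource, prop in sorted(unexpected):
    print(f"UNEXPECTED: {resource}/{prop} changed on {node}")
```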
Chris Barker: Yeah. I think, when you were talking about the remediation stage, being able to make that change with confidence speaks to the fact that almost all of the large credit card breaches that we've had in large corporations have never been, "Hey, I just found this zero day and I've just trolled through the infrastructure in like 30 seconds and punched through all the firewalls." I mean, unless you're a government agency that put them there in the first place.
But, I mean, you could see that most of the stuff was that they were there for months, and they were going through months-old, years-old vulnerabilities, just kind of going through a playlist and moving through the infrastructure. People were even seeing it, but they didn't have any execution ability to make those changes.
I think the biggest thing, when I talk to people about what Puppet as a tool brings to them, is the ability to make changes quickly and make them with confidence and know what you're actually doing, whether that means fixing a production issue, removing an admin who has been let go, or literally just rolling out a series of patches and being able to say, "We can fix OpenSSL today for these problems and know that our application is still going to work tomorrow." At the end of the day, you want to make it easier to get those changes out there to the people, so they can roll out those changes, because there's always going to be security problems. There's always going to be remediation. There's always going to be changes that have to happen.
Pushing that forward is really... it's the question I've always gotten, which is, "How do you roll back?" And it's like, well, you can never really... I mean, you can get really existential about this: you can never actually roll back. The best thing you can do is roll forward as quickly as possible to the next known good state. The faster we can shorten those cycles for people to get to that point, the more it will drive a lot of this adoption.
Beth: All right. I could probably ask dozens and dozens more questions to keep you all going, but we're at probably about the 30-minute mark, so I just wanted to leave it there. I always like to close the podcast with: Luke, do you have anything else, any questions that I didn't ask, or anything that you'd like to talk about that I didn't cover already?
Luke Kanies: I'm probably most interested in the collection and distribution of preexisting security solutions, so essentially going back to the role of Titan, where some expert somewhere took the time to collect "Hey, here's a way to secure the heck out of your systems" and nobody else had to learn that expertise as a result. I feel like we're in this weird world where we're foolishly expecting nearly every organization to be good at security.
I'd love to rely more successfully on the ways in which Puppet is good at allowing people to share expertise. "Hey, if you're really good at installing and configuring this service, then I don't need to be good at it. I can rely on the module you built." I think that's probably one of the biggest unsolved opportunities in the Puppet world: getting your world as good as Titan was in the year 2000 on Solaris, but not just on Solaris, on every platform you use. Because why wouldn't you, right?
Why wouldn't you just do all the things that security experts say you should start with? Now, in a lot of cases, you have to back it off, because, if you start with the most secure version, then suddenly almost everything is impossible. But without some ability to ratchet security down as much as possible, as quickly as possible, without having to think very much, you're always going to have this massive attack surface. One of the best things that tools like Titan can do is help you reduce that attack surface.
Ironically, we're getting to the point of "Oh, wow. We're going to have containers. We're going to have..." and everyone talks about it, but frankly, no one talks about security's role in that world yet, because they're not using it in production yet, so they haven't had to worry about the reality of problems like security. But you're talking about a world where the consequence of a vulnerability often gets much larger, because, instead of having one operating system instance, like the old physical world, or 10, like the virtual world, you have a hundred or a thousand.
Let's say every container that you have up, which is only up for 30 seconds, is a vulnerable container, and it's doing pernicious work. It's only there for 30 seconds. How are you ever even going to figure out that it's doing pernicious work? Where is it logging in a way that you can even figure out what's happening? It's nearly impossible. It's already hard to find the traces of malevolent activity, right? Imagine that in a world where everything is temporary. Everything is essentially going to disappear in 30 seconds.
It's just, if you don't think about security in that world, and if you don't take every opportunity to rely on the expertise of others rather than to try to build it internally, then we're going to be super lost.
Obviously, we haven't even talked about things like IoT, which is such a disaster when it comes to security. It's ridiculous. Some of it's the obvious stuff, like all those operating systems being unpatched and completely vulnerable, but some of it's less obvious stuff, like: for almost anybody who is a super-geek on Internet of Things, you can walk up to their front porch, yell, "Hey, Siri, open the front door," and walk right in.
Of course, the locks they had before weren't hard to pick, necessarily, but it's a little bit different just being able to walk up to somebody's front door and look like you're allowed in. All that stuff... if you could connect that into a platform that knew how to secure things, knew what vulnerabilities existed, had expertise that other people had gone to the effort of building, and just connected that directly into production... if you can build that connected platform, suddenly the whole world looks really different.
And that's got to be the world we're driving to.
Chris Barker: Yeah. I'd say, actually, the Internet of Things is kind of the more terrifying space. I feel like some of it comes down to this: as people who have been working in the virtual world for the last six years or so, building and provisioning infrastructure is painless, so being able to iterate on it is pretty easy. Versus, you're now back to "I have to build something that's going to be shipped on a ship, and there are going to be a thousand of them, and how do I patch it after they've been made and printed?" and all this other stuff, which makes it a terrible process to figure out how to get updates and config changes to it.
At Defcon, there was a talk about how a guy got a solar panel control system that was left with an open VPN connection back to the manufacturer, on the same subnet as all the other solar panel control systems, so he could go from his network to the manufacturer's network...
Chris Barker: ...to someone else's...
Luke Kanies: Every other customer in the network.
Chris Barker: Every other customer.
Chris Barker: And they were like, "Oops. We're sorry. We shipped you the wrong one. That was the dev unit," and they fixed it and they remediated after he had gone through the whole process. I'm assuming this was a public talk at Defcon, so everyone knows about it now. You can find it.
But again, we talk about Agile and DevOps and these quick iterative processes, but then you go to the world of physical manufacturing and you realize that's where we got the ideas from: physical manufacturing gave us the ideas of Agile and Kanban and Lean management, which we're now finally getting adopted in the software community and in the IT community. But the flip side is that their own software practices are still way behind what we're doing right now.
Luke Kanies: Yeah. I think, by 2020, the modern home is going to look dangerously and painfully like a data center around 1999.
Lots of physical infrastructure with direct, poorly written software that's poorly secured. It's all one-off. It's in completely unknown states. You can't update it. No one in that building knows how to upgrade it. You can't do any assessments on it. All this stuff is just missing. It's going to be crazy. And then you're going to have the same number of machines, right? An average data center was, back then, a couple hundred machines. You'll have a...
Luke Kanies: couple hundred IP-connected devices in your home. And when something breaks when it works, it's going to be like magic, and when it doesn't work, you're going to want to burn the damn thing down.
Beth: Yep. All right. Okay. Excellent. Thank you both for your time. I really appreciate it. This was...
Luke Kanies: Thank you for having us on the show.
Beth: This was a really fun talk. Thanks.