Acima Development

Episode 71: Testing in Production



In this episode of the Acima Development Podcast, host Mike and the team—Will, Matt, Kyle, and first-time guest Vignesh—explore the topic of testing in production, using robotics and AI as a compelling entry point. Mike explains how reinforcement learning, while powerful, is limited in robotics due to the high cost and risk of physical failure. However, recent advancements in simulation environments have opened doors for safer learning before deploying robots into the real world. This analogy sets the stage for the broader discussion: the balance between risk and necessity in testing software systems directly in production environments.

The conversation shifts to real-world engineering practices, with Will sharing how his team manages mobile app rollouts using feature flags and staged deployments to control risk. Vignesh and Matt echo the sentiment, highlighting scenarios where pre-production environments can’t replicate real-world conditions, especially when working with third-party vendors, such as payment processors, where true testing isn’t possible until go-live. The team agrees that in these cases, observability, logging, and fast rollback mechanisms are critical. Production testing, when done responsibly, can create long-term resiliency in systems and processes, especially when it’s augmented with metrics, monitoring, and small, incremental changes.

By the end, the group reflects on the importance of ownership and accountability. They distinguish between testing and validation, noting that while testing should ideally be done before production, validation in production is inevitable and often essential. They also acknowledge the contextual nuance: startups may need to move fast, and students may operate in sandboxed environments, but in professional practice, taking testing seriously is a non-negotiable part of responsible development. The episode concludes with a reminder that thoughtful, well-instrumented production testing can be a powerful tool—if you approach it with caution, strategy, and care.

Transcript

MIKE: Hello, and welcome to another episode of the Acima Development Podcast. I am Mike, and I am hosting again this week. With me, as usual, I have Will Archer. I've also got Matt, and Kyle, and Vignesh. Vignesh, is this your first time joining us?

VIGNESH: Yeah, this is the first time.

MIKE: Great. So, Vignesh comes over from the Rent-A-Center side, and we work with him a lot. He's a deep technology expert, been in tech for a long time, and he is a solutions architect. He should provide a lot of expertise, and you can expect to see him more in the future. So, great to have you with us, Vignesh.

VIGNESH: Thank you.

MIKE: Today, as usual, I'd like to introduce our topic by going kind of sideways and talking about something that will get us there. There's an interesting thing that we've seen in the world of AI. We've seen lots of text generation, right? If you haven't noticed, you've probably been living under a rock, but [laughs] there have been some really popular language models over the last couple of years that have made a huge impact. It's been a big thing.

And there's been other AI achievements, audio, you know, you can talk to your smart speaker, and it actually understands you. There's been progress in image recognition, image generation. But you know what there has been very limited progress in? Robotics. There's been some, but it's been fairly limited, and there's a good reason for that. And it's really very simple.

If you want your robot to learn something, it's probably going to destroy itself while it does it. If you've got a robot that's strong enough like an industrial robot that can go and pick up a car, you probably don't want to say, “Hey, just industrial robot, go try and learn some stuff and see what happens,” because it's going to pick up things that are not cars that you don't want picked up, like, maybe itself [laughs]. Or maybe it's going to throw said car, or maybe it's going to use its internal hydraulics to destroy itself. In fact, this is a pretty common problem. And when you've got a million-dollar robot, that's a very, very expensive test.

There are areas of machine learning, and one of them is reinforcement learning. Reinforcement learning is when you try stuff out, see if it works, and then adjust your policy to be a little closer to what works. And, over time, you learn better what works. It's worked really well for games, but it's been very limited in robotics. And it really comes down to it's just crazy expensive. If you've got robots, you don't want to go break them. Robots are expensive, and testing them in production, you know, is the only way to do it.

The only way you can test your robot is with a live robot. Now, that's actually changing a little bit in recent years, like, year, you know [laughs], they've built better simulation environments. If you can build a simulation environment that's really, really close to, you know, the real world and you can build a simulated robot, let it go out there; let it go ahead and destroy things; it’s fine. It's not going to break itself because it's just happening in software. And then, if you have it learn a policy that's pretty good, then you bring it to the real world and let it finish. Now that it knows not to do a lot of horrible things, it will do some reasonable things, and it's just getting a little bit better at them.

So, reinforcement learning is, you know, a big deal. It can be extremely effective. And they're just now getting it into robots because, if you couldn't test it ahead of time, it's just too expensive. It's just too expensive. They do have some lower-priced robots, you know, some people have managed to do it with drones because they're fairly cheap. You can use a small one, and you can put it in a padded room. You can do some things to help keep it from being destroyed, but it's just really expensive.
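As a toy illustration of the reinforcement-learning loop Mike describes, try actions, score the outcome, nudge the policy toward what worked, here is a minimal sketch that learns entirely inside a simulator, where a wrecked robot costs nothing. The SimulatedRobotEnv and the tabular Q-learning setup are invented stand-ins, not any particular robotics framework; only the learned policy would ever be handed to real hardware.

```python
import random
from collections import defaultdict

# Toy stand-in for a physics simulator: a robot on a 1-D track that
# must reach position 5 without wandering off the left edge.
class SimulatedRobotEnv:
    def __init__(self):
        self.position = 0

    def reset(self):
        self.position = 0
        return self.position

    def step(self, action):                    # action: -1 (left) or +1 (right)
        self.position += action
        if self.position == 5:
            return self.position, 10.0, True   # reached the target
        if self.position < -3:
            return self.position, -10.0, True  # "broke itself" -- free, because it's simulated
        return self.position, -1.0, False      # small cost for every extra step

# Plain tabular Q-learning: try, observe the reward, nudge the policy.
q = defaultdict(float)
actions = (-1, 1)
env = SimulatedRobotEnv()

for episode in range(2000):
    state, done = env.reset(), False
    while not done:
        # epsilon-greedy: mostly exploit what we know, sometimes explore
        if random.random() < 0.1:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        next_state, reward, done = env.step(action)
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += 0.1 * (reward + 0.9 * best_next - q[(state, action)])
        state = next_state

# Only the learned policy, not the trial-and-error, goes to the real robot.
policy = {s: max(actions, key=lambda a: q[(s, a)]) for s in range(-3, 5)}
print(policy)
```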

There's a lot that we can learn from that about testing. There are times when you have to test in production. You've got your robot, and you want it to learn there; it's the only place you can. You can't really do it anywhere but the real world. But, man, it is so expensive that almost nobody does it. They don't release it because, without a test environment, you just can't learn.

Today, we're going to talk about testing in production, and [laughs] when we talked about this topic, you know, Kyle said, “Testing in production?” I said, “Never [laughs]?” [laughs] And we got a chuckle out of it. The truth is you wish it could be never, but sometimes you've got that robot you've got to teach, right? There's sometimes when it's your only option.

WILL: I do it all the time. All the time.

MIKE: Do tell, do tell.

WILL: Well, I mean, so one of the main things that I'm doing right now, for a major electronics retailer, is I work on their mobile app. And so, the thing about our mobile app is the release cycle is really long, right? So, I go out. I package the thing up. I send the app out to, you know, the approval board, right? Like, Apple's decently fast these days, but it's still maybe a day or possibly two to get approved, right? With Google, you can usually get it through the same day. And then, it rolls out to the customers, and it rolls out to the customers whenever their phones feel like downloading my update, right?

So, it goes out over the course of, I would say, like, a week or so to everyone, to 100%. And so, commonly, what we will do is we will release new features behind a feature flag, right? Which we can sort of manually turn on, you know, on test devices. And then, we can turn it on for a few people, and then a few more people, and then everybody when we feel confident. In fact, even for regular rollouts of a new version of the app, we'll release it to, like, 1% and make sure everything's okay.
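A rough sketch of the kind of percentage rollout Will describes, assuming the flag settings come from some remotely updatable config rather than shipping with the app. Hashing a stable user ID keeps each user in the same bucket as the rollout ramps from 1% toward 100%; the flag name and config shape here are made up for illustration.

```python
import hashlib

# Illustrative flag config; in practice this would be fetched from a
# remote config service so it can change without a new app release.
FLAGS = {
    "new_checkout_flow": {"enabled": True, "rollout_percent": 1},
}

def bucket_for(user_id: str, flag_name: str) -> int:
    """Deterministically map a user to a bucket 0-99 for this flag."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(flag_name: str, user_id: str) -> bool:
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    # A user always lands in the same bucket, so raising rollout_percent
    # only ever adds users; nobody flips back and forth between variants.
    return bucket_for(user_id, flag_name) < flag["rollout_percent"]

if is_enabled("new_checkout_flow", user_id="user-42"):
    print("show the new experience")
else:
    print("fall back to the existing flow")
```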

Because the other thing about it is we are releasing software on a platform that we do not own. Your phone, that's your phone, and you have a greater or a lesser level of control over what happens with that phone. You know, Apple's pretty locked down. Android, anything could happen. We don't have control over what's going on in your phone. You could be out of disk space. You could be out of memory. Anything can happen.

And so, there's a certain level of uncertainty around any release that we execute. That means we may need to claw it back. It’s easy to release to a production server when it's your server, but it's not my server. And I can only be so confident until it actually runs and everything is good, and it's like, oh, is it good on a Samsung? Is it good on an LG? Is it good on an Apple? You know, because they all rolled their own operating system, and they all have their own problems from day to day. Sometimes we need to claw it back. That's just how it is.

VIGNESH: I don't think we're ever able to say, "No." Having everything tested outside of production, everything done in a [inaudible 07:21] environment before it goes to production, that's impossible. Sometimes you have a lot of vendors. Like, with your own homegrown solution, you may have all the flexibility, but when you are integrating with others, you have to give somewhere. You know, take a payment provider. You never have real card testing, right?

You'll always have a dummy card, dummy things. They’re just a mock-up of data. There's not even running through the true system. They just do a lot of mock-ups on the side. This is the test card. You’re getting this number. I get test card. You return this response, right? So, then you only have option is before even rolling out to everybody as customers, you have to do your validation when you deploy. So, you need to have that production testing for certain cases.

MATT: Yeah. You hit on something really important, right? You need to control your risk, and that's extremely important. So, feature flagging is a great way because you can react extremely quickly, and if something goes awry, you hurry and flip the feature flag off, and you mitigate your losses.

I just recently went through something, in fact, related to what we were talking about with robots and AI, where we had to test in production because we didn't have the data we needed to properly test in a pre-flight or dev environment. So, you do as much as you can to mitigate your risk, and you have to keep a really close eye on it so you can pivot quickly. And I think that's really the most important thing in production testing: you need to watch it extremely closely and be able to react quickly in case anything goes wrong.

MIKE: And you also touched on something there. In all the cases we mentioned, you don't have full control. And [chuckles] that's the reality sometimes, right? Sometimes you're working with a third-party platform you don't control, or you don't control the data source. You have outside vendors. Credit card processing is a great example, Vignesh. It's one I've seen done recently where, yeah, you just can't. You want to tokenize a card to put in a mobile wallet? Good luck doing that outside of production because there is no sandbox credit card processing network [laughs].

MATT: And with large language models, you could test a hundred times, and then it just takes that one request, right? And the model gets confused. It doesn't know how to respond, and you're going to get something totally unexpected. And that's when you have to make adjustments as you go. And I don't think there's really, you know, at this point in time, a way around it, unless you collect, you know, millions and millions of rows of data to throw against it and train on.

WILL: I'll also say it's not all bad. Because one of the things about this sort of, like, heavily feature-flagged environment, when you build that stuff out, is you can ratchet stuff up and ratchet stuff down. Like, an emergent property of those kinds of things is you have fine-grained troubleshooting techniques where it's like, oh, this payment provider, they're not having a good day today. Boom, they're off, right? No service interruptions, no problem. It's just like, okay, those guys aren't getting any of the traffic right now.

Similarly, if I'm having a problem, you know, there's a feature on my mobile app, and, like, some backend, downstream service is just having a day, and I'm just like, well, not today. You know, we're going to turn this thing off. You get everything else, but, you know, this aspect of the app is disabled until we can get it back up. This is one of those things where if you find yourself on shift when somebody else's, you know, experience or a backend subsystem is going wrong, there's only so much you can do. And having, you know, a button, a panic button that you can press, is a really valuable tool to have. And it's something you can get for free by testing in production [chuckles].
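The panic button Will mentions tends to fall out of the same flag machinery. A hypothetical sketch, with invented provider and feature names, of a kill switch consulted on every call to a downstream dependency:

```python
# Hypothetical operational kill switches, flipped from a dashboard or
# config service rather than by shipping new code.
KILL_SWITCHES = {
    "payments.provider_a": False,   # True means "take this provider out of rotation"
    "app.recommendations": False,   # True means "hide this feature entirely"
}

PROVIDERS = ["provider_a", "provider_b"]

def available_payment_providers() -> list[str]:
    """Route traffic only to providers whose kill switch is off."""
    return [p for p in PROVIDERS if not KILL_SWITCHES.get(f"payments.{p}", False)]

def charge(amount_cents: int) -> str:
    providers = available_payment_providers()
    if not providers:
        raise RuntimeError("no payment providers available")
    # Send the charge to the first healthy provider; a real system would
    # also log which provider handled it for later reconciliation.
    return f"charged {amount_cents} cents via {providers[0]}"

# Someone notices provider_a is having a bad day and flips the switch:
KILL_SWITCHES["payments.provider_a"] = True
print(charge(1999))   # -> charged 1999 cents via provider_b
```

The same check works for features: if the recommendations backend is struggling, flipping app.recommendations hides that part of the app while everything else keeps working.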

MIKE: Well, that's really interesting. So, the risk mitigation that you have to do to do something as risky [chuckles] as testing in production, when you're doing it right, when you're not cowboying it, but when you're actually saying, “Well, this is something I have to do, so I'm going to build as much structure as I can to make this as safe as I can,” is not single use. Having that system of feature flags actually builds resiliency into your system that can protect you operationally. So, when third-party service X goes down, well, you've already developed the ability to deal with that.

WILL: Exactly.

MATT: And we said, how often do we want to test in production? And everyone says, “Never.” You can also argue always.

WILL: Right.

VIGNESH: I don't think never is the right answer [laughs].

MATT: Well, right. And I would argue you always do. Yes, you want to go through a QA process. You want to do developer testing. You want to throw as many scenarios at something as you can prior to releasing, but you always want to go test and validate in production, post-release, always.

WILL: Yeah, that's the final...like, I think, informally, everybody does that, where you could have done every single possible peer review, code review, you know, linting, QA process, but I always check it. It’s like, okay, we rolled it out. I'm just like, okay, I'm just going to run one.

[laughter]

VIGNESH: Unless, you know, like the point you mentioned, I have a percentage rollout, rolling out to 1% or 10% of customers and monitoring the logs. So, if something goes wrong, then I roll back. But you have to sit through that time, too, that rollout process. And most systems aren't built that way. Rolling out, like, percentage-based, it's very few systems right now, and few companies do that. Definitely, you need to validate yours in production, not just say, "Hey, it's going to work out." And a lot of times, you know, that's not a true statement.

MATT: Right. And if you don't, there's a really big cost because bugs go unnoticed and unreported, because customers may be running into them and just bouncing, and that whole time you're losing money. Whereas if you had gone and validated in production and caught it early, you would have saved the company and yourselves a lot of time, and money, and effort.

MIKE: So, monitoring, instrumentation, visibility, that's how you make it work, is what I'm hearing. Kyle, you might have some opinions on that [laughs].

KYLE: Yeah, I was just sitting here thinking, so it kind of seems like we can't, or we shouldn't, be doing any of this testing in production if we don't have the observability behind it, if we don't have monitoring, if we don't have the logging, because you want metrics. Because it sounds like, you know, feature flags and canaries, sorry, canary deployments, are kind of what we're talking about, along with doing a subset of data. And we need to know how each of these different systems is performing.

So, more than any other environment, it sounds like it's more important just to be able to have that kind of information for these testing scenarios, especially with the idea of load testing, too, because you don't get a more realistic scale of your services than load testing with real user data.
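One way to make that concrete: a canary rollout is only as good as the comparison between the canary cohort and the baseline on the metrics you care about. A rough sketch, with invented thresholds and assuming per-cohort request, error, and latency stats are already being collected:

```python
from dataclasses import dataclass

@dataclass
class CohortStats:
    requests: int
    errors: int
    p95_latency_ms: float

    @property
    def error_rate(self) -> float:
        return self.errors / self.requests if self.requests else 0.0

def canary_looks_healthy(baseline: CohortStats, canary: CohortStats,
                         max_error_ratio: float = 1.5,
                         max_latency_ratio: float = 1.2) -> bool:
    """Promote the canary only if it isn't meaningfully worse than baseline."""
    if canary.requests < 500:
        return False  # not enough traffic yet to judge either way
    error_ok = canary.error_rate <= baseline.error_rate * max_error_ratio + 0.001
    latency_ok = canary.p95_latency_ms <= baseline.p95_latency_ms * max_latency_ratio
    return error_ok and latency_ok

baseline = CohortStats(requests=100_000, errors=120, p95_latency_ms=310.0)
canary = CohortStats(requests=1_200, errors=2, p95_latency_ms=335.0)

print("raise rollout percentage" if canary_looks_healthy(baseline, canary)
      else "hold or roll back")
```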

VIGNESH: Yeah. I mean, nowadays, with the microservice model and a lot of this new tech stack, which we will [inaudible 15:27], even though it's our environment, we don't technically own a lot of things, you know, behind the scenes. Observability is the key. Nowadays, even before you start working, the first thought that should come to mind is, how do I monitor this one? You know, what are my criteria, my failover scenarios? But I won't say 100% of the time, as Matt said it. We don't need to test everything in production, but there are certain areas you definitely need to validate in production. Otherwise, you know, you may end up taking on risk, especially with, like, payments or some decision engine, something where you have to run a dry run to make sure, hey, it's working, [inaudible 16:07] connection.

Sometimes I've seen deployments where people think it works, and someone else deployed an hour before, especially when you are doing cross-team application deployments, where it's even more critical that your application connections are correct. If you think, I did it; my application is working; there are no logs, and you walk away, then the other team's deployment goes out, and it's broken. One time I saw an issue like that last six hours. So, that's where taking five minutes just validating, either directly or through observability, I mean, yeah. I said two ways, right? One, you do real production testing; the other is observability, which is the key point.
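A minimal sketch of the post-deploy validation Vignesh is describing: right after a deploy, hit a few health and dependency-check endpoints and fail loudly instead of assuming every connection is wired up. The URLs are placeholders; a real check would use whatever endpoints the service actually exposes.

```python
import sys
import urllib.request

# Placeholder endpoints; substitute whatever health and dependency
# checks the deployed service actually exposes.
SMOKE_CHECKS = [
    "https://example.internal/healthz",
    "https://example.internal/readyz",          # checks downstream connections
    "https://example.internal/payments/ping",   # verifies the vendor link is up
]

def run_smoke_checks(urls, timeout=5.0):
    all_ok = True
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                ok = 200 <= resp.status < 300
                detail = str(resp.status)
        except OSError as exc:   # connection failures and HTTP errors
            ok, detail = False, str(exc)
        print(f"{'PASS' if ok else 'FAIL'} {url} ({detail})")
        all_ok = all_ok and ok
    return all_ok

if __name__ == "__main__":
    # A non-zero exit lets a deploy pipeline halt or roll back automatically.
    sys.exit(0 if run_smoke_checks(SMOKE_CHECKS) else 1)
```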

MATT: Yeah. I mean, it's critical. And when I say 100% of the time, I do mean it if we're talking about changes in business logic. If I say, go make that border a different shade of blue, it's probably okay not to go do production testing. But if you're changing business logic, I think it's really important, especially when there's the possibility of a lot of edge cases that just aren't going to get caught in quality assurance testing and dev environments.

VIGNESH: For example, this Monday, we are going to release, not this Monday, the following Monday [inaudible 17:30]. We said we have to test the program. We don't want to just say, "Hey, all the services are deployed and working. The customer is going to validate it." No, we want to validate before we even say it goes on the calendar: the new features, the new integrations, the new business logic. It's worth testing it rather than just assuming it will work.

MATT: Yeah, absolutely.

KYLE: There's a subset of testing, right? As much as we would like to test everything, there's RUM, real user monitoring, right? And we can't get RUM without actually giving it to the customer, to the user. And, you know, as much as any developer would love that people use the product the way that we intended them to, they're always going to use it outside those bounds, be it a large percent or a small percent. It's not going to be used quite as we expected or hoped some of those times.

MATT: Yeah, that's really hard to predict, too. I mean, user behavior, you know, psychology, and you can't always predict everything somebody's going to try to do.

MIKE: So, it's interesting. We started, and we're still, talking about testing in production, and it's almost 100% come down to validation and observability, mitigating risk by making sure you're watching it, doing slow rollouts. There's process around this risky thing you need to do that you can use to make it work. It's a fascinating idea, what do I say, and we all arrive at the same place. You make this work by being as careful as you can, and you only do it when you have to, but in some ways, you're always doing it.

When can you avoid it? When can you avoid testing in production? Because there are a lot of times you think, well, I have to. So, let me give an example. I'm going to be a little bit vague here because I'm not going to go deep into the business. But sometimes there are third-party providers that have sensitive data. You think about, like, credit reporting agencies, for example. They have a contractual obligation that you will not give them test data because they don't want to have their data polluted.

So, if you try to test with them in production, you're violating your contract, and that's not a good situation, right [chuckles]? So, there are cases where, well, you cannot, and in that kind of case, well, yeah, I'm not going to test in production because I can't. I'm not allowed to. My contract says, "No," so all I can get is observability.

But there are other cases where like, well, yeah, I could probably come up with a test for this, but it'd be easier just to roll it out. And this happens especially at small companies or new projects, things that are pretty small. You're like, well, yeah, I don't have to set up my test environment. I can probably just roll it out there. What should that boundary be? Are there times when you really can get away with just putting it out there and seeing what happens?

MATT: Yeah, I think so. Something you have to keep in mind is how sensitive is the data that you're dealing with, right? Are you dealing with people's financials, health information, things like that, where it's critical that it remains secure? And then, there's other things that you can just throw out there and see how it goes.

I think, you know, we went through a phase here, I don't know, 10 years back. It started roughly with the whole gig economy thing. I think that's a perfect example. Everyone, not everyone, but a lot of people were out there trying new ideas and things, trying to be fastest to market with this gig economy and just throwing a bunch of stuff at the wall to see what sticks. And you had to move fast, or you got left behind, and then you find your big players in that market, right? DoorDash, perfect example, Uber Eats. You know, there were a million food delivery services, and the survivors are the ones that moved fast to market and spread quickly.

MIKE: Well, did they get there by having lax testing processes though?

MATT: No, I don't know that that's the case, but I think they were a little looser and more liberal with the features they were rolling out.

MIKE: So, there's a difference there, right? Say, “Well, I'm going to test this out. I don't know how this is going to work from a business use case. And I don't know that we need to architect this really well because we just need to trial it.” But you probably want that feature to work and not damage your customer relationship by always just having broken code.

MATT: Sure.

MIKE: So, there are some nuances there I'd like to tug at a little bit. So, I agree, if you're a startup, getting out to market is your life, right? You do not survive as a business unless you're trying stuff out and, just as you described, getting out products that aren't complete, that maybe have some weirdness to them, right? But it's another thing to say, well, I'm just going to throw out garbage, and I'm not even going to bother testing, just throw out whatever, because that might not actually be faster.

MATT: Oh yeah, I mean, you always pay the price, right? With the debt you're creating, with your relationships. And if anyone's just pushing out garbage, they probably shouldn't be in the business, honestly. You always want your stuff to work. And I don't know that in my entire career I've ever actually pushed anything that wasn't at least manually tested. You know, prior to the days of TDD, we did that all the time, right? But you test your happy paths, maybe an edge case or two, but you don't test everything.

I worked at a startup and was kind of the lone wolf for quite some time just due to resources and funding, where the CEO would just come in my office and say, "Hey. I want to do this thing. Can you have it ready by tomorrow?" And I'm thinking, well, that'll take three months, four months to build, but I guess [laughter]. And you prototype something, and sometimes that prototype is good enough to get out in the wild and just see how people respond to it, right? It's not perfect. You're going to run into issues, likely bugs, but it's good enough to test the market. And that's kind of where I was going with that.

MIKE: And you hit some important stuff there, that you care about validating, right, about making sure your system works, and any responsible engineer is going to care about that. On the flip side, there are going to be business needs, whether that be internal business needs or external ones, right? Internal ones say, “If we don't get this out tomorrow, we lose a client.” External ones say, “You cannot contractually test this,” or “We don't have a sandbox. Sorry. We have to test this in production,” you know, the credit card network, for example, that force you to test in production. And that's the reality of it [chuckles]. That's the reality of it. There's sometimes you have to. But you talked about debt. Every time you cut corners, you make things a little harder for next time.

MATT: Absolutely. Absolutely. And that's the price you pay to move fast.

KYLE: I wonder, too, how much has this faster movement affected testing in production? Like, when you've got a release cycle that is, you know, once a month or every two weeks or whatever, how is your testing methodology in production changing compared to if you can release to production 10 times a day? You know, are you making...I would hope at least that we're shipping out code that's easier to test and doesn't need as much testing, and that the changes are going to be much smaller and lower impact in that case. But are we less cautious because we can iterate so fast?

MIKE: One thing...we've talked about this before. The larger your release is, the more complex it is. And it very quickly becomes impossible to truly test every path through the code, just mathematically [chuckles].

KYLE: Too big of a matrix.

MATT: Yeah. I mean, that's why, you know, moving into the world of CI/CD is so important. You are mitigating so much risk versus waterfall style, where you're releasing, you know, 20 features in a single release. You find a problem, and those problems sometimes are extremely hard to track down, whereas when you're releasing a small incremental change, it's really easy, there's less impact, and you can react way more quickly.

MIKE: You did kind of hint at something there, Kyle, about becoming complacent. And you go, wow, these have always worked, you know, I'm really good at pushing out these small releases. I always test them. Maybe I forget to even think about validating it. And you knew it was important, but you just got so good at your process that you started missing parts. And you start believing that it's going to save you every time.

MATT: Yeah. And that's dangerous because you don't think about downstream effects and impact, right? You think, oh, well, this small thing I did is isolated here. But that change could impact things further downstream and cause real problems. And we've all seen that before.

MIKE: Yeah. So, the rule is: while CI/CD is huge for reducing risk, it's just reducing risk. It doesn't eliminate risk. And all the practices we've talked about, particularly the validation and having all of that instrumentation, whatever it is you do to make sure that it works, which may just be going in and testing it, are vital. That never goes away.

MATT: Yeah. I mean, we can get close, but we're never going to be 100% of anything. It's like solving for pi, right? The farther you go, the closer you get, but you're never actually going to solve all those problems. And you run into that a lot with machine learning and artificial intelligence. And sometimes the closer you try to get to that 100%, the further off it pushes you because of the impact downstream that you just aren't seeing at the time of change.

MIKE: So, we've talked about some good practices, about times when you need to go there, and about times when, you know, you can't. But we haven't talked that much about when you don't. And we've talked about, you know, yes, you're always testing in production. You've always got users who are using your stuff differently than you expected. So, I'm going to ask a simple question.

So, I mentioned it before, you know, when do you test in production? Well, you don't, yeah, never. Of course, that's not true, as we've talked about thus far. But where does that sentiment come from? Why would we have the gut reaction, well, no, you don't test in production? Why is that the case? And what are we actually trying to say?

MATT: I think the reaction comes from data. You're muddling your data. When you test in production, you're not using real data. Your reporting can get thrown off, things like that, and, you know, the risks you talked about before, contractual stuff, compliance issues, those types of things. So, maybe we think about things a little differently, and we talk about testing versus validation, you know, actual testing. Your tests should be written; you're trying to get as much coverage as you can with written tests, with quality assurance, developer and peer testing, before you send it to production. Once it's there, you want to validate. Sometimes you can't, with reliance on third parties that you can't control. And that's when you're kind of forced to really test in production versus just do validation.

MIKE: I'm going to go back to the industrial robot thought, right? Let's say you've got a million-dollar robot. If you had a reasonable simulation environment, would you test in the real world without testing in the simulation environment first?

MATT: Absolutely not.

MIKE: It would be just burning money.

MATT: Yeah, and irresponsible.

MIKE: Irresponsible.

KYLE: But there's also a cost to getting that simulation environment.

MIKE: Exactly [chuckles]. That simulation environment is really expensive, and there's companies that have been spending a decade or more working on building that sort of thing. Companies like Nvidia who have a vested interest in having more companies using their product are trying to provide an open environment for people to test in because it benefits them. And so, they put millions, a billion dollars, you know, just vast sums of money into developing that because it is so expensive. Individual companies are going to have a really hard time doing that on their own.

MATT: Yeah, and it's like we talked about with the debt you're creating by moving quickly, right? The cost over time may be much greater than the cost you front-load in setting up that environment. It depends on what you're dealing with. You have to do the math and say, is it worth it?

MIKE: That's a very conscious decision. You say you do the math [chuckles]. You don't just kind of roll with your gut and say, “Ah, it'll be fine,” right?

MATT: Yeah.

MIKE: You look at the value of what you might damage, your relationship with your customers, financial information, you know, loss of information, and, you know, these extremely...well, I'll say particularly financial information, right? I mean, it's just not something you ever tinker with. The risk is too high. It's irresponsible to do anything with your customers that is not extremely well-vetted. Or even more, what about medical hardware?

MATT: Right. When you're dealing with people's lives.

MIKE: Yeah. You know, you test that in every possible way that you can before you even consider going into production. And where the risk is high, where the cost of the risk is high...it's not just financial cost but however you define that cost, because, you know, people's lives, that's a high cost. When the cost is extremely high, then you're careful. You're careful.

MATT: Yeah. Use good judgment. I mean, everything isn't black and white, and not everything is objective.

MIKE: To flip it around, we said, well, there's times you don't, and we've also talked about low risk. And this is, I think, maybe where testing in production gets its reputation from. You've got students in school. They're writing the code for their project. And a lot of them aren't going to have a test environment. The risks are so low, right [chuckles]? Do I pass the class or not? You know, do I pass this assignment or not? And the cost of building a test environment is high enough if it's a small project, right?

Rationally, they look at the risk-reward and say, you know what? My test environment is the TA [chuckles]. I will be testing against the TA, and if they can run the code, then that is passed. And that's a valid test environment in that scenario. I could argue against it, and say, well, students should be learning to write tests. Well, they should, but if the system doesn't incentivize it, then they're going to make the rational choice [chuckles], which is don't do the extra work because I have a tester. I have a test environment built. In fact, they probably built one in the class that's going to test this for me.

MATT: Yeah. And, in that way of thinking, you're putting the reliance on somebody else, you know? Just a little bit of advice for any listeners: have some ownership; have some accountability and try and do as much of this stuff yourself, and don't always rely on somebody else to cover you.

MIKE: It's a sandbox, right? School, or, you know, bootcamp, or, you know, your own test work, your own learning. It's all sort of a sandbox. But in the real world, testing matters [laughs], and not ever relying on somebody else is one of the things...now, let me state that a little differently. We rely on each other heavily, and it's the only way you get software made. But not putting the expectation on somebody else where you could reasonably be expected to do it yourself is the way that you end up building good software and moving ahead in your career, because you care.

MATT: Yes, and good relationships with those around you. In this case, don't ever think it's not my job because it is.

MIKE: And that may be a good way to tie this one up [chuckles]. It is your job. All of us, you know, we've talked a lot about the exceptions, but in the end, in almost all cases, it's your job to validate before it goes into production and after it's gone into production. You've got to do both. And owning that, knowing that you, you know, this is something that you built, and it's your, what do I say, stewardship, you know, it's something that you take personally, is what's going to lead to the craftspersonship, where you truly become an expert who can build great things.

MATT: It's my name on my commits.

MIKE: Yeah. Well, thank you for listening. Thank you for listening. We've covered some good stuff, and it's not one of our longest sessions, but it really pointed out the critical importance of visibility, monitoring what you're doing, testing it as early as possible in the process, which may be in production in some cases, but doing everything you can to mitigate that risk so that you treat your customers and their data with respect.

And until next time on the Acima Development Podcast.
