By George Stocker
The podcast currently has 11 episodes available.
Buy the book here: https://gum.co/dont-say-that/podcast-special
Michael's pluralsight courses here: https://www.pluralsight.com/authors/michael-callaghan
Rough Transcript (powered by Otter.ai)
George Stocker 0:00
Hi, I'm George Stocker, and this is the Build Better Software Podcast. Today I have the pleasure of talking with Michael Callaghan, lead software engineer at Walt Disney World, and I want to welcome you to the show.
Michael Callaghan 0:11
Thank you, George, happy to be here.
George Stocker 0:13
So for those of us who may not know about you or what you do, tell us a little bit about yourself.
Michael Callaghan 0:19
Where can I start? I am halfway through my third decade of professional software development. It started way back in the ninth grade at Bowie High School, when the data processing teacher (we actually had that class) took pity on me and allowed me to use her dumb terminals in the classroom after school to teach myself BASIC. That led to a love for computers and software that never really waned, even though it was about 10 years after graduation before I got my very first paid software development gig. I even got burned out in the mid to late 2000s and didn't work in the industry for three years. Fortunately, that changed, and I'm now in my 10th year at Disney with Disney Parks, Experiences and Products, where I build what we call cast-facing web applications.
George Stocker 1:28
So, applications for the internal employees that work at Disney?
Michael Callaghan 1:35
Correct. As you may or may not be aware, Disney Parks refers to their employees as cast members, because the entire place, if you will, is a metaphor for an ongoing show. Even us: we're called back-of-house cast members, because we're never on stage.
George Stocker 1:53
And now you have a book that just came out, which I had the privilege to read. It's called "Don't Say That at Work". Tell us a little bit about that.
Michael Callaghan 2:00
What can I tell you about that? As you can probably imagine, if you've done anything for any length of time, you're going to make a lot of mistakes. Hopefully you recover from those mistakes and learn from them. This book is about some of what I consider the more egregious errors that I've made over my career, and in some cases, mistakes that somebody else made or things that I've observed. I just decided to put them down in essay form, came up with 20 topics, and published the book. So far, it's been well received.
George Stocker 2:36
Now, before we dive deeper into your background, I want to dive a little bit into the book. In the book you talk about not only mistakes that you've made, but also things that both software engineers and software leaders should be aware of. And you have a story in it about one of your bosses. Can you go deeper into that story?
Michael Callaghan 2:57
I mention a few different bosses in the book. Which one are you talking about in particular?
George Stocker 3:03
It was a boss that was not altogether truthful.
Michael Callaghan 3:09
That was a fun experience, because that was very early in my career, and I was still naive, wet behind the ears, whatever phrase you want to use. I never had a college degree, at least not at that point; I was a University of Maryland computer science dropout, twice. So when I got my very first software development job in 1995, I felt very fortunate that someone was willing to give me a chance without a degree. That did not turn out too well. Then I got my second job, and that was this particular boss. Not only did he not give me the job that he hired me to do, which was that of a Macintosh developer (and yes, I was a Mac developer before it was cool, back when we used Pascal), but a few years into the job, I want to say about a year and a half, maybe two years, he asked me to falsify my resume. What he would do was send resumes of his employees when he would bid on a job. We were an independent software development shop, and he would go and bid on different development projects, bring them back in house, and then he would manage the project. This particular client wanted only college graduates to work on their project, and that's their prerogative. I didn't have a degree. When I pointed that out to him, he did two things very quickly. One, he got annoyed with me for not having a degree, even though he knew that. Then, second, he went ahead and modified my resume to say that I had a computer science degree when he sent it to the client. As you can imagine, I didn't take that very well. But this is my boss, this is my livelihood; what can you do about it? Eventually, I decided that I couldn't in good conscience keep working for this guy, so I started looking for other jobs. I went ahead and submitted my resignation, turned over the key to the office, and walked out the door, essentially.
George Stocker 5:16
But that's not the end of it, is it?
Michael Callaghan 5:17
It is not.
You have read the book. So, right after I resigned, I thought we were on pretty good terms. He sent me an email that said, hey, would you mind signing this affidavit? I just need something for the record saying that you officially quit, that you don't have any company property, and that you're not going to solicit any of our clients or employees to try to poach them. I was good with that; I looked through it, and there didn't seem to be anything scary in there, so I signed it and sent it back. A day or two later, I was cleaning out my desk at home, my work-from-home desk, and I found a couple of CDs that obviously belonged to my former employer. So I sent off a quick email to him. I said, hey, I've got these CDs; I must have overlooked them. If you want, I can bring them by the office sometime, put them in the mail, whatever you want, or set them aside. I didn't think anything more about it. That Saturday, I got a priority overnight FedEx letter from his attorney, accusing me of stealing not only the CDs, but also source code, and informing me that I was now the subject of both civil and criminal investigations.
George Stocker 6:31
And so at that point, how are you feeling, to get that letter? Petrified?
Michael Callaghan 6:38
Absolutely terrified. And here I am, I've got a wife and a newborn; I think my son was about 18 months old, maybe close to two years. And here I am being told that I'm going to be arrested and thrown in prison, because I committed perjury by saying that I hadn't kept any company property.
George Stocker 6:57
But you did the right thing, in that you engaged a lawyer. For anyone that ever gets in this situation: talk to a lawyer before you do anything. And you talked to a lawyer?
Michael Callaghan 7:07
I did talk to a lawyer. But keep in mind, it was Saturday. There was no Google; there wasn't really much of an internet in 1997, to speak of. So there wasn't a lot of research I could do. I couldn't go to a website and ask questions on, you know, legal online forums. I had to go to the Yellow Pages for New Hampshire, find a lawyer pretty much at random, and wait until Monday. So I had to wait two whole days, not knowing what was gonna happen. And then Monday morning, I called someone that I had foun...
Tess on Twitter:
https://twitter.com/_tessr
George Stocker 0:00
Hi, I'm George Stocker, and this is the Build Better Software Podcast. Today I'm joined by Tess Rinearson, and we're here to talk about engineering and engineering leadership. Tess, welcome to the show. Hi, thanks for having me. Thank you for being here. Now, for people who don't know who you are or what you do, could you tell us a little bit about yourself?
Tess Rinearson 0:19
Yeah, absolutely. So I am the VP of Engineering at a small blockchain company based in Berlin, which is called Interchain GmbH. GmbH is like LLC or Inc, but in German.
And I've been working in engineering management at blockchain companies for the last while. I've been in the blockchain space for the last five years or so, and for about the last three years I've been in management positions. Before that, I was a software engineer at Medium, and before that I was a computer science student.
George Stocker 0:54
Okay, now let's start backwards. You're working in the blockchain space now. As someone who's never been in the blockchain space and doesn't really know what that's like: describe developing new products in that space.
Tess Rinearson 1:09
Yeah, totally. Um, it's a really fun space to work in, because lots of things are new, and because most blockchain projects have a bit of a unique funding model, where not necessarily all of the funding, but often a lot of the funding, comes from token sales. And so a lot of the people who are invested in your product are also your users, are also regular people in a lot of ways. So there's a different kind of product development and funding cycle from a lot of other software tech companies, I would say, and I think that gives you a lot of latitude and also a very community-oriented way to do your work when you're building software.
George Stocker 2:01
And you're in a nascent market. I mean, we don't know the full potential of blockchain; we haven't seen it fully realized yet. So how do you, as a leader, work within that largely unexplored territory?
Tess Rinearson 2:18
Yeah, totally. That's a really good question. So when I got started in the blockchain space, it was 2015, and I joined a tiny blockchain company which is called Chain, because this was early enough in the blockchain lifecycle that you could actually call your blockchain company something like that. And Chain worked on a really, really wide range of products, built first on Bitcoin and then on our own blockchain product, and some stuff with cloud blockchains; it explored a really, really wide range of products that you could build with blockchains. And none of them really stuck. The team ultimately got acquired by another blockchain company, which is a whole other story. But anyway, what I learned from that experience was that even though I am personally super, super excited about the technology behind blockchains, I don't really think I'm the person who's going to build a great product from it, at least not a great end-user product. And it's funny, because I think that is kind of the thing that needs to happen for the blockchain industry to really advance. When we finally have something that is used by regular people, by everyday people, not just by crypto nerds, I think that will be a huge milestone for the whole industry. But I'm not sure that I am the person to work on that particular effort. What I realized is that I'm really excited about the infrastructure and kind of the lower-level stuff. So what I work on now is mostly a consensus algorithm, which is basically one of the key building blocks for building blockchains, no pun intended. It's just a lower-level thing, and so most of our users are other engineering teams that are also in the blockchain space. Most of them are building products that can be used by end users, or again by non-blockchain teams.
So I'm kind of one step removed, actually, from the product process. And so what we think about, really, is just: what do we need to do to make this tool, this piece of infrastructure, as stable as possible, as secure as possible, as standard as possible? "As standard as possible" is actually a really big thing, because there are so many things you can explore and do and innovate on in the blockchain space; we want to keep everything that we're not trying to experiment with as standard as possible. So that's something else that we've been really driving towards this year: trying to standardize our own pieces as much as possible. So yeah, again, just trying to build really good software for other engineers, fundamentally.
George Stocker 5:09
Okay. Everything I know about blockchain, I learned from the show Silicon Valley, so I probably know next to nothing. What is a consensus algorithm, and how does it work?
Tess Rinearson 5:19
Yeah. So, this is one of my favorite subjects, actually. If I get too deep, please cut me off. But basically, consensus algorithms let a group of computers come to consensus on a value, right? And you can do this in a centralized way, in a really easy way, where you say, okay, one computer is the canonical source of truth, and if you need to know what value you should have, just go ask that computer. But for most blockchains, or really for most open blockchains on open networks, where anyone should be able to join and it's truly meant to be decentralized, you don't want to have one machine that is the only source of truth, because that's very vulnerable to malicious behavior from that single entity. So broadly speaking, consensus algorithms are how you get a bunch of computers to come to agreement on a value. But in the blockchain space specifically, we really focus on consensus algorithms that are Byzantine fault tolerant, which is a term that means able to withstand behavior from the machines where they may tell part of the network one thing and another part of the network another thing. Some people say malicious behavior, which is roughly correct. You want to be able to protect against that in a blockchain network, because if you let anyone join, you're sort of inviting potentially bad actors into your system. You need to be able to withstand a certain amount of that kind of malicious or inconsistent behavior.
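The fault budget Tess describes has a classic numeric rule behind it: a network of n voters can tolerate at most f Byzantine ones when n >= 3f + 1, and a value is safe to commit once strictly more than two-thirds of the voting power agrees on it. A minimal Python sketch of that rule; the function names are illustrative, not taken from Tendermint or any real consensus library:

```python
# Sketch of the classic Byzantine fault tolerance thresholds.
# These helpers are hypothetical, for illustration only.

def max_faults_tolerated(n: int) -> int:
    """Largest f such that n >= 3f + 1 still holds."""
    return (n - 1) // 3

def is_committed(votes_for_value: int, total_voting_power: int) -> bool:
    """A value commits once strictly more than 2/3 of the power backs it."""
    return 3 * votes_for_value > 2 * total_voting_power

# With 4 validators, the network can tolerate 1 Byzantine node...
assert max_faults_tolerated(4) == 1
# ...and 3 of 4 votes (75% > 2/3) is enough to commit a value,
assert is_committed(3, 4)
# but 2 of 4 (exactly 50%) is not.
assert not is_committed(2, 4)
```

This is why many BFT networks describe their safety margin as "up to one-third of validators can misbehave."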
George Stocker 7:08
And because the work you're doing is, like I said, largely unexplored, you're relying a lot on theoretical papers and computer science research. How do you find ways to learn more about this topic? How did you level up to where you are in understanding how blockchain works?
Tess Rinearson 7:34
Yeah, totally. So we actually have a sister company that has a lot of researchers in it, and a lot of those folks have an academic background. They largely are the ones who design the, what's the right way to say this, the protocol, and especially the finer details of the protocol. They do a lot of that design work, and they actually also are in the process of formally verifying it using TLA+. So TLA+ lets you kind of write out your specification...
Bruce's site:
https://productculture.org
Bruce's book:
Product Roadmaps Relaunched
The product Scorecard:
https://walkerux.wordpress.com/2017/08/21/notes-on-bruce-mccarthys-prioritizing-product-goals/
Transcript (powered by Otter.ai; please let me know about any discrepancies!)
George Stocker 0:00
Hello, I'm George Stocker, and this is the build better software podcast. Today we're talking about product roadmaps with Bruce McCarthy. Bruce, welcome to the show.
Bruce McCarthy 0:09
Thanks, George. Nice to be here.
George Stocker 0:11
Now for people who may not know who you are or what you do. Can you tell us a little bit about yourself and your work?
Bruce McCarthy 0:17
Sure. The way I always introduce myself is: I'm a product guy. I've been a product manager, a chief product officer, an engineering manager, a design manager, a business development guy, marketing, sales. I've done all these different things, agile enablement and whatever, but I always come back to my roots as a product guy. I like building stuff. I like solving problems. I like getting the team together and working on: all right, how are we going to tackle this, folks? How are we going to make things better for the customer and for the business? So I always kind of go back to that product leadership kind of role, and these days, for the past seven or eight years, I've really been teaching and workshopping and coaching teams on how to do that stuff. After a long career of being fairly successful at it myself, I felt like I had learned the ingredients and how they fit together properly. And so I do workshops, public ones that you can buy tickets to, and private ones for companies. I have an invitation-only forum for chief product officers, where we get together and workshop each other's challenges. And I do consulting and speaking at conferences and things like that. I also wrote a book that I think you've seen, called Product Roadmaps Relaunched: How to Set Direction While Embracing Uncertainty. It came out from O'Reilly a couple years ago, and it's become kind of the standard book on product roadmapping.
George Stocker 1:58
And you brought up solving problems for customers, and I'm glad you brought that up. Because, at least in my career, when I've seen product roadmaps, they say: we need these particular features at this particular time to grow, and to get this particular outcome. It was laid out in calendar fashion, with exactly what features the product team needed to build and what they needed to do. How does how you view a roadmap differ from that?
Bruce McCarthy 2:23
Well, I kind of think that that, in my mind, old-fashioned view of a roadmap gets teams into trouble a lot. Number one, it gets them into trouble in terms of broken promises. They are constantly finding that their dates were over-optimistic, and so they're constantly feeling behind. Also, those kinds of optimistic roadmaps don't take into account the stuff you've got to do to keep the old things, the features you shipped last year, still working for clients and updated and free of killer P1 bugs and so on. And it doesn't take into account shifting priorities. You might make up a roadmap at a point in time, say just before the sales kickoff in January of a given year, and it might go for a whole year or even longer. But your process as a product person, of learning what's going on in the market, doesn't stop and end there. Even if you've done a ton of research, and you think you know exactly what's right on January 10 of that year, on January 11 someone's going to come to you with some new information, and you're going to be like, huh, I wonder if that really should change our priorities? And maybe on January 11, you're not sure yet. But by February 10, you're probably like, yep, you know that thing we were thinking about for the end of the year? It no longer seems as important as this other thing that's just clearly becoming a theme among our customers. Or you've got to respond to competitive pressures that are new and unpredictable. So this idea of committing in advance to exactly what features we're building on what dates is kind of a doomed effort, because you're going to change your mind, and you're going to find that some things take longer than you expected them to. So my approach to roadmaps is to admit that upfront, and to have a regular process of updating the roadmap every month or every quarter with the latest information, and to say upfront: this is not a commitment. In fact, our confidence in anything beyond this quarter is increasingly low.
Some teams I work with actually publish a percentage of confidence on each item in the roadmap, or each timeframe in the roadmap, say quarters or something like that. That goes down to something like, you know, four quarters out, it goes down to: we're at 20% confidence that this is actually what we're going to be shipping at that point in time. But there's one more critical point that I really want to hit on, aside from unpredictability. And that is that most roadmaps, forgetting about the time commitments, are, as you said, a list of features. They're a list of things we plan to ship, changes we plan to make. And those commitments to features and changes and tweaks and redesigns, or whatever, are made well in advance of actually opening up the code and digging into it, or testing the idea with customers, or producing a design and seeing if it works. Those commitments are made really prematurely. If you've got a problem to solve, and you think you've got a good idea for a feature to solve that problem for the customer, like make them more productive or something like that, you really can't know in advance whether that feature will effectively solve that problem, or whether that feature is the best way to solve that problem. You can measure it after the fact. But what if it turns out that you were wrong? What if you ship feature X because you're sure that it's going to raise your conversion rate or your retention rate or something like that, and you find out it doesn't actually do that at all? Or it does, but not half as much as you need to meet your business goals. What are you going to do? You're going to go back to the drawing board and come up with another feature, or another idea. And if it's six months before you ship that, well, that's a really slow way to improve your business, right? So instead, in the book I describe a different way to approach the core content of a roadmap, rather than it being features.
The core content of a roadmap is problems: problems to solve, or customer needs that are currently unfulfilled or under-fulfilled. And so, you know, you would actually put on the roadmap customer productivity, or some more specific example like that, something you could measure, like increasing their output.
George Stocker 7:17
Conversion rate for checkout, or...
Bruce McCarthy 7:21
Yeah, if it were an e-commerce site, for example, that's a good example. For e-commerce websites, one of the biggest banes of their existence is abandoned carts. People pick out a product, put it into their shopping cart, and then never check out. And it's hard to know why. But let's say that was the problem you're trying to solve: low checkout rates, or abandoned-cart rates that are too high. Well, there are a variety of things that you might try to fix that problem. Maybe the problem is your checkout process is too confusing, and so you might redesign it. Or maybe it's too long, there are just too many steps, and people get tired and abandon partway through. Or maybe it...
Open Source initiative: https://opensource.org
Transcript (Provided by Otter.ai) (Please reach out with corrections):
George Stocker 0:00
Hi, I'm George Stocker, and this is the Build Better Software Podcast, the podcast where we talk about the issues that will help software teams build better software. Today, I'm answering the question: should your team adopt that open source library?
Now, for the purposes of this episode, I'm going to use the Open Source Definition provided by the Open Source Initiative, at https://opensource.org.
There are two main divisions: there are copyleft licenses, which require that derivative works use the same license as the original work, and then there are permissive open source licenses, which are anything else that's not a copyleft license. A permissive license basically allows you to do anything you want with the software. Again, I'm not a lawyer; this is not legal advice, just my understanding of it. So, back to the question: should your team adopt that open source library?
The first question we want to ask is: is this license going to help us or hurt us? That is, is the license of the project a copyleft license, which means we probably need to redistribute what we're doing under the same license if we use that library? Or is it a permissive license, which will basically allow us to do whatever we want with the software? That's the first question you have to answer, and it's a legal question. I can't answer it for you.
The second question is: is that open source library central to the problem domain your software operates in, or is it tangential to it? Here's what I mean by that. If you are a data processing company, then you will pay very close attention to how your sorting and searching algorithms work. You may even build your own ETL processes for data warehousing. That's to be expected; after all, you're a data processing company, and that's what you do. Now, if you're a web design company, and you design websites for other businesses, then you're probably not going to build your own web framework, because that's not how you make your money. You make your money by giving someone a finished product, not by putting your time and money into making a web framework. If the software that you're trying to adopt is tangential to your problem domain, you're more likely to decide you either want to buy it or use an open source library than to say: we want to develop this functionality in house ourselves.
The next question you're going to ask is: will adopting this library help my team solve the problem it's trying to solve faster? Here's what I mean by that. All software, whether you build it, buy it, or adopt it as an open source library, has an adoption timeframe. You have to take that into account when you're deciding whether or not to adopt a library or framework. For instance, when we were adopting RabbitMQ as our message bus, we had to adopt it and understand what it did with each and every language that we used to interface with it. It had drivers written in Perl, in C#, and in TypeScript, and everybody that talked to RabbitMQ had to invest the time into learning those APIs. Now, in some cases, that'll be a very trivial amount of time; after all, the software has good documentation, or it has very self-explanatory use cases. Other times, it won't be. And you can't just pay that time for the person implementing it. It's everybody on the team that's going to be touching that open source library or framework; they all have to worry about this.
And that gets to the next problem.
What is the worldview of the open source library you're trying to adopt, versus your application's worldview? If you're adopting something like SQLite as a database engine, it has a very strict understanding of the world. It believes that it's going to be a single-consumer database; that is, there might be one connection to the database, and it will run locally, or in a situation where only one person will try to talk to it at a time. That doesn't mean it can't handle bigger workloads; it can. But it means that it was designed around the idea of a self-contained database, isolated from the larger world. Contrast that with something like SQL Server or Postgres, whose idea of the world is: yeah, there are going to be a lot of people connecting to me at once, and they're all going to want data; how do we handle that? Every open source framework or library has a worldview, and you have to understand its worldview to understand how much trouble, or how easy, it will be to integrate into your application.
George Stocker 4:52
The next question you have to ask is: how often is this open source library updated? What does its traffic look like? If it's on GitHub, how many forks does it have? How many open issues does it have? How many closed issues does it have? How many pull requests are pending, and how long have they been pending for? You want to know the answers to all these questions, because they're going to determine whether this is a hobby project, or whether this is an actual serious project that you will be able to rely on without having to pick up the pieces yourself. Just like my own hobbies, I get to them when I get to them. I haven't played golf in almost three years now; I just haven't been able to get around to it. Now, if golf were my job, I'd be playing every day no matter what. Open source software runs the gamut from hobby to business, and before you adopt a library, you have to know where it fits. If an open source library isn't updated very often, or if it has open issues, then you'll have to fix those issues if you decide to adopt that library, which means forking the project. There's a second part to updates. Once you've adopted an open source library into your system, you now have to keep up to date with its security releases, its minor releases, and its major releases. Hopefully it's following something like semantic versioning, where you can tell, just by looking at a version number, whether or not a change would be breaking. If you're going to adopt a library, you have to fit its release cadence and its update cadence along with your own. This gets further complicated if you're releasing your software as a distributable to your users. In maintaining open source software that you've adopted into your project, you also have to worry about forking and releasing changes that may not be upstream of you.
For instance, say you're using a data grid, and you find an error in that data grid. You fix it, and you submit an upstream patch, but you don't want to have to wait for the library that you're using to be updated and for them to release a new version. So you go ahead and fork it, make the change in your forked version, and then use that version. That's something that you're going to find yourself doing often in open source software, the longer you use it. You need to plan for that, and you need to understand what to do when it does happen. In fact, if you're going to adopt an open source project, look at how it's built and look at how it's deployed. Those are two areas where you're going to find trouble when you need to fork or make changes in the software for your releases that aren't going to be accepted in the upstream release. Overall, with open source software, there's a certain feeling of: you've adopted it, you get to maintain it. Now, whether or not that's legitimate is up to each project that you've adopted; it's different for each one. In some projects, changes are adopted very quickly, and they're fixed very quickly. In other projects, it could take weeks or even m...
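The semantic-versioning check mentioned above, telling from the version number alone whether an upgrade is likely to be breaking, can be sketched in a few lines of Python. This is a simplified illustration: real package managers also handle pre-release tags, build metadata, and version ranges, and these helper names are made up, not from any real tool.

```python
# Minimal SemVer sketch: under semantic versioning, a change in the
# MAJOR version signals a breaking change; MINOR and PATCH bumps
# are supposed to be backwards compatible.

def parse_semver(version: str) -> tuple[int, int, int]:
    """Parse a plain 'MAJOR.MINOR.PATCH' string into a tuple of ints."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

def is_breaking_upgrade(current: str, candidate: str) -> bool:
    """True when the candidate's major version differs from the current one."""
    return parse_semver(candidate)[0] != parse_semver(current)[0]

assert not is_breaking_upgrade("2.4.1", "2.5.0")  # minor bump: compatible
assert is_breaking_upgrade("2.4.1", "3.0.0")      # major bump: breaking
```

A check like this is only as good as the library's discipline in following SemVer, which is one more thing to verify before adopting it.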
Show Notes
Rough Transcript (via Otter.ai)
George Stocker 0:00
Welcome to the Build Better Software Podcast, the podcast for software leaders who want to enable their teams to build better software. I'm your host, George Stocker, and today I am joined by my guest, Lara Hogan, to talk about resilient management. Lara, welcome to the show. Thank you so much. I'm so excited. I'm really excited. Now, for folks who are just meeting you for the first time, could you share a little bit about who you are and what you do?
Lara Hogan 0:24
Yeah, these days, I coach managers and leaders, fortunately, all over the world. Before I was doing this, I worked as the VP of Engineering at Kickstarter, and before that, I was an engineering director at Etsy, and before that, many other small startups in the tech space. I started out as a self-taught front-end developer and then figured out that management was definitely the place for me.
George Stocker 0:48
Yeah, so you've worked at large companies and you've worked at startups, and those are typically differently paced, so I want to go into that deeper. But after you did that, you've now started your own company.
Lara Hogan 1:07
Yeah, it's called Wherewithall. So I had read this study eons ago now about firefighters and how they develop expertise. In this study, they were trying to figure out, comparing firefighters in urban areas to firefighters in rural areas, which are the deeper experts, controlling for number of fires and years of experience. And the study showed that the firefighters, in this case in urban areas, were the deeper experts because of the diversity of fires: different buildings, different sizes, different materials, different kinds of population densities. It was diversity of experience that led to expertise building, and I realized I really wanted to get some more expertise in lots of different kinds of companies. So now that I run my own business, I get to support managers and leaders of all kinds, at different levels, and also different kinds of organizations: organizations with lots of hierarchy, organizations with no hierarchy, distributed organizations, co-located ones. You know, the diversity of organizations that I get to support right now is pretty cool. I'm definitely learning a lot very rapidly, and it's been lovely.
George Stocker 2:14
Okay, and what sort of offerings do you have to help out leaders?
Lara Hogan 2:18
So I kind of split my time between one-on-one coaching and group coaching and training. I either go into companies and provide workshops, or I offer ticketed workshops; you actually attended one of my in-person workshops, though now, of course, it's all remote. But it's been amazing to be able to go in and support all of these different leaders with hands-on, skill-based training, because I don't know about you, but I didn't get any training when I became a manager.
George Stocker 2:44
No, the only reason I ever had any managerial training was through the Army, which is a bit unlike everything else. Yeah. But they're a 200-year-old organization, and they have books upon books and manuals about leadership and about running teams. There's a lot that we could learn from that, but it is a completely different space.
Lara Hogan 3:07
So many fields have actually developed management training curricula. Tech? I mean, classic engineers: we're like, oh, we're gonna figure this out for ourselves, we know we can reinvent it. Yeah, precisely. It's been fascinating to try to support tech leaders specifically, because, I'm sure you've experienced this, people are just so hungry to do right by their teams. So it's been lovely to bring in not just management experience but also, you know, I've done a lot of studying on how to be a good trainer, a good educator, a good facilitator, and that's a whole new discipline of its own. So it's been really nice to try to bring these skills to tech organizations to try to help people out.
George Stocker 3:45
You run, at least the workshop I went to, a one-day workshop, I think it might have been two, at the Lead Dev conference. Now, for people who don't know the Lead Dev conference, it's a conference, as it says on the tin, for lead developers, so it covers topics that are useful to tech leads, software managers, and the like. And I loved it; I can't recommend it enough.
Lara Hogan 4:07
And they're doing it online right now. They've got a whole bunch of amazing stuff, like a seven-part series starting this fall. It's all three-hour online events. They're doing such great work
George Stocker 4:20
and supporting so many people. I'm going to drop that in the show notes, because I think everybody should hear about that.
Lara Hogan 4:27
And I'm actually co-hosting the first one. The first one, if folks are interested, is all about: how do we support our teammates as they grow? What are the skills that we need to use as lead devs to help our other teammates grow and develop?
George Stocker 4:39
So I don't want to spoil the subject, but what are the skills that we need to help our teammates grow?
Lara Hogan 4:45
So the thing that I've learned in doing this job for a while is that as knowledge workers, we're taught that the best way we can help our teammates is by teaching them: pair programming, or sharing with someone how we would do a thing they're working on, mentoring them, providing our perspective and our advice. And a bunch of research shows that those skill sets, the teaching skill sets, the mentoring skill sets, the advising skill sets, are really only helpful in getting someone unblocked or helping someone onboard. That's it. If we actually want to help people grow, we need to use a whole other set of skills, which most of us are not equipped to use, and we've never been taught that they're important. Again, we've been taught that the best thing we can do is give our knowledge to other people, but that actually doesn't help people grow. So the three skills I really like to focus on, and I sound like a broken record to you here, start with coaching: helping people connect their own dots, introspect, reflect. This is when someone's like, huh. What's important to you about this? What's hard about this? If you could change one thing right now, what would you change? Those kinds of open questions really prompt lightbulb moments in someone. You know, it's so powerful to connect your own dots and be like, oh, I know what I'm going to do next. ...
Dapper: https://github.com/StackExchange/Dapper
Transcript (Powered by Otter.ai. Please send corrections to [email protected])
Hi, my name is George Stocker, and welcome to the Build Better Software podcast. Today we are talking about object-relational mappers. Now, object-relational mappers, or ORMs as they're commonly known, are used by developers and development teams to not write SQL. Now, here's what I mean by that. Whenever you're using a relational database, somewhere, somehow, you have to write SQL. An object-relational mapper does that for you. Here's how it does it: you create a POCO in C#, a plain old C# class, and you decorate it with some metadata to tell your ORM that this class represents a table in your database. Now, whenever you ask the database for the information from that table using C# code, if you're using C# past .NET 3.5 and some form of Language Integrated Query, or LINQ-style queries, to pull data from the database, it translates your LINQ into an actual SQL statement that gets executed by the database server. Now, this is really cool. If you didn't have this, you'd always have to write SQL in your code, map that SQL when it returns a data reader or some other database representation of the work you're doing into your classes, and write a lot of mapper code back and forth. By using an ORM, you don't have to do any of this. Sounds great, right? Shouldn't everybody use ORMs? And the answer to that, of course, is no, they shouldn't. Now let's talk about why. ORMs are good when you need to pull out data that roughly matches what your classes look like, and there's nothing too difficult about those queries. However, ORMs start to fall apart when you need to do more complicated queries, or when a query has lots of relations. Since ORMs are meant to pull data back and map it to your classes automatically, they work really well in simple scenarios. But if you find yourself operating at a larger scale than they can handle, then you have to start hand-tuning the queries that the ORM generates.
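The flow described above can be sketched in C#. This is a minimal illustration, not code from the episode: the `Product` class and its properties are hypothetical, and the SQL in the comment is roughly what an ORM like Entity Framework might generate for such a query. The query here runs over an in-memory array so the example is self-contained, but the LINQ syntax is the same shape you would write against a real ORM context.

```csharp
using System;
using System.ComponentModel.DataAnnotations.Schema;
using System.Linq;

// A plain old C# class (POCO). The [Table] attribute is the kind of
// metadata that tells an ORM which database table this class maps to.
[Table("Products")]
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        // Against a real ORM context, a query shaped like the one below
        // would be translated into SQL roughly like:
        //   SELECT Id, Name, Price FROM Products
        //   WHERE Price > 10 ORDER BY Name
        var products = new[]
        {
            new Product { Id = 1, Name = "Widget", Price = 15m },
            new Product { Id = 2, Name = "Gadget", Price = 5m },
        };

        var expensive = products
            .Where(p => p.Price > 10m)
            .OrderBy(p => p.Name)
            .ToList();

        Console.WriteLine(expensive.Count); // 1
    }
}
```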
Now, the API for doing this varies from ORM to ORM, but all of them have the same thing in common: they don't have the same API that you would use in the database itself. For instance, if I wanted to create an index in a database natively, I would use a CREATE INDEX statement in SQL Server. If I want to create an index in, let's say, Entity Framework, I have two different ways of doing that: I can put the Index attribute on a field or fields, or I can use the fluent API to create an index. Now, none of these is as easy and as universal as creating an index using the database's own technology. And when it comes time for me to tune the queries that use that index, it's going to be far easier for me to do that in SQL Server Management Studio than it would be to use the translation layer that the ORM provides. Because of this, using an ORM to create queries that need to be finely tuned is generally a mistake. Now, I'm not dumping on ORMs here. The fundamental problem is that we use relational databases far too often, even when we don't need them. ORMs make it easy to use a relational database, but they make it a lot harder to tune our code the way a relational database would expect. In most cases where we feel like we actually need a relational database, it's more a vestige of inertia than anything else. Not all use cases are meant for a relational database. If you have relational data, that is, data that actually has relationships to other data, then consider a relational database. But if what you really need are lookups by an ID, and your data fits a non-relational model, consider not using a relational database and using something like a document DB instead. Now, object-relational mappers, again, make it easy to write plain classes that translate to tables, and they reduce your need to write SQL. One of the lesser advantages these days of using an ORM is that you may be able to switch out your database and still keep the same ORM.
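As a sketch of those approaches, here is the same index expressed natively in SQL Server and then twice in Entity Framework Core. The `Order` entity and the index name are hypothetical, and the EF code assumes the Microsoft.EntityFrameworkCore package; this is an illustration of the API difference, not a complete application.

```csharp
// 1. Natively, in SQL Server itself:
//    CREATE INDEX IX_Orders_CustomerId ON Orders (CustomerId);

using Microsoft.EntityFrameworkCore;

// 2. With an attribute on the entity class (EF Core):
[Index(nameof(CustomerId))]
public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
}

// 3. With the fluent API, inside the DbContext:
public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Order>()
            .HasIndex(o => o.CustomerId)
            .HasDatabaseName("IX_Orders_CustomerId");
    }
}
```

Either EF form only takes effect when a migration is generated and applied; tuning the resulting index still happens in the database itself, which is the translation-layer cost described above.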
This is sometimes touted as an advantage. I have yet to see it happen in person, although I hear it's happened before; it's not one of those advantages I would count on. ORMs do reduce your need to write SQL, and they make it easy to pull data back from the database inside your integrated development environment, like Visual Studio. They're really good for when you don't need SQL, or you don't care if your SQL is highly performant. It's really good to use an ORM when you just need to get a prototype working. It's also really good for small projects, personal projects, school projects, things that don't have high load considerations. And it's really good when your team and business context is such that speed to market is more important than maintainability and scale. It's also good when your team doesn't have a dedicated DBA and you'd rather make changes in code than modifications to the database. Now, I don't say all this to say that ORMs can't scale; they can. Many large teams use ORMs for high-traffic systems, and they work. The problem is that you end up doing a lot of hand-tuning to make it happen. If you don't, you end up dealing with two different APIs: the database's, which has been around since the 70s, and your ORM's, and its documentation. This is problematic for most teams, and in general, the more layers you add, the more dependencies you add as a team, the more you have to understand to get new work done. That isn't to say don't use an ORM, but it is to say: make sure you're getting the value from the ORM before you commit to using it. Now, one issue that tends to compound the problem of using ORMs is teams where the idea of consistency of data access is more important than ease of data access. Here's what I mean. A team says: we're going all in on Entity Framework; you must use Entity Framework Code First for all database work.
Now, immediately, anyone who's used Entity Framework Code First will find the problems with that statement. Entity Framework Code First doesn't treat features like views as first-class citizens, so if you need a view, you can't even do it through Code First. Other problems crop up too; some are small, some aren't. But they all come down to this: there is no ORM out there that gives you the entire API available to your database. And if you don't know your exact use cases, present and future, then you run the risk of running into problems when you say consistency is more important than ease of data access. An easy way to get around this is to say: you know what, in some cases we will use ORMs, and in others we will not. Make that part of your planning discussions, because sometimes the easiest way to solve a problem is with a database index, or a database view, or a stored procedure, or a report, and those are things that ORMs generally have problems with. Now, not all teams should consider ORMs. I typically shy away from ORMs entirely. If I need one, I'll use a micro-ORM like Dapper when I need simple data access from column to C# property. When I need anything more complicated, or anything where I need to tune it, I'll write my own SQL in code and go from there. This works out well, since Dapper takes care of the mapping between column and C# class property. Relational mappers are also not for teams that have dedicated DBAs, and when I talk about dedicated DBAs, I mean DBAs that can help you tune queries and help keep the database at peak performance. An ORM would probably just insert a translation layer that's harder to debug and harder to tune. And if your team has a good understanding of SQL and relational database engines, maybe you don't need that ORM; maybe the time it takes you to translate code into SQL isn't enough to justify the cost of an ORM. And that's really what all of this comes down ...
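With a micro-ORM you write the SQL yourself and the library only handles the column-to-property mapping. Here is a rough sketch of that pattern with Dapper; the `Product` class, the connection handling, and the query are hypothetical examples, not code from the episode, and the sketch assumes the Dapper and Microsoft.Data.SqlClient packages.

```csharp
using System.Collections.Generic;
using Dapper;                    // adds Query<T> extension methods to IDbConnection
using Microsoft.Data.SqlClient;  // SQL Server ADO.NET provider

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public static class ProductRepository
{
    // You write the SQL by hand; Dapper only maps result-set columns
    // onto the C# properties with matching names.
    public static IEnumerable<Product> GetExpensiveProducts(string connectionString)
    {
        using var conn = new SqlConnection(connectionString);
        return conn.Query<Product>(
            "SELECT Id, Name, Price FROM Products WHERE Price > @MinPrice",
            new { MinPrice = 10m });
    }
}
```

Because the SQL is plain text, you can paste it straight into SQL Server Management Studio to tune it, which is exactly the workflow the full ORMs make harder.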
Josh Heyer (Pronounced "Higher", sorry Josh) aka Shog9 can be found at shog9.com
Josh is a Developer Advocate for Enterprise DB https://www.enterprisedb.com/
Twitter: @shog9
Jon Ericson : https://jlericson.com/ and on medium at https://medium.com/@jlericson
Twitter: @jlericson
I uploaded a remixed version that should result in a higher volume for Josh Heyer on 10 July 2020. If you listened to it before then and were annoyed by the levels, that was my fault, and I hope I've fixed it. If not, please reach out.
Rough Transcript (Powered by Otter.ai -please submit corrections!)
George Stocker 0:00
Hello, and welcome to the Build Better Software podcast. I'm your host, George Stocker, and today I'm joined by Jon Ericson and Josh Heyer. Welcome to the show.
Josh Heyer 0:11
Hi, hello,
George Stocker 0:14
Jon and Josh, for people who may not be familiar with who you are and what you do, tell us about yourselves.
Jon Ericson 0:21
Sure, we both talk at the same time.
George Stocker 0:23
One, one after the other.
Josh Heyer 0:26
To talk over somebody.
Jon Ericson 0:29
If we let you talk first, this will be the end of the episode, right?
Josh Heyer 0:33
Yes, that is plausible. I'm just this guy, you know. So, Jon. Uh,
Jon Ericson 0:40
Well, you probably, if you know me at all, it's because I was a community manager at Stack Overflow and Stack Exchange. I did that for almost seven years, and now I am a community and product operations manager at College Confidential, which is a forum site for people who are applying to colleges and universities.
Josh Heyer 1:09
Yeah, that's a good intro. I'm going to just steal that. So pretend I said what Jon just said, except replace seven with nine and replace College Confidential with EnterpriseDB, or EDB, a Postgres company.
George Stocker 1:24
Cool. Now, I'm not gonna let you get away with that, either of you. So Josh, you were actually the first community manager hired for Stack Overflow, as I understand it?
Josh Heyer 1:36
I was, I was, let me see. One, two, three. I was either the third or fourth. I'm gonna say third. It was Robert Cartaino, he was number one. Although we all had different job titles in the early days; I don't think we settled on Community Manager until like a year in. Robert Cartaino was the first, the Community Coordinator. And then it was Rebecca Chernoff. Remember Rebecca?
Jon Ericson 2:08
Yeah, Rebecca Chernoff, yeah,
Josh Heyer 2:10
Yeah, she was number two. Now, see, Rebecca was not originally Community Coordinator; she was, I think, Community Evangelist or Developer Evangelist, something like that. And then we all kind of coalesced on Community Manager after a while, as the least offensive generic name we could come up with. I was never comfortable with Evangelist; that was what Jeff suggested to me right away, and I was like, man. And then I came on as adjunct Community Coordinator, working part time for the first year, just kind of trying it out to see if maybe the company would just go under and I could save myself some work. And when that didn't happen, I came on full time in 2012.
George Stocker 3:03
Yeah. And so, you know, remember back in the day, this is 10 years ago, community management from a public internet community perspective was still very new. In fact, the only way I knew of it was through video games: places like DICE had community evangelists and community managers that helped manage the communities for video games. So, you know, in this fresh new world of community management, how did you all acclimate to that job?
Josh Heyer 3:39
So first, I want to say video games were, like, the trendsetters in this field. They were, and still are, kind of leading in terms of what it means to manage a community, because I think they figured out way ahead of just about everybody else that you really do need people who are focused on that specifically. A lot of other companies had people doing similar things, but it was almost like, you know, this is something you've got to do in your spare time, above and beyond your real responsibilities. And video games pretty quickly figured out, especially the massively multiplayer online versions, that, oh, we actually need to culture, to nurture, to guide this community of people that we depend on in order to, you know, have a viable game, and put focus squarely on that. So we took our lead from that in a lot of ways. Jon, we brought in because he was super awesome in our community. He was writing stuff that was better than what we were writing. Okay.
George Stocker 5:13
So how did you come to be at Stack Overflow, Jon?
Jon Ericson 5:17
So I was a beta user on Stack Overflow, and then I threw a fit, because I didn't like some of the things that Jeff was doing. I thought close votes, closing questions, was dumb. Like, are we going to run out of bits on the internet? And so I quit. And then Stack Exchange came along, and there were all these crazy sites, and I was like, oh, these are interesting. I thought gardening and philosophy, that's gonna be my entry back into it. And it turns out it's hard to do gardening when you only have a little apartment, condo thing. And
Josh Heyer 5:57
fluffy is great man space.
Jon Ericson 6:01
So I knew so little about gardening, and I've got a house now, I actually could use the gardening site. But the thing that really got me going was Biblical Hermeneutics, which is about interpreting the Bible, which is something that I'm still fascinated by. And so I got into that. And, as Josh was saying, at one point there was a bunch of controversy over what the site meant, and I ended up spilling tons and tons of digital ink on the meta site. So why not work
George Stocker 6:39
Biblical hermeneutics? A meta site? What, what, almost, almost like
Josh Heyer 6:43
Hermeneutics and exegesis are not words you use in everyday conversation.
George Stocker 6:48
I can't even pronounce them.
Jon Ericson 6:51
Yeah, so. So the difficulty with Biblical Hermeneutics is that some people look at that and they're like, oh, cool, I'm going to be an evangelist too, to pick up another word that Josh isn't a huge fan of.
Josh Heyer 7:05
That's for people who actually, legit, are evangelists. I don't feel like it's a great job title for people who are, you know, doing community management.
Jon Ericson 7:15
Yeah. Well, I guess it has geeky connotations, right?
Josh Heyer 7:20
It is complicated. Yeah, it is complicated. There was another word, by the way, that you guys struggled with a little bit unexpectedly, and that was "biblical." Yeah.
Jon Ericson 7:34
Why? Why is that?
Josh Heyer 7:35
Well, different people have different ideas of what the Bible is.
George Stocker 7:40
That's right. Catholics, we, there are, you know, five extra books for Roman Catholics in the Old Testament that aren't present in other versions.
Jon Ericson 7:52
And those five books, I mean, this is a huge, huge problem for us. So we've got to excommunicate you. You're not allowed on our site.
Josh Heyer 8:01
And then there's there's like a whole group of people who who consider, you know, the entire New Te...
## Links
Jobs to be done
Milkshake in the morning theory
Practical Empathy - Indi Young
Sales Safari
Michele Hansen's talk at MicroConf 2019: How to get Useful User Feedback
30x500 - Amy Hoy
## Transcript (powered by Otter.ai - Please raise any issues found in the transcript. AI will one day get us there, but until then...)
George Stocker 0:00
Hello, I'm George Stocker, and this is the Build Better Software podcast. Today we're talking about product discovery and customer product research. I have the privilege of welcoming Michele Hansen, founder and CEO of Geocod.io, to the show to talk about this. Welcome, Michele. Hi.
Michele Hansen 0:15
Thank you for having me on today.
George Stocker 0:17
Thanks for joining us. Now for people who may or may not know you, could you tell us a little bit about yourself and your work?
Michele Hansen 0:24
Yeah, so I'm co-founder of Geocod.io, which is a bootstrapped software-as-a-service company that my husband and I started about six and a half years ago. We started it as a side project, and over a couple of years of slowly growing, listening to our customers, and building for what they needed and where the gaps in the market were, we transitioned to it full time. My background is in product development, and that's primarily what I would say my background is in and where my heart really lies. Now, running a company that's just the two of us, I take on a lot of hats far beyond product.
George Stocker 1:05
Yeah. And before Geocod.io, you also did product research or product management type work for The Motley Fool?
Michele Hansen 1:16
Yeah, so I did product management and product development, which was an incredibly fun part of my career. I worked with some really wonderful people and did a lot of fun research that led to some good outcomes.
George Stocker 1:30
Okay, so I think that The Motley Fool is a services company; they have financial newsletters that they sell subscription services to. And Geocod.io is a SaaS product?
Michele Hansen 1:48
Yeah, that's right. So one was B2C, and the other is B2B, which has shown me some really interesting differences in what it's like to do customer research in a consumer context versus in a business context. I think there are a lot more similarities than people might think.
George Stocker 2:06
Okay, so let's dive first into the B2C context, the consumer. And that would be your work at The Motley Fool, I believe, right?
Michele Hansen 2:16
Yeah. So you mentioned that they create financial newsletters, and a lot of what we were doing was basically: how do we modernize the concept of a financial newsletter? They had gone from being print newsletters, and then been fairly revolutionary in bringing them online fairly early. Then, how do you evolve that into something that meets the expectations consumers have now, of things being customized and personalized, meeting their interests, and being consumable very quickly? All those kinds of things have been very relevant in consumer for quite some time now. How do we make the concept of a financial newsletter, or a sort of financial publishing product, meet those kinds of expectations? Yeah. And so where would an idea start, and where would the customer come in, when you were designing something new for a customer at the Fool? That's a really interesting question, because it came in at a lot of different places. When we first started really working directly with customers, when I was there, it was at the very end of the process, which people say is generally not when you should start talking to them; you should start talking to them before you even have the idea. But when we first started, the customer was not in the beginning part of the process, so it was more at the very end, in terms of usability testing. And the more we did that, and the more we were creating things that weren't reaching the KPIs we were hoping for, we started going back in the process and talking to the customer earlier and earlier, until eventually the customer was the very first start. We did a lot of different types of customer interaction, from in-depth interviews that could be an hour or an hour and a half, to usability testing. We did testing of products, we did testing of landing pages.
We did observations through tools like Hotjar and UserTesting, where you're not actually interacting with the user. We were fortunate to do a lot of different types of learning from our customers.
George Stocker 4:26
Yeah. And so you noticed that there was a KPI difference. So when you talked to the customers earlier, did it affect your KPIs? Did they start going in the right direction? Was there a correlation between the two?
Michele Hansen 4:39
Yeah. So where the product process would start, for probably about the first year I was there, was: you would have, you know, a spreadsheet of all of the different KPIs and measures for different groups of users and whether they were meeting them. So if a user clicked on this, their likelihood of meeting the KPI was this; but if they didn't click on this, and they clicked on that instead, then, you know, all sorts of permutations like that. So it really started out with: okay, what does the spreadsheet say? The users who are the most successful did these actions, so how do we make more users do those actions so they become more successful? And the problem was that those actions really weren't causative. There was an awareness that those actions weren't causative, but there was a limited ability to act on that: well, if we don't use the spreadsheet, then what are we going to use? And so there could be a little, you know, let's bring in customer support and see what they think. And so we would be gradually refining the products. It really wasn't until we started interviewing the customers and diving deep into what they were trying to do, and baking usability testing into the process, and bringing developers and designers into those interviews and into those usability sessions, that we really started to have breakthroughs. And so it sounds like the Fool had a very robust technical process for getting metrics from users, and it sounds like, from what I'm hearing at least, that the breakthroughs started once you focused on actual face-to-face conversations with your customer. Yeah, there was an extremely strong culture of quantitative data.
And where we ended up evolving the process was bringing in the qualitative side to explain why the quantitative data was showing us what it was, because you can, you know, look at Google Analytics all day, but it's never going to tell you why somebody did something. Only a person can tell you that. Of course, you shouldn't just talk to one person; you need to talk to lots of them. But we found that that really helped explain to us why ...
Show Notes
https://doubleyourproductivity.io
Transcript (Powered by otter.ai):
Hi, I'm George Stocker, and welcome to the Build Better Software podcast. Today we're going to talk about whether or not you should adopt test-driven development, commonly referred to as TDD. Now, test-driven development is not about tests, or even about testing, or about test coverage. It's weird, since the first word is "test," but we all know naming is hard. Test-driven development is a development methodology. Put simply, it's a way for teams to write code and ship software systems reliably and consistently. TDD gives you the confidence that you can deploy at 5pm on a Friday because your tests passed. That's what TDD is. It does masquerade as a testing framework, and because of that, it gets a bad rap. But as I said, it's not about the tests; it's about what the tests show you about your system. Now, in systems that are easy to compose, testing is easy. But in systems where a component relies on another component, and they're tightly coupled together, testing is not so easy. Now, that's a smell; that's a sign. And what TDD does is show you that smell, or that sign, early, so that you can avoid the problems that will inherently come from parts of your system being coupled together. And that's what it is for developers. For a business, test-driven development is a way of being able to develop software with confidence, to produce software that can be shipped reliably and consistently, without regression bugs, and without deadlines slipping. Now, I'm not talking about that full confidence of a white dude in tech; we've all seen that. I'm talking about that quiet confidence that's usually disquieting and reassuring at the same time, the confidence of a system administrator that regularly checks their backups. Regression bugs can become a thing of the past, and even new developers can work on a code base with confidence. I'm talking about that kind of confidence. Also, for a business, it allows them to respond to change more quickly. Now, what do I mean by that?
Your team has been given a new vertical to develop a feature for, and you don't know much about it. Test-driven development allows you to break down the problem, to specify what the code is going to do in the customer's language. And test-driven development allows you to talk to your business or your customer in their language, and that's really important. Now, let's talk a little bit about what test-driven development is. What do you do? Historically, there have been three rules of TDD, although that's itself a misnomer, and it briefly goes like this: you write a failing test; you write code that allows that test to pass; and at regular intervals, you refactor your work. It sounds simple, I know, and that's been part of the problem with adopting it. You see, there's a lot of detail in failing tests, passing tests, and refactoring. You have an architecture in your system that will evolve from this, and that can't rely on just those three simple steps in order to work. So: regular inspection of the work that you're doing, changing how you're implementing something, and then looking at the overarching picture. And that's why test-driven development by itself won't make your team successful. There aren't any silver bullets in software development, as Fred Brooks once said, and he continues to be right. But test-driven development, together with other practices, can make your team successful under the right circumstances. Now, those circumstances are why we're here today. Should your team adopt test-driven development? To answer that, I can't give you any absolutes; there aren't any. I can say that we can look at your context, look at your team, look at your business, look at how you're set up, and we can go from there. But there are no absolutes, and anybody who tells you otherwise doesn't understand the problems that we face.
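Those three rules can be sketched in a few lines of C#. A real team would use a test framework like xUnit or NUnit; this self-contained sketch uses a plain assertion instead, and the `Cart` class is a hypothetical example, not code from the episode.

```csharp
using System;

// The class under test. In a TDD flow, this code was written only
// after the test below already existed and failed.
public class Cart
{
    private decimal _total;
    public decimal Total => _total;
    public void Add(decimal price) => _total += price;
}

public static class CartTests
{
    // Rule 1: write this test first; it fails (Cart didn't exist yet).
    // Rule 2: write just enough Cart code to make it pass.
    // Rule 3: refactor freely; the test guards the behavior.
    public static void Adding_two_items_sums_their_prices()
    {
        var cart = new Cart();
        cart.Add(3.50m);
        cart.Add(1.25m);
        if (cart.Total != 4.75m)
            throw new Exception($"Expected 4.75 but got {cart.Total}");
    }

    public static void Main()
    {
        Adding_two_items_sums_their_prices();
        Console.WriteLine("All tests passed");
    }
}
```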
The first thing that happens when you build systems through test driven development is you have to make sure the test for a given piece of functionality is written first.
Now, if you're trying to adopt TDD in an established system, that may or may not be an easy thing to do. You may have to set up a lot of dependencies to even get that first failing test to run. And that's a sign: your system needs a lot of outside dependencies to run, and that's going to make adopting TDD in the existing system harder. You can do it, but it's going to incur months of pain before you finally see the benefit. And this is, as they say, suboptimal. Adopting TDD for an existing system takes drive and discipline on the part of the development team and your business, because you're about to run a marathon. If you're in a business or organization that changes too quickly, or doesn't want to, or can't, invest in work over the course of months and years to get to a better result, you may want to rethink adopting TDD for your project or your team. You will endure pain due to past design decisions that coupled implementations together, often unwittingly. That pain is going to be hard to fix, because it requires putting in characterization tests over code that's already written, to ensure you don't break it as you're trying to refactor it out into a TDD system.

Now, characterization tests are tests that you create over a working system to spell out, in tests, what the system does. They exist merely to show that this is how the system operates. You try not to mock out anything if you can help it, because there's always a chance you'll mock out behavior that you actually needed to be in the test. It is slow and tedious work. Now, it's not all bad news. Characterization tests help you to see the system for what it is and what it requires to work. This is incredibly useful information as you start to adopt test driven development, as it shows you the parts of the system that you can easily bring under test, alongside parts that will require extra scrutiny, since they deal with external actors or hard-to-configure states in the system.
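A characterization test looks different from an ordinary unit test: instead of asserting what the code *should* do, you run the real code, observe what it *does* do, and pin that observed behavior in an assertion. Here's a small sketch; `legacy_format_name` is a hypothetical stand-in for existing production code, not anything from the episode.

```python
# Imagine this is old production code nobody fully understands anymore.
def legacy_format_name(first: str, last: str) -> str:
    return (last.upper() + ", " + first.capitalize()).strip()

# Characterization tests: run the real code, observe the output, then
# pin that observed output in an assertion. No mocks if we can help it.
def test_characterize_basic_name():
    assert legacy_format_name("ada", "lovelace") == "LOVELACE, Ada"

def test_characterize_empty_first_name():
    # We observed this trailing-comma behavior; we pin it even though
    # it looks odd, because some caller may depend on it.
    assert legacy_format_name("", "lovelace") == "LOVELACE,"

test_characterize_basic_name()
test_characterize_empty_first_name()
```

Note that the second test captures behavior that might well be a bug; that's deliberate. The characterization test's job is to freeze today's behavior so refactoring can't silently change it, not to judge whether that behavior is right.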
The second factor in determining whether you should adopt TDD is your build and deployment pipeline. How mature are your continuous integration and continuous delivery practices, CI and CD? How automated are your builds? Does some member of your team have to manually copy files over from one server to another, or manually run a SQL script? How does this look up and down your deployment stack? What is it like to deploy software to test, or staging, or production? In TDD, part of the value is seeing that the test failed immediately. And while TDD does strive for fast tests, characterization tests may not be fast; after all, they are literally building up parts of your system to run. So you'll want to make sure you have an automated build pipeline before you adopt TDD.

Now, third: TDD is an investment. We say that a lot in tech, but it's really true here. TDD is a way of changing how your team develops software. It is the difference between building houses from experience and building houses from a blueprint. If your team is really good at building houses, they may not need a blueprint, especially if they're building the same house over and over and over again. But if what you're building is a little bit different every time, then it helps to have a blueprint. With test driven development, those tests are your blueprint. They keep you in check. They help you make sure that whatever code you're writing is not tightly coupled to other pieces of code. One of the ways they do that is how your tests are set up. If a test is hard to set up, then that's a sign it's coupled with something else, and test driven development shows you that. But it still requires some overarching foresight and a larger picture to build from. It doesn't replace upfront architecture or design. One of the problems teams have had in adopting test driven development is they, they s...
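The "hard to set up" smell mentioned above can be sketched in a few lines of Python. All the class and function names here are hypothetical, invented just to contrast a hidden dependency with an injected one.

```python
# Tightly coupled: the class constructs its own database client, so a
# test can't even instantiate it without a real (or faked) database.
class Database:
    def fetch_totals(self):
        raise RuntimeError("needs a live connection")

class CoupledReportGenerator:
    def __init__(self):
        self.db = Database()  # hidden dependency -> painful test setup

    def summary(self):
        return sum(self.db.fetch_totals())

# Decoupled: the dependency is passed in, so test setup is trivial.
class ReportGenerator:
    def __init__(self, fetch_totals):
        self.fetch_totals = fetch_totals

    def summary(self):
        return sum(self.fetch_totals())

def test_summary_is_easy_to_set_up():
    report = ReportGenerator(lambda: [10, 20, 5])
    assert report.summary() == 35

test_summary_is_easy_to_set_up()
```

Writing the test first makes the difference obvious: the coupled version forces you to stand up infrastructure before you can assert anything, and that friction is exactly the early warning TDD is meant to surface.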
Links:
Please support Black Lives Matter in any way that you can.
Episode Transcript (Automated Transcript via @otter_ai)
George Stocker 0:00
Welcome to the Build Better Software Podcast, the podcast for software leaders who want to enable their teams to build better software. I'm your host, George Stocker. Today I'm joined by guest Ben Mosior to talk about Wardley Mapping. Ben, welcome to the show.
Ben Mosior 0:16
Thanks for having me, George.
George Stocker 0:18
For folks who are just meeting you for the first time, could you share a little bit about who you are and what you do?
Ben Mosior 0:25
Well, my name is Ben Mosior. I do a little bit of this and that. I kind of jokingly call myself a methodology whisperer, which roughly translates to, "God, I wish I had a job title, because I don't know how to describe myself." But, you know, roughly that means I take these methods that people are using for thinking, that are relatively unknown but quite innovative, and then I try to turn them into everyday tools. Wardley Mapping is one of those endeavors. There are others, but yeah, that's what I do. I run workshops and things like that, and build resources for people to learn.
George Stocker 1:02
Now, for people who are new to Wardley Mapping, what is it?
Ben Mosior 1:07
Yeah, Wardley Mapping is a strategic thinking tool. I like to think of it as a knowledge creation tool that enables action. It was invented by this lovely old man who lives in a swamp, by the name of Simon Wardley. He lives in the UK. And he created this method after quite an extensive bit of research into what I would basically point out as being kind of fundamental patterns of capitalism. Wardley Mapping, the practice, is basically three things. It's first and foremost a visual communication method; it's about creating an artifact that's visual, that we can challenge and have a conversation around. The second thing is, it's a body of research around these patterns of supply and demand competition, you know, how does capitalism work, how do things evolve. And the final thing is a strategic thinking process that ties both those things together, in order to enable you to make new decisions based on seeing a kind of strategic landscape that most will not be able to see. So it's a knowledge creation tool, in order to enable you to take action.
George Stocker 2:24
Okay. And now with Wardley Mapping, when you start out with it, what are you mapping?
Ben Mosior 2:31
Yeah, so when you make a Wardley Map, what you're focusing on are things like, you know, what is the system that you're a part of. You have to make some curatorial decisions about what scope to pay attention to, but you could map a business, or you could map a market. You could even map yourself as an individual. But roughly, what you're doing is focusing on: what is the system? What are its parts? How do they relate together? And then you do two things. You think about how the system produces value for somebody, some user. And then you also think about how those parts are changing under the pressures of supply and demand competition, which is capitalism. So: what is the system? How is it changing? How does it produce value for users? And it basically creates a way for you to interact meaningfully with the world by modeling it. We can get into more of what that actually looks like, but it's not as complicated or weird as it sounds. It's literally just: what are the parts? What are they like? And how should we treat those parts? How should we have intent with respect to each of those parts?
George Stocker 3:51
Now, how do you spend your days with Wardley Mapping?
Ben Mosior 3:55
So I spend time with teams doing strategic training exercises, where what we'll do is we'll kind of go through the basics of Wardley Mapping. We'll apply doctrinal principles about, you know, what you ought to do as individuals in the organization; how to think about, basically, what's a universal principle that you can apply to the work that you're doing, that value that you all kind of share. And then also thinking about what Simon calls climatic patterns, which is basically: what is relatively predictable about capitalism that, if we only took the time to notice, we could actually use to anticipate change that's occurring in the wider market. And then, finally, just thinking about how to go about strategic thinking. I get more energized by the thought of getting into, kind of like, what is it you're doing, George, like with this podcast, and what are the big questions you're trying to answer? And then you and I riffing off of that, given the context of what I do, and given the context of mapping and how I think about those things. I feel a lot of energy coming out of that kind of conversation, kind of nodding toward the mapping without having to make it the focus so much, if that makes sense.
George Stocker 5:14
No, I really like what you said, and that's key. I believe that we, as software developers, and software development itself as a practice, is still in its infancy. You look at construction, and they have building codes. They can tell you to a tee how much a bridge will cost and what it can hold, you know, who it's for, what it does. And with software development, we can barely tell you how long a little feature will take, and that's after 16 years of doing it. And then we're
Ben Mosior 5:55
still having fights about, like, whether or not estimates are a valid thing to do.
George Stocker 6:00
Exactly. Can you imagine? And to your point about building, can you imagine if the building codes were like, "Yeah, you don't need blueprints"? But we build software every day without blueprints, without specifications, and we actively deride teams that do produce specifications, that do produce tests in the form of test driven development. Now, what I focus on is, I want every software team out there to know about test driven development, and I want to make it accessible for all software developers, to the point that it becomes, I believe it can become, the standard way of developing software. Now, I believe that we have taught it incorrectly. We have glossed over the hard parts, and we have, you know, oversimplified it and not gone into when you would use it, how you'd use it, why you'd ...