AI isn’t just speeding up coding. It’s starting to change how teams work, what they build, and even who needs to be involved. In this episode, Brian and Hunter separate real impact from hype and explore what’s already shifting inside teams.
Overview
AI tools are improving fast, but what does that actually mean for teams doing the work?
In this episode, Brian Milner sits down with Hunter Hillegas, CTO of Mountain Goat Software, to explore how AI is being used today inside real software teams. They dig into where these tools are genuinely accelerating work, from coding agents and automated testing to analyzing large data sets and reducing friction in everyday tasks. They also unpack the growing shift from writing code to reviewing it, and what that means for developers and team dynamics.
At the same time, they address the gap between hype and reality. Where does AI perform well, and where does it still fall short? What happens when adoption is pushed top-down without clarity? And how might AI start to reshape roles, collaboration, and expectations across a team?
This is a practical, honest look at what’s changing right now, where to start if you’re new to these tools, and how to think about AI as part of your team without losing sight of how real teams actually work.
References and resources mentioned in the show:
Hunter Hillegas
Mountain Goat Software’s AI Toolkit
#82: The Intersection of AI and Agile with Emilia Breton
#169: Building Practical AI for Agile Teams with Hunter Hillegas
#175: When AI Makes Agile Teams Worse with Hunter Hillegas
AI Doesn’t Eliminate Agile Teams — It Increases the Need for Great Ones by Mike Cohn
How to Use AI for Product Discovery and Writing Better User Stories by Mike Cohn
Subscribe to the Agile Mentors Podcast
Want to get involved?
This show is designed for you, and we’d love your input.
- Enjoyed what you heard today? Please leave a rating and a review. It really helps, and we read every single one.
- Got an Agile subject you’d like us to discuss or a question that needs an answer? Share your thoughts with us at podcast@mountaingoatsoftware.com
This episode’s presenters are:
Brian Milner is a Certified Scrum Trainer®, Certified Scrum Professional®, Certified ScrumMaster®, and Certified Scrum Product Owner®, and host of the Agile Mentors Podcast at Mountain Goat Software. He's passionate about making a difference in people's day-to-day work, influenced by his own experience of transitioning to Scrum and seeing improvements in work/life balance, honesty, respect, and the quality of work.
Hunter Hillegas is the Chief Technology Officer at Mountain Goat Software. With over 20 years of experience in software development, product ownership, and team leadership, he leads the creation of tools like the AI Toolkit and Team Home to support effective, engaging learning experiences. Hunter lives in Santa Barbara, California, with his wife and their dog Enzo.
Auto-generated Transcript:
Brian Milner (00:00) Welcome back everyone. We're here for another episode of the Agile Mentors Podcast. I'm with you as always Brian Milner and back with us today, I have the one and only Mr. Hunter Hillegas with us. Welcome back Hunter.
Hunter (00:12) Thanks, Brian. Thanks for having me. Appreciate it.
Brian Milner (00:15) Yeah, really excited to have Hunter with us. Hunter is Mountain Goat Software's CTO. And so he is very familiar with the AI scene and what's been going on with AI, both as a hobby and as a professional. And we often share and swap stories about this and kind of exchange things that we've seen around the way about this.
Wanted to have Hunter on to talk about really some of the tools that people are using here in today's world, specifically around teams and how teams are using AI to kind of help improve their performance, improve what they do. Hunter, I'll start by just saying, I think that there's, when I talk to people about this, I usually get a lot of confusion. I know people feel overwhelmed a little bit and, you know,
a healthy dose of skepticism as well when it comes to this. So from where you sit, what are you seeing that's actually changing in the way teams work now that AI is kind of proliferating?
Hunter (01:12) Sure, I'll warn the audience a little bit: I'm pretty bought in on AI as a thing and its usefulness. Not that it's perfect, and not that there aren't obvious places for improvement, and you can have a whole separate conversation about potential downsides, but those are things outside the scope of what we're talking about right now.
But so for those of you, if you are an AI super skeptic, you might find my takes pretty frustrating. But I'll just dive in and we'll go from there. I mean, a lot of my own usage, what I see for myself, is around the software engineering role. And I know a lot of people that are listening to this podcast are doing Agile, Scrum, et cetera,
on software teams, right? So I think that's a pretty big chunk of our audience and the people that we're talking to. So they can probably relate to this. And I think it probably depends quite a bit on where you work, right? As with a lot of other technologies, bigger companies are slower to adopt and maybe have more rules about how things can be used, for better or worse, which can also mean that they're not getting to some of these things as quickly as smaller companies and startups,
where taking on a little bit more risk is more acceptable. And so I'm sure that people's experiences vary even if they're using these tools in their work. From the software side, I would say we're recording this in early January. And I would say that even though tools like Claude Code from Anthropic (OpenAI has a similar agentic coding tool called Codex, and there are others as well, of course, with full IDEs like Cursor),
those have been around for a while, right? Those are not brand new. But I would say that even in the last month or so, literally from the middle of December until January, I feel like there's almost been an inflection point with the quality of the models, especially for doing code-related tasks, where you're getting much better results, if you know how to ask for things, prompt the model in a way that
can get the most out of it, which is a skill. As an aside, I think I sort of chuckled when the term prompt engineer came into being a few years ago. Like, that can't be a real job. I mean, sure. But knowing how to prompt the models is a really useful skill. I'm just seeing the latest models, which at the time of this recording I think are Opus 4.5 from Anthropic and Codex 5.2 from OpenAI, and Gemini, which is
coming up super fast on the consumer side of things, with a little bit slower adoption on the programmer side of things, but I have no reason to count Google out. They're just really, really good. You're now at a point where you can ask it to go do a task and it might go away and work for, in some cases, over an hour and produce exactly what you asked for. Again, how you ask is still critically important, especially when we're talking about stuff like quality, but they're only getting better.
They're quite good now. I mean, again, this is a general statement. There are certain technologies and languages where they're much better than others. So if you spend your time building video games, you might find that they're not as good at game dev as they are at building web apps, for instance. And so it's not a completely even outcome. But it's pretty darn impressive. There is a sentiment I'm hearing more and more from
hardcore programmers, people that have been writing code for a long time and really enjoy the whole process: wow, I am actually spending more time reviewing the output from my coding agents than I am writing code myself. So it feels like there's been a bit of a flip, at least for some people in some roles. I'll keep caveating that, but it's pretty amazing. So I don't know, it's something I'm excited about. Personally, at Mountain Goat, there's a few of us doing software-related stuff. It's not a huge team.
Hunter (04:57) But the amount of stuff that we're able to build is, we can be dramatically more impactful with these tools than we could before. And the other thing is that, with the types of tasks and software that we are building, we're building stuff that we never would have considered building before, because it just wasn't worth the time, right? But now that it's much easier to produce this stuff,
Brian Milner (05:14) Mmm.
Hunter (05:19) our bar for how impactful we think the thing we're going to build will be has dropped, because we're able to get more done. So it's pretty exciting, and I'm personally pretty excited about it. And I can't wait to see what's going to happen over the coming months. I mean, I think it's only going to get better.
Brian Milner (05:35) That's awesome. Yeah. I mean, I think your sense of optimism is shared. I think there's a lot of optimism around it. And even if people are a little more skeptical, I think that they recognize that there is potential, even if it's not fully realized today. But that kind of leads me to thinking that it feels like there's a gap between sort of the hype and the reality a little bit.
Maybe it would help us to understand a little bit more to kind of back up and think about what are some of the things that AI does a really exceptional job with. What kinds of things does it do well? What kinds of things does it struggle with? So from your experience, what have you noticed on those two questions?
Hunter (06:20) Yeah, the hype thing is always an interesting one, right? And some of that comes from the labs making these models, right? They are companies that are marketing products they make. So, you know, surprise, surprise, they're going to paint them in the most optimistic light most of the time. And of course they're not great at everything. Writing software is actually something that they're
good at, have been good at in a general sense for a while, and in some cases are getting really good at. Like it's kind of scary good, scary in the sense of, how is this going to reshape the employment market for people that write software for a living? Which I know is something that I think about, and I know a lot of my peers think about too. But they're not good at everything. I mean, you know, even today, this is not something I've tried, but
Brian Milner (06:59) Right.
Hunter (07:11) Literally, minutes before we started recording, OpenAI announced ChatGPT for Health, which is a new health-specific initiative where you can bring in your health records if you've got an iOS device connected to Apple Health, and some other things that I know people are doing already. They're throwing their lab work into the chatbots and whatnot. Is it good at that? I mean, question mark, right? I don't know. I haven't tried this particular integration, but that's something where maybe the stakes are different, if not, you know,
you could say higher. But yeah, I mean, it's one thing if your chatbot writes some code and the test doesn't pass; you can recover from that. Versus if it gives you some really bad medical advice, that's a whole different ball of wax there. So it's interesting to kind of see how these things are applied in different areas. And I do think that skepticism is valid.
Software is nice because you can write tests and you can run it through a compiler, and you kind of know, right? Does it work? It's much more binary whether the output you got was correct, compared to a lot of other things where it's more subjective and harder to know if the answer it gave you was right. So, you know, I understand where people's skepticism is coming from in a lot of these cases, but it's hard for me to imagine a world where this stuff is going to go away, right? I think it's only going to become a bigger part of our lives. So I
personally believe that there's benefit in jumping in and learning more about it and how you can apply it, and learning where it's strong and where it's not as strong, so that you can, you know, use it effectively.
Brian Milner (08:38) Yeah, no, I think you're right. I think the context really matters. Like you said, if we're building something that's going to go inside someone's body, we want to be much more thorough in our testing to make sure that's not going to cause a problem. We don't want to use the person as a guinea pig. But if I'm building some game for the app store, I don't really care if there's a bug here or there. It's not really the end of the world.
Hunter (09:03) Right. I'm sure you
care, but it's not gonna kill anyone. I mean, just to be super stark.
Brian Milner (09:06) Right, yeah,
I should clarify, right? I'm not saying that you don't care if you have bugs, but I'm saying it's not gonna kill anyone if you have a bug, right? Right, exactly, exactly. Well, let's get to some specifics, because as a software developer, someone who's been using this, you talk about this accelerating what the team here at Mountain Goat's been able to do. What are some specific kinds of tools that you found useful, that have accelerated things and
Hunter (09:13) Right, which is obviously much, much higher stakes.
Brian Milner (09:31) that you come back to.
Hunter (09:33) Yeah, I mean, like I mentioned before, the coding agent tools, I think, are probably the most revolutionary in the work that we do, just because they can go off and do a bunch of work in a lot of cases, if they're prompted correctly, uninterrupted, which is pretty valuable, right? Because it means I can work on something else while that's doing its thing. It is kind of like having some additional team members, and maybe we'll touch on that a little bit more. Beyond that stuff, there's more. I mean, so...
Like a lot of other companies, we have analytics on stuff like our website, just so we can learn more about how people use the products and services that we provide, so that we can make them better. Like a lot of other companies, we probably have more analytics data than a single person can review, just because of the volume that we're talking about. So one of the things that I was able to apply some of these models to was analyzing that data in a way that
we can see more macro trends, right? Like, people are interested in this topic over this topic, or they're interacting in this way that suggests that maybe there's an opportunity for us to clarify something over here. The data was there, but it was really difficult to get value out of it because there was just so much of it; it's not something a person could go through and look through individually. Being able to aggregate it at a higher level was something that, it's not like it was impossible to do with other tools, but it was much easier,
able to be done in a way where even non-technical people on the team can ask a question and get a reasonable answer back without having to understand how all that stuff works. So that's an example of applying some of these language model tools to kind of a heap of data and saying, hey, I have this question, can you try and derive some inferences for me? So another example: we build internal tools. We have our website, where we have
many different pages discussing different projects and products. And we're always looking to try new things and do some experimentation. And we've got a great copywriting and design team that we engage all the time. But we have more stuff that we want to try than they have time to do in a day. So we're able to augment them with some additional tools that we put together, with which we can build some test pages and do some A/B testing of stuff, while staying on brand for Mountain Goat.
We have a pretty specific voice at this point. Mike's been doing this for a very long time, and a lot of what we write is from his perspective and almost exclusively written by him. So when we do marketing pages and whatnot, you know, it's nice to be able to make sure things stay within that voice. And this is another example where I can throw in something that I want to test very, very quickly, see how it goes. Maybe get rid of it, maybe promote it, maybe work on it with a copywriter to improve it. And so stuff that,
again, we could do before, but we can do it at a higher velocity than we were able to. So augmenting our existing team with some of these tools has been pretty fun.
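The analytics aggregation Hunter describes, boiling raw event data down into a digest that a language model (or a non-technical teammate) can ask questions of, might look something like this minimal Python sketch. The event fields and topic names here are hypothetical stand-ins, not Mountain Goat's actual schema:

```python
from collections import Counter

def summarize_events(events):
    """Aggregate raw analytics events into per-topic view counts.

    `events` is a list of dicts with hypothetical keys "topic" and
    "action", stand-ins for whatever an analytics export contains.
    """
    views = Counter(e["topic"] for e in events if e["action"] == "view")
    # A compact digest like this is small enough to include in a
    # language-model prompt alongside a question such as
    # "which topics are people most interested in?"
    lines = [f"{topic}: {count} views" for topic, count in views.most_common()]
    return "\n".join(lines)

events = [
    {"topic": "story-splitting", "action": "view"},
    {"topic": "story-splitting", "action": "view"},
    {"topic": "estimation", "action": "view"},
    {"topic": "estimation", "action": "click"},
]
print(summarize_events(events))
```

The point is the shape of the workflow: deterministic code does the heavy aggregation, and only the small summary goes to the model, which keeps the prompt grounded in real numbers.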
Brian Milner (12:17) Are there some typical kinds of software-related tasks that it's helped accelerate? I'm thinking things like testing or deployment, kind of DevOps-related work.
Hunter (12:29) Yeah,
yeah, that's a good question. Yeah, I mean, like everybody else, I am a test believer. Please do write your automated tests. But also, they're not super fun to write in most cases, right? They kind of feel like drudgery, I think, for me and for a lot of other programmers. It's like, I know this is like taking my vitamins, right? I have to do this, but this is not the fun part of my job. It is nice to be able to say, hey, language model, I need some tests for this. Now, again,
Hunter (12:54) I would advise you to be very specific in what you ask for, especially if you've got some data that you can use as a validation. And you don't want to get into a place where your tests have all been generated automatically and you don't know if the tests are right or not, because that's sort of self-sabotage there. But yes, so if you can get to a point where you know that it's not garbage in, garbage out, it can be very nice to automate away some of that drudgery.
Brian Milner (13:09) Hahaha.
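One way to keep model-generated tests from being garbage in, garbage out, as Hunter suggests, is to anchor them to input/output pairs you verified by hand. A minimal sketch, using a hypothetical function and made-up golden values:

```python
# A hypothetical function a model might have generated tests for.
def apply_discount(price, percent):
    """Return price reduced by percent, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

# Hand-verified golden data: inputs and expected outputs checked by a
# human, not by the model. Generated tests anchored to cases like these
# can't silently encode the model's own misunderstandings.
GOLDEN = [
    (100.0, 10, 90.0),
    (19.99, 0, 19.99),
    (50.0, 100, 0.0),
]

def test_apply_discount_against_golden_data():
    for price, percent, expected in GOLDEN:
        assert apply_discount(price, percent) == expected
```

A model can then be asked to add edge-case tests around the golden set, while the human-checked cases remain the ground truth the suite is validated against.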
Hunter (13:20) And DevOps is another good example. I am enough of a sysadmin to be dangerous. I definitely can do the setup stuff that we need at Mountain Goat, even though it's not my full-time role. But it is nice to be able to say, go off and spin up a couple of
EC2 instances on Amazon Web Services, and give me a status on this, and hey, take this one down. Again, that's stuff I could go do, click around on the website and handle, but it is nice to be able to automate it. And also, something that I use more than I thought: Amazon Web Services is a great example because they've got so many products. The core products that I use, I'm pretty familiar with, but there's a ton of other ones where
maybe I've heard of them, maybe I haven't, and I don't know exactly how they work. Using some of the AI-powered browsers, like Atlas from ChatGPT, or the Browser Company has a few of them, and Chrome is doing more of this natively too, you can go on the Amazon website and say, explain this to me. And it can help me a lot, right? The documentation is there. I could go spend an hour reading it all, or I could just ask it this question. It can look at the pages and be like, okay.
Brian Milner (14:25) Hahaha.
Hunter (14:34) And it also knows more about me. So it's like this fits into your other workflows this way, right? That saves me time. I like that because, you know, that's time that I can spend on something else.
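The kind of EC2 status check Hunter mentions usually reduces to summarizing the `Reservations` structure that boto3's `describe_instances` returns. Here is a rough sketch with canned data standing in for a live AWS call; the instance IDs are made up:

```python
def summarize_instances(reservations):
    """Flatten a boto3-style describe_instances response into
    (instance-id, state) pairs an agent could report back."""
    summary = []
    for reservation in reservations:
        for inst in reservation["Instances"]:
            summary.append((inst["InstanceId"], inst["State"]["Name"]))
    return summary

# This shape mirrors the "Reservations" list EC2 returns; in real use
# it would come from boto3.client("ec2").describe_instances().
sample = [
    {"Instances": [
        {"InstanceId": "i-0abc", "State": {"Name": "running"}},
        {"InstanceId": "i-0def", "State": {"Name": "stopped"}},
    ]}
]
print(summarize_instances(sample))
```

Keeping the summarization in a small deterministic helper like this, and letting the agent call it, is one way to get the convenience Hunter describes without trusting the model to read raw API responses correctly.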
Brian Milner (14:45) Yeah, those are great examples. And I know there's a lot of useful information coming at you here in this episode, so it's a lot to take in. We have put together a short little PDF of Hunter's top tools and recommendations, and that's in the show notes for this episode. So just know that that's there for you as we walk our way through this. I want to talk a little bit about where AI starts to go wrong.
When teams struggle with AI, what are usually the signs? What goes wrong first?
Hunter (15:15) Well, I think some of this is to be determined, right? We will see as this stuff becomes more popular, and I'm sure a lot of it is also team-specific, right? The circumstances can be very different: how this stuff is introduced, you know, especially given the hype cycle around AI. I have not experienced this myself, but I've definitely heard about some very top-down
projects, where someone, in some cases many layers above the team, forces them to start adopting these things without really telling them what to do or how they're supposed to use it. So that, you know, they can get "we did AI" onto the minutes for the board meeting. They kind of force it top-down without a lot of context or support, and I've heard bad things about those types of
mandates in some cases. And you can imagine why; just like anything else, something coming down from above without a lot of guidance doesn't always work out very well. So that's probably unavoidable, just given how all that works. And I think that's definitely happening with some teams, and I'm sorry if that is your situation. And then, of course, there are the inherent limitations in the models that we have today.
Brian Milner (16:13) Ha
Hunter (16:29) They're not perfect. They do still hallucinate. That's less of an issue if you ground them with data and context, but that doesn't mean it doesn't still happen, and you need to be aware of that. And the sort of hype cycle that you mentioned earlier can feed into that too, because people are like, it can do anything. Well, that's not true, right? You need to be realistic about what it can and can't do. And that may not be obvious, given sort of the bill of goods that's been sold in some of these cases.
I do think that it's important for people to keep that in mind as well. And some of that can be sort of navigated through experimentation, especially in areas where maybe you are an expert, and you know kind of how it's supposed to work, and you can see what it does: it did this well when I asked it this way; it did not do well when I asked it this other way, so I need to make sure I give it enough information to handle that in the future. Those kinds of things I think can be pretty useful.
Brian Milner (17:19) Yeah. Well, let's shift gears a little bit, because I know that you're not only a consumer of AI tools, but a creator of them: you worked on Mountain Goat's AI Toolkit. And I want to talk about that for a little bit. Were there specific problems that Mountain Goat was trying to solve with that AI Toolkit?
Hunter (17:38) Yeah, so just for those that don't know, we do produce something called the Mountain Goat AI Toolkit, which anybody can sign up for if you want to try it. I think we'll try and put a link in the show notes. Brian, I think I know somebody who could get that in there. Or if not, it's linked from the main Mountain Goat website as well. And it includes some specific tools that we've set up to try to
Brian Milner (17:54) Haha
Hunter (18:07) produce some outputs based on sort of the Mountain Goat way of doing things. So we focused on a few things. First, a sort of general agile coach: we get questions all the time, and Mike gets more questions via email and other ways than he has time to answer. And so what we wanted to figure out was, can we build something that can take all of what he's written, which at this point, across books and video courses and hundreds of blog posts and weekly tips, is quite a bit of stuff?
So can we take all of that collective knowledge and use it as a basis for a coaching tool? And we've deployed that; that's part of what's available, and it works quite well, if I do say so myself. I mean, it is good at answering those questions. We did some red teaming of sorts, Brian, and I know you were a part of that, trying to break it early on, which I appreciated, because there were some weird rough edges. I think one of your colleagues somehow
convinced it to go on and on about the Revolutionary War or something like that. It's not supposed to do that. So we were able to go through that process and resolve those kinds of issues. But it's been a useful tool, and we've gotten some great feedback from people on the coaching bit. Beyond that, too, some specific things. So, story splitting: Mike has a method called SPIDR, which is something he teaches in his Better User Stories course.
Brian Milner (19:04) Ha
Hunter (19:24) And we will do webinars from time to time to talk about user stories and splitting, and we take questions as part of those webinars. And splitting is something that would come up all the time. It's something that clearly a lot of teams struggle with, right? They're not quite sure of the best techniques, whether they should slice this way or that way, you know. And there definitely are best practices, SPIDR and other techniques as well, ways that you can use to get some consistently good results. So we wanted to try to
codify that a little bit on the AI side and turn it into a tool that could help people: here's a user story, and here's how we would split it according to these techniques, and some options, and here's which of those splits we probably would recommend. And here are some anti-patterns too, like, you probably don't want to do this stuff that we see happen again and again. So that was an example of something where we could take something specific from the course, the SPIDR story-splitting technique in this case, and scope it
down to something that is a useful tool, and again, something that we've gotten some great feedback on. In addition to that, there's also some product backlog generation, almost like the inverse of that in a way: being able to fill out a product backlog, especially if it's like, hey, I need to build a login system, can you write me some backlog items? In many cases those are things that, especially if you've done it before, you would have thought of, but maybe some that you hadn't. And even for the stuff you would have thought of automatically,
it can save you some time. All this stuff's been a lot of fun to put together, and like I mentioned, we've been getting some really good feedback. It's available through the website. And one of the things that hasn't quite launched yet, because we're waiting on OpenAI, is that all those tools are going to be available inside of ChatGPT proper. This is something that they started to do at the end of last year, what they're calling apps inside of that platform.
It looks like that's going to be coming to other platforms as well, so when similar opportunities are available in Claude and Gemini, et cetera, we will expand to those as well. But since people are already using these tools, we wanted to try and make them as accessible as possible. So if you're already inside one of these other chatbot tools, why not be able to apply some of these specific skills from Mountain Goat inside those tools as well? So that's pretty exciting. I'm looking forward to that being launched.
Brian Milner (21:31) That is exciting. Yeah, that'll be interesting to see how that plays out. When you guys were talking about putting this tool together and the kinds of things that would go into it, I'm just curious, and the answer may not be much here, but I just thought I'd ask, is there anything that you guys discussed including and deliberately decided not to include? Is there anything that you thought maybe this would not be a good use of this to have it do this thing?
Hunter (21:55) Well, I don't know if it was necessarily a specific feature that we considered and then threw out, but we went into the project initially not knowing if it was going to work. I mean, it was like, we have no idea if this is going to be good enough, right? It could just be garbage output, not helpful. And we had a pretty high bar for what we wanted it to be able to do.
So when we started, it was not a slam dunk, this is obviously going to be something that will work well. It was pretty unclear how well it would work. That said, once I started doing some early testing and getting it out to some folks internally, it became clear pretty quickly that we could make it work and that we could get high quality output. And still, it was important to put some guardrails in, like the kind of stuff that I mentioned that you were part of testing, to make sure that it was reflective of...
what Mountain Goat's all about, but also not gonna do some things that would be awful slash embarrassing. So that stuff was important, but it turns out that it has worked really well and I'm happy with that.
Brian Milner (23:01) Awesome. Well, as we're getting close to the end of our time, I want to look ahead a little bit. I know you stay up to date and current on things, and as you said, this is kind of early January. So yeah, right, exactly. But I'm kind of curious what your thoughts are on the developments you're seeing take place in this area, you know, helping teams, helping software developers.
Hunter (23:14) Yeah, I don't know when this will air, but that's where we are as we're talking now.
Brian Milner (23:28) What kind of feels inevitable about it or are there things that you feel like are kind of more overhyped right now and don't really live up to the promise?
Hunter (23:36) Well, I'm sure that there are people that would disagree with me and say that there's a lot more that's overhyped. I have some friends that are much, much more skeptical about these tools than I am, and so they would probably give a much harsher grade in that department. But the thing that I wonder about, and I don't know, is how this is going to change the way that these teams work.
Brian Milner (23:50) Yeah.
Hunter (24:06) Right? I mean, if a single developer can do much more cross-functional stuff because the tool is doing it for them, right, at their direction, then something where they maybe would have had to loop in a designer more often changes. I mean, it's going to change how much each person can produce. It's going to change how quickly they can do it. It's going to change what the bar is for when they need to pull somebody else in, if it's outside their skill set.
Brian Milner (24:06) Hmm.
Hunter (24:34) And, I mean, I hope it's going to mean that it's freeing up time, that people can be doing some stuff that they wouldn't have been able to do before, and the result is a positive thing for the whole team. But I do really wonder about the team dynamics, what that's going to do to a lot of the stuff that we teach at Mountain Goat, and how it's going to change things. I don't have the answer to that; I think that's way above my pay grade. But I do think it's really interesting to see
what impact that will have.
Brian Milner (25:02) Yeah, yeah, I agree. I agree. I mean, I think this year is going to be very determinative, you know, of kind of the direction, and how this starts to play with what we do on a day-in, day-out basis. I'm really curious to see how it changes the dynamics of a team. I know we kind of talked a little bit about that, but, you know, how does your team change when you have virtual team members or AI team members as part of it?
Hunter (25:18) Yeah.
Yeah.
It seems
inevitable to me that it will change. And of course that could mean that there's potential for it to change for the worse, which is, you know, another reason I think to be eyes wide open about this stuff, trying to understand what the implications are so that we can avoid bad outcomes. But I don't think it's going away. So that's one of the reasons that I find it interesting.
Brian Milner (25:31) Yeah.
Just as sort of a way to start to wrap us up here, I'm kind of curious if you have a software developer friend who doesn't really know too much about this stuff and wants to start to dip their toe in, where would you suggest they start to explore this?
Hunter (26:04) Yeah, it's a great question. I think my recommendation would be, as much as possible, and this can be complicated by access to tools and whatnot, to try to do experiments on whatever the current state-of-the-art models are. Because the field is moving very quickly,
if you're trying to learn something and you're using a model that is six months or twelve months old, you're going to have a very different experience from somebody that is on the cutting edge, right? So as much as possible, within the constraints of your work and the tools they give you, and economic constraints, because some of these tools are more expensive than others, of course, and that's a factor: if you can do your experiments on whatever the current state of the art is, I think you'll find
it to be a more interesting set of experiments. I think, personally, as a software developer, the agentic coding tools, the Claude Codes and Codexes of the world, are the most interesting area. Both of those are neck and neck, just like the models themselves, right? One month Gemini is the best, the next month OpenAI is on top again. I mean, they're trading positions so often, you never quite know. But there's a lot going on there. So I really find those agentic coding tools to be
really, really fascinating to see what they're capable of, to kind of push them a little bit. And I think, you know, you'll be impressed.
Brian Milner (27:23) Yeah, that's good advice. That's good advice. I mean, I think there's no time like the present. If you're interested in it, go ahead and start. Explore, try things, you know, fail-fast kind of thing.
Hunter (27:36) Yeah.
Yeah.
Brian Milner (27:37) Well, Hunter, I really appreciate you coming on. This has been some really useful information for everyone who's listening. Again, just a reminder that we have a link in our show notes to a PDF just for this episode, because I know we're covering a lot here, and there's a lot of depth that we're not even able to go into here on the show. So look for that link. It covers some of the top AI tools for Agile teams that Hunter helped us put together. You'll find that in our show notes. So, Hunter, thanks again for making time.
Hunter (28:01) Yeah, of course. Thanks Brian. Thanks for having me.
