Agile Mentors Podcast from Mountain Goat Software

Practical advice for making agile work in the real world

Mountain Goat Software's Agile Mentors Podcast is for agilists of all levels. Whether you’re new to agile and Scrum or have years of experience, listen in to find answers to your questions and new ways to succeed with agile.


#169: Building Practical AI for Agile Teams with Hunter Hillegas

December 03, 2025     34 minutes

It’s not just about cool tools. Hunter Hillegas (CTO at Mountain Goat Software) joins Brian to unpack what it’s really like to build with AI—from hallucinations and context management to dev workflows, testing strategies, and where the humans still matter most.

Overview

This episode dives deep into the real work behind bringing AI into agile. Brian and Hunter trace the arc from early experiments to full-scale agents, sharing what it took to build responsibly on large language models (and what still keeps them up at night). They get into the weeds of context handling, trust and verification, dev productivity, and what makes a good AI coach actually helpful. Along the way, they explore how tools are changing—faster than most teams can keep up—and what that means for the future of learning, coding, and collaborating in agile environments.

References and resources mentioned in the show:

Hunter Hillegas
AI Toolkit
Agile Skills Video Library
Mike's Better User Stories Webinar
#82: The Intersection of AI and Agile with Emilia Breton
#151: What AI Is Really Delivering (and What It’s Not) with Evan Leybourn & Christopher Morales
#161: Test-Driven Development in the Age of AI with Clare Sudbery
#166: AI Isn’t Coming for Your Job, But It Is Joining Your Team with Dr. Michael Housman
Subscribe to the Agile Mentors Podcast

Want to get involved?

This show is designed for you, and we’d love your input.

  • Enjoyed what you heard today? Please leave a rating and a review. It really helps, and we read every single one.
  • Got an Agile subject you’d like us to discuss or a question that needs an answer? Share your thoughts with us at podcast@mountaingoatsoftware.com

This episode’s presenters are:

Brian Milner is a Certified Scrum Trainer®, Certified Scrum Professional®, Certified ScrumMaster®, and Certified Scrum Product Owner®, host of the Agile Mentors Podcast, and a trainer at Mountain Goat Software. He's passionate about making a difference in people's day-to-day work, influenced by his own experience of transitioning to Scrum and seeing improvements in work/life balance, honesty, respect, and the quality of work.

Hunter Hillegas is the Chief Technology Officer at Mountain Goat Software. With over 20 years of experience in software development, product ownership, and team leadership, he leads the creation of tools like the AI Toolkit and Team Home to support effective, engaging learning experiences. Hunter lives in Santa Barbara, California, with his wife and their dog Enzo.

Auto-generated Transcript:

Brian Milner (00:00)
Welcome in Agile Mentors. We're back for another episode of the Agile Mentors Podcast. I'm here as always, Brian Milner. And today I have a special guest with us. Hunter Hillegas is with us. Hunter, welcome in.

Hunter (00:12)
Hey, Brian, thanks, it's great to see you.

Brian Milner (00:14)
Hunter's been on the show before if you missed him previously, but Hunter is the CTO at Mountain Goat Software. Hunter and I trade AI kind of tips and tools all the time and have a lot of fun doing that. We thought we'd have Hunter on for a couple of reasons. One, in general, we wanted to kind of focus a little bit on the AI landscape. And part of that AI landscape, I think, is

some of the big changes that have happened at Mountain Goat Software in regards to AI and what Mountain Goat Software does and how it makes things available to its customers. So Hunter, I know one of those things that's been kind of a new addition is we had some scattered kind of AI tools for people that did different things. And you could find those in our

site, but now they've kind of all been rolled together. Talk a little bit about this AI Toolkit that's been put together.

Hunter (01:07)
Yeah,

yeah, thanks, Brian. Absolutely. So like a lot of other companies, we are embracing this stuff, exploring a lot, trying to see what works, what doesn't work well, the places where...

The technology is really applicable and the other places where maybe there's promise, but it's not there yet. And, you know, it may be something down the road, but we're always trying to balance that. And we have done a lot of different AI experiments. We've done some stuff in classes, which I know you know, because you're teaching those, but also this AI Toolkit, which is something meant to be used after class, but it's also available for free. We'll talk about that in a second. So even if you haven't taken a class from us, you can sign up and try it.

We started off with some experiments. Our GoatBot coach was the first thing that we did, which was trying to take all of the resources that Mike and you and others have created for Mountain Goat over all the years and use them to create an agile coach. And that really was a sort of experiment to see, how well will this work? Will it give good answers? Will it tell you waterfall is the best thing ever? So, you know, we wanted to see how it goes, and that was a success. And then through that and some other experiments, like a

prototype story-splitting tool, which is something that, you know, we got a lot of questions about, the right way to split different kinds of stories, something that Mike's written about and taught about a lot. So there's a lot of material there. We took what we learned from both of those things and created what we call the Agile AI Toolkit. And right now that is comprised of three major features. The biggest one is this coaching feature, which is a pretty big expansion from what the original GoatBot vision was. The original GoatBot vision was pretty

much a one-question, one-turn kind of thing where you would say, you know, do I really have to do a daily scrum every day? And we'd give you an answer based off of our materials. The AI Toolkit coach is much more of a conversational back and forth, like you would be used to with ChatGPT or Claude or Gemini or what have you. So you can have full-on conversations. It supports media like images and PDF uploads and that sort of thing. And then in addition to that, it's got some special tools built in. So

if you ask it to split a story, it will sort of switch into story-splitting mode, where it can use Mike's SPIDR technique to do breakdowns of these stories and explain how it did it based on the various parameters there, to try to give you some advice if there are stories that your team is having trouble splitting, or you're not quite sure how to approach it, or you're not sure if what you're doing is right. You know, there are some best practices around that that you want to learn about.

So that's really useful. And then the third tool that's built in is sort of almost the flip of that, but it's generating backlog items. So let's say I want to build a login system for my website, something that we have all probably done in some form or another. There's a lot of sort of basic stuff that you expect to be a part of that. You don't necessarily want to write the 12 or whatever backlog items to do it. It can do that for you based off of some information you give it about what you're working on, the parameters of your team, et cetera. So

Those are the tools that we're focused on now, but we're adding stuff all the time. I mean, we're excited about where this stuff is going. We see a lot of utility and especially as the models continue to get better, we can do even more. So it's pretty exciting.
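
For listeners who want to experiment with the backlog-generation idea Hunter describes here, a minimal sketch of prompting a model to draft backlog items might look like the following. This is not how the Mountain Goat toolkit is built; the model name, prompt wording, and feature description are illustrative assumptions.

```python
# Minimal sketch: asking an LLM to draft backlog items for a feature.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name, prompt wording, and feature description are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

feature = "A login system for an e-commerce website"
team_context = "Five developers, comfortable with Django, new to OAuth"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You write product backlog items as user stories in the form "
                "'As a <role>, I want <capability> so that <benefit>'. "
                "Return 8 to 12 items, each with brief acceptance criteria."
            ),
        },
        {"role": "user", "content": f"Feature: {feature}\nTeam context: {team_context}"},
    ],
)

print(response.choices[0].message.content)
```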

Brian Milner (04:15)
Yeah, it really is exciting. It's like you said, these tools kind of used to exist in their own little buckets and you had to go to the right place for the right thing. But now they're sort of all in one spot. I mean, obviously the strategy moving forward is to just add more and more kinds of things into that coaching tool. And, you know, I'll...

throw my hat in the ring here and kind of give my endorsement. I really think it does a great job of just what you said, of actually kind of coaching you through it. Not technically coaching, you know, but in the broad sense, right? In the broad sense of coaching, it does a good job of not just giving you an answer, right? Here's the story split five ways.

Hunter (04:52)
Sure, in the broad sense.

Brian Milner (05:02)
but here's a story split five ways and here's why I split it this way, right? And that gives you really good practical knowledge so that you start to build your own capability of being able to do that moving forward, and maybe you don't need to use it after doing it a few times.

Hunter (05:18)
Yeah, yeah, we

definitely see like sort of philosophically these tools as augmenting your team, right? That is how we've approached all of these things to help, whether it's the Scrum Master, product owner, team member, be able to, you know, if there's parts of the job that this can help with to spend less time potentially doing those or to augment those and more time doing something else that is potentially higher value.

Brian Milner (05:26)
Yeah.

I'm just kind of curious, I don't know if you have anything that would pop into your head when I ask this, but whenever you build something like this that's complex, there's always like some road bumps along the way that throw you a little bit. What was particularly hard about kind of putting this together?

Hunter (06:03)
Well, I mean, one of the bigger things that I was concerned about in building this is, I mean, you can imagine terrible outputs, right? And that can be a very wide range, everything from something really inappropriate. This is a work-context tool for almost everyone, right? So you can imagine even something that might be maybe okay in a non-work context is way less okay in a work context. So there's that sort of general social, like it is,

Brian Milner (06:10)
Right.

Hunter (06:30)
does not talk about politics. You cannot engage it on a topic like that. This is not the place to do that, right? There are plenty of other places where folks that want to talk about that can do that. So there's stuff like that, the general societal stuff, but then also kind of trying to hone the responses, right? Everything from just quality, too long, too short, but also, is it giving advice that we would feel good about? Because, you know, we don't want to be putting out bad advice in the world, at least from what our perspective is about good advice.

And so that was something that I was worried about and didn't know going into especially the original experiment whether or not we could feel good enough about its outputs to release it. Obviously, we're not monitoring these responses. We did a beta period where we got feedback on stuff, but we, you know, I was worried it was going to give bad advice or not understand the source material or start making things up. The sort of hallucination problem that

We hear a lot about when it comes to LLMs. And so, you know, you can never completely eliminate all those problems, but we did put in a fair amount of mitigations and, you know, continue to work to improve that stuff, and also allow users to give feedback if they're like, this answer was terrible, please improve this.

Brian Milner (07:41)
Yeah. Yeah.

It's kind of been a nice evolution in multiple ways, because I remember you and I early on doing some testing back and forth with GoatBot, with the coaching tool. And, you know, we have this kind of serendipitous thing of the fact that the models themselves that kind of underpin it have gotten so much better, but also then, combined with that, you know, Mountain Goat's been improving and evolving

the structure of these tools as well from our side. And those things together have created a tool that's really powerful. But I remember early on having conversations where I've tried to test it and trick it and say, what does Scrum say the difference between a product owner and product manager is? And it would give me an answer, but that's not the right answer, right? Scrum doesn't have a product manager, so.

Hunter (08:29)
Yeah, definitely since the base model has all kinds of training from all over the place, we had to add our own set of guidelines to influence it to give the kinds of responses that we wanted. I remember in the beta period, one of the folks somehow got it to go on a long tirade about the Peloponnesian War, which obviously was not something we expected, intended, or wanted. So there were definitely some things that we needed to fix as we were tuning it.

Brian Milner (08:54)
So a hot take on the Peloponnesian War, huh? Okay.

Hunter (08:56)
It was somehow Scrum related, but it was just way out of left field. It was not good.

Brian Milner (09:02)
That's hilarious. Well, you mentioned a little bit, because I'm sure people are curious who are listening to this, that it is something that we make available to people who take a class with Mountain Goat, but also people can try it for free. So I just want to make sure people understand kind of the differences there.

Hunter (09:19)
Yeah, for sure. We have a set of resources that we call MGS Essentials, which we launched earlier this year, which was a continuation of the original Agile Mentors site, and which includes a whole bunch of stuff. And this part is totally free. It includes a ton of resources, like the backlog of Mike's weekly tips that he sends out on Thursdays, these AI tools, a fully interactive Planning Poker tool for teams that want to do that. The free version has some limits associated with it. So there's a maximum

number of people you can have in each poker game, there's a max number of chats you can use in the AI tool, and then if you train with us, you get the unlimited versions of all of these tools, so you can have those limits lifted and use them even more extensively. Let's see, I'm gonna pledge right now to put a nice little link in the Mountain Goat navigation. I just checked and there is one there now, but I actually want to tweak the button, so by the time this airs it will be perfect. So if you guys do want to go sign up

Brian Milner (10:13)
Hahaha.

Hunter (10:14)
and try this out, if you haven't already, you can go to the mountaingoatsoftware.com website. Under the Resources tab at the top, there will be a link to the toolkit, and it'll be really obvious how to sign up for a free account to do that if you want to kick the tires. And we encourage that. And also give us feedback. There's a link in the AI Toolkit. We are really curious about how people are using this, what they find valuable, and how they think it could be more valuable.

Brian Milner (10:36)
Awesome, yeah. And just so people know, I know Hunter is serious about the feedback part of it. So if you do have some suggestions or feedback, please do provide that, because it can be helpful in improving it and making it better for you and for others as well.

Hunter (10:53)
Absolutely,

and there are some user preferences that you can set to kind of tune your responses. You know, if you like your user stories written a certain way, we do let you do that. You can give us more information about the project you're working on and maybe the size of your team or if it's a team that is not fluent in some set of technologies, you can specify that because it might influence whether you need more spikes in your splits and that sort of thing. So there are some configuration options, but again, we're really interested to see if you're like, I really wish I could do this. We want that feedback and I can't promise

everything's gonna go in right away, but we love to hear it.

Brian Milner (11:25)
Awesome. Well, I want to shift the focus a little bit because there is another kind of new AI functionality that Mountain Goat has added recently. It's exciting to me. As a trainer, as an educator, I think it's kind of an exciting path to take to have this kind of thing available. People that have been listening the past few weeks know that

We've launched our own online course library that you can buy that includes all the courses that we've had. One of the things that comes as a bonus of that is a little AI assistant that can help you with the learning as you go through the class. Tell us a little bit about that and the journey of creating that.

Hunter (12:05)
Sure, the Agile Skills Video Library. Yeah, it's our sort of combination of all of our video courses, including Brian's two courses on retrospectives and Mike's user stories course and estimating course and more. That's also on the website if you want to check it out. I'm biased, but I think it's an incredible deal, and so I would encourage folks to at least go look and see what it's about. But in addition to that, if you buy that bundle and get these courses, there's a bunch of different videos that are separated into different modules, so you can work through the path and learn whatever the topic is. Underneath each video, we have an AI assistant that knows about the context of the video that you're watching. So you can ask questions like, you know, please explain this to me in simpler terms, if you just didn't quite get it for whatever reason, or ask for an example that isn't included in the source material. Ask it, how can I apply this in my job, right? It will ask you some follow-up questions about what your job is and how you're looking to apply it, but it can then help you with that.

And it's really fun to be able to use the tool right in the context of the video, right? So as you go, you can make sure that you're getting the most out of each lesson. And if there is something where you're wanting to get some clarification or follow up, it's right there in the piece. This was actually kind of a last minute addition. We were putting together the bundle and we were putting together all the marketing and that sort of stuff for getting it together. And I think it was, I don't know, a week before we launched

I just had this idea. Because, we may talk about the landscape more broadly in a minute, but you know, OpenAI had a developer day in early October, I think it was, and one of the things they launched was this Agent Builder tool. And I thought, you know what, I can do something with this. And so I used that tool (we can talk more about those details if you want to) to build this agent that lives inside our video-playing experience, over the course of three or four days. Then the marketing team had to redo a bunch of the marketing stuff to reference it at the last minute. But it's turned out to be really popular with the folks that have gotten into those classes. So I'm really glad that we did it, and we continue to evolve it. It's useful to be able to have it inline. You don't have to go someplace else, don't have to try and, you know,

copy a transcript into ChatGPT or something like that. It's just right there and you can ask your questions and get some good context and follow up.

Brian Milner (14:26)
Yeah, that's an amazingly helpful kind of tool as you're going through. It kind of transitions it from being sort of a static thing to being very dynamic and personalized because you're watching a video that Mike or I put together, but then you can relate it to your situation. You can ask it questions about...

that are specific to you. Hey, I've got this weird kind of situation in my company where we do this. We get those questions all the time in class and that's what people love about having those open Q&A periods in our classes. So I know it must be an incredible help to people as they go through the classes. What kind of feedback have you gotten from folks about it?

Hunter (15:09)
It's been really positive, you know, in addition to getting direct feedback, I mean, mostly praise. I mean, maybe it's possible that there may be some feedback I haven't heard where someone has an issue with it, but it's been sort of universally positive. I mean, of course, you know, nothing's perfect and we continue to improve it, but people seem to be really liking it and engaging with it. You know, you always, when you build a new feature, you're wondering if it will get

product-market fit, and so far it does seem like this is something that people find valuable, and it is getting usage. So they are asking some of those questions. We prime it a bit with some of those questions by default to give you a suggestion, but you can ask anything that you want. And it also knows about the other videos that you have in the bundle you purchased, so it might say you should watch this video next, even if it's in a different course and might not be in the general sequence. You know, a lot of

people

just watch kind of top to bottom to get the whole thing. And of course you can do that. But you also can kind of jump around a little bit where if you have a user stories question from the Better User Stories course, maybe it translates into the Estimating with Story Points course because there's a connection there or maybe the Agile Estimating and Planning. So it knows about those other videos. It will suggest those if they're appropriate. So that's another benefit that is pretty fun.

Brian Milner (16:23)
That's awesome. Yeah, I do want to kind of talk a little bit more about the broader landscape, and the fact that you built this with the Agent Builder kind of tool is a perfect segue into that. So tell us a little bit about the process of creating something like this. You don't have to give us all the gory details, but just kind of what sparked the idea, and tell us a little bit about the tool itself.

Maybe that'll help us to kind of understand in our jobs how we might be able to use that for things we do.

Hunter (16:59)
Yeah, so the OpenAI Agent Builder is a tool that they introduced not long ago. And based on what they said when they introduced it, it really was to try to make it easier to build multi-step agents using their models.

They have SDKs that software engineers can use to build these agents, so it's not necessarily a capability that didn't exist before, but it is a completely visual mechanism. If you're familiar with a workflow builder type of tool, you drag nodes over and connect them together visually to kind of create a flow, you know, ask for this input, run this model to get an output, feed it into this next stage, bring in some data from over here, combine it all together. And you can do it visually even if you have

no programming experience. It does expect you to write prompts and whatnot, like you would do for a model in a chat session, for instance, but you don't need to know how to code really anything to do the setup all the way through the preview process. And then they give you code to embed on your site, and there's a little bit of development work there to get it installed. But, you know, that's something that, if you pass it off to somebody on your dev team, they could do in, you know, less than a day if they're doing

it

sort of based on the examples that OpenAI provides. So you can go from idea all the way to deployment very, very quickly with these tools. And of course, there are other tools like this, right? OpenAI's Agent Builder is not the only one. There's Zapier and there's n8n and a bunch of others. So there's competition, which is great. But yeah, it was a really good experience. Like I said, I was able to put something together very quickly, which otherwise, not that we wouldn't have done it, but it would have taken longer. I don't know if we would have been able to make it on time for the launch, et cetera. For those that are interested, it's just part of the OpenAI developer program. Anybody can sign up and go take a look. It's fun to experiment with. They've got a bunch of templates too, so you can kind of see how to build a customer support bot, that kind of thing. So it's pretty simple.

Brian Milner (18:56)
Yeah, I would imagine that in this case especially, it's a known universe of data. You know, it's something you can point to, and you control the data that you're feeding it to do that kind of thing. So I would imagine that would be kind of, I don't want to say simple, but simpler than, you know, connecting to four or five different sources for data.

So I'm kind of curious and just I'll try to play devil's advocate here. And if I'm a scrum master, I'm out there and I'm thinking, well, yeah, it kind of be nice to automate some of the stuff I do. Is this the kind of thing that I could connect to my JIRA instance and pull across information, look for patterns or create reports or things with?

Hunter (19:35)
Yeah, you can definitely. So they allow you to bring in external data a couple different ways. I mean, if you have sort of static data like PDFs or text documents or markdown documents or whatever, you can simply include them and then it will use that corpus of information as it's doing its answers for the various model steps. But you also can bring in external data through a protocol called MCP. It's been widely adopted in the industry now. It stands for Model Context Protocol, and it is a standardized way for models, not just OpenAI's, to bring in data from remote sources. Actually, Anthropic was the one that created this, but it's now broadly available across every major frontier model provider, I believe. OpenAI has a whole bunch of ones that they've created inside their tool, a lot of common services like Shopify and Dropbox and Google, Gmail, and there's a pretty long list, but you can also create custom MCP servers that you can connect to.

You might need to negotiate with your IT department to get the right permissions to do this. That can always be a thing when it comes to data access with these tools. But presuming you can navigate that part, you can bring in data from tools like JIRA, which would be a great example, or any other tools that may include that information, and use it to provide information to the model. And you can then ask it to do things with it. So yeah.
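
For readers curious what a custom MCP server looks like in practice, here is a minimal sketch using the official MCP Python SDK's FastMCP helper. The `fetch_sprint_items` tool and its sample data are hypothetical placeholders; a real JIRA integration would call JIRA's REST API with credentials your IT team approves.

```python
# Minimal sketch of a custom MCP (Model Context Protocol) server.
# Assumes the official `mcp` Python SDK is installed (e.g. `pip install "mcp[cli]"`).
# The JIRA lookup below is a hypothetical stand-in, not a real integration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sprint-data")

@mcp.tool()
def fetch_sprint_items(sprint_id: str) -> list[dict]:
    """Return backlog items for a sprint so an MCP-aware model can reason over them."""
    # Placeholder data; a real server would query JIRA's REST API here.
    return [
        {"key": "PROJ-101", "summary": "Add password reset", "status": "In Progress"},
        {"key": "PROJ-102", "summary": "Rate-limit login attempts", "status": "To Do"},
    ]

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport, so an MCP client can launch and call it
```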

Brian Milner (20:50)
That's awesome. Yeah. I mean, it does sound like a very fascinating tool. I mean, in the realm of agents themselves, we've had things like custom GPTs in the past, which are sort of very, very rudimentary forms of that. But this is sort of a higher level, a little bit higher level of that kind of a thing. Yeah. You mentioned n8n. I love that tool as well. That's maybe even a step higher,

just because it's got more flexibility to it, but it also takes a lot more technical knowledge to be able to do something like that, for sure. Well, I'm curious about some of the other things that have happened. I know we could talk about a lot of things, but there are things like even just ChatGPT launching 5 and now 5.1 this month. So how has that changed things?

how has that impacted what Mountain Goat does?

Hunter (21:46)
Sure, the stuff that we have built is all on top of OpenAI models. They are definitely the frontier lab that I follow the most closely. That said, I have sort of turned into an AI obsessive in the last year. I am very, very into what everybody's doing, and it's moving very fast, which is exciting, but also means that there's a lot to consume.

Brian Milner (21:59)
Ha

Hunter (22:08)
Yeah, so the GPT-5 launch back in August, in terms of what we do with the models, one of the things that they improved with GPT-5 was instruction following. So the fact that if you tell the model to do thing one, then thing two, then thing three, it's not gonna go on a vacation halfway through thing two and completely lose track and do whatever it wants. It's actually quite good. And 5.1 just came out this week, but it seems to be even better at following instructions, which, if you're using a model as a base for an agent or a coaching thing where you do have a lot of instructions you need it to follow for each request, is really useful, and previous models were not as good, right? With GPT-4 Turbo or 4.1 you could still do some amazing stuff, but you would have to be more explicit about some of the stuff that you didn't want it to do, because it might go off and lose track of what it's doing or interpret the instructions in ways that made no sense to a human, but I guess made sense to the model. So the improved instruction following is a really notable thing for us.

And the other part of it is, with all of these model introductions, OpenAI, but also Anthropic and Google Gemini, the two other things that are happening pretty much every time we see a release is they get faster, right? So the time to generate tokens is going down, and they're getting cheaper, right? The amount of money that we have to spend per token is less than it was six months ago. It's a lot less than it was a year ago. So.

Brian Milner (23:27)
Yeah.

Hunter (23:38)
Will that continue? We'll see. Obviously, there's a lot of interesting discussions around the economics of how these frontier model studios work, the amount of money they're raising, making, losing. So it may be that that doesn't continue forever, especially the getting-cheaper part. But at least for now, we're taking advantage of that as each model release comes around. We see that, once we validate that the new model is compatible with our tools, we get some benefits of progress.

Brian Milner (24:07)
I know we're getting low on time, and there's one other area I wanted to kind of bring up with you, and I really don't know how you're going to address this, so I'm curious to know your answer. But I know you've been using AI for a while to help you, because you do coding as well. And just like this Agent Builder, but you actually get into the nuts-and-bolts coding. And I'm kind of curious, how has that changed over just the past

six months or so and what have you learned about it in the last six months?

Hunter (24:33)
massively.

Yeah, it has changed my workflow completely. And I'm not exaggerating, that's not hyperbole. And, for those that don't know me, I'm 46 years old. So I'm not a 22-year-old, you know, brand new software developer, totally plugged into every little tiny thing that's happening. I've been doing this for a long time, but

Brian Milner (24:41)
Ha ha ha.

Hunter (24:58)
even though, you know, I guess you can teach some dogs some new tricks, because this has completely shifted how I do almost all of my work. I primarily use the OpenAI Codex coding agent, but I've also used Claude Code quite a bit, which is another very strong contender. And this is a space where, you know, a lot of people are competing to get market share.

I now probably spend more time prompting than writing code in many cases, which is nothing I would ever have guessed. Now, I have the benefit of doing this for a long time, so a lot of experience about knowing how I want something to be structured. So I'm not just saying, like, make me an app, and I come back in six hours and it's done. It's much, much, much more specific.

Brian Milner (25:23)
Wow.

Hunter (25:39)
In my experience, learning how to manage context in these models is like the most useful skill you can get, knowing how much to put in, what not to put in. And that's something that, as I've just used them more, I've gotten a better feel for, at least with the current set of models, how to manage that. But between well-written prompts, scoped to what I want an individual feature to do, how I want it to work,

and, also critically important, a great automated test suite, because I always say to the model, like, when you're done, run the tests. And if they don't work, you're not done. To have those tests there and to trust the tests, to know that they're right, makes it much, much, much easier to get more value from these tools, because they may not get everything right on the first try. But if they can go and validate their work, then by the time it gets back to me to do a thorough check on things,

Brian Milner (26:11)
Right.

Hunter (26:31)
You know, they've had the chance to go through and run the tests and say, that didn't work, and sometimes that happens, right? Sometimes they'll give you a great response on the first try, a great one-shot experience. But there are other times when they try stuff and it does not work and they have to try it again. And I would rather have it do that cycle on its own than have me do it. I haven't measured this in any formal sense, but my guess is that I'm ten times more productive with these tools. I mean that

Brian Milner (26:43)
Ha

Hunter (26:57)
Again, that's a very fudgy measure, but it feels that way. For me, it's been dramatic, and I know that's true for a lot of people. And I know there are people that have resisted these tools for all kinds of reasons, and I respect that. But I have gotten an extreme benefit, so I'm pretty happy about the future.

Brian Milner (26:59)
Sure, sure.

Yeah.

That's awesome. And I think you're taking the right approach, and, you know, I hope people are understanding there's still a human in the loop, right? You're not vibe coding stuff, right? I mean, you're very heavily involved in it. And I think, you know, we had an episode a while back on testing, specifically using this. And so you bring up some good points there as far as helping it with your testing. Obviously I think it does a great job of helping you in that area.

Hunter (27:22)
absolutely. No.

Brian Milner (27:40)
especially if I'm a programmer, testing's not really my thing, but you're right, it's drudgery, but it can help you to do that. I'm just curious, how much are you having to hold its hand through that kind of thing, and how much are you able to trust it to just kind of think of edge cases and those sorts of things when it creates your test?

Hunter (27:44)
It's drudgery sometimes. It can feel like that, yeah.

I would say I don't trust it at all. I'll be very Ronald Reagan-y and say trust but verify. I don't blindly trust any of the stuff that it does, unless it's something that truly doesn't matter. If I'm writing a one-off script to process some data or something and I can tell if the data's right, I'm not gonna spend all that much time. As long as the output's fine, I'm gonna throw that thing away after. So that's different. But I don't really trust it. And I don't mean that as a pejorative. I just feel like,

Brian Milner (28:09)
Yeah.

Hunter (28:32)
at least with the models we have today, you are still responsible. I am still responsible for the output I'm putting out. I can't be like, hey, yeah, blame ChatGPT, it screwed things up. So given that, I care that it's correct. So I do spend time on it, especially if it's some kind of very critical code. We have a system that runs all the logins for all of the sites that we have for our services. Like, that's important. It's a security thing, right? So that is very different than, again, some kind of

Brian Milner (28:40)
Ha ha ha.

Yeah.

Hunter (28:59)
script that I'm cooking up very quickly. I do a lot of verification, and I find that in terms of finding stuff like edge cases, that's a little bit situation dependent, but I've been impressed when it has found stuff that I wouldn't have thought to test. I'm like, oh, that is wrong. Interesting. So it has definitely caught things. And we also have tools that do automatic code review, so it goes through multiple steps. Every time I submit something to GitHub and create a pull request, it will automatically run code review for me, and I get a report saying, here's what you did, all good, or here's a P0 bug that you really should fix, or here's something that didn't look right that you might want to look at. So I'm getting multiple levels, but I still am very much in the loop and verifying the outputs that I get.
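
Hunter doesn't name the specific review tooling Mountain Goat uses, but the general pattern of AI review on a pull request can be sketched roughly like this; the branch names, model name, and prompt are assumptions, and in practice this would run in CI rather than by hand.

```python
# Rough sketch of automated AI code review on a pull request diff.
# Assumes a local git checkout plus the `openai` package; branch and model names are placeholders.
import subprocess

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Diff the current branch against main (branch names are assumptions).
diff = subprocess.run(
    ["git", "diff", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout

review = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a code reviewer. Flag likely bugs, security issues, "
                "and missing tests. Rank findings by severity, P0 being highest."
            ),
        },
        {"role": "user", "content": f"Review this diff:\n\n{diff}"},
    ],
)

print(review.choices[0].message.content)
```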

Brian Milner (29:41)
That's awesome. Yeah, I mean, that's what we do with humans. We don't just have one set of eyes look at stuff. We have to have multiple sets of eyes look at it, and that's best practice. So, well, this has been fascinating. I really appreciate you taking some time out to kind of share with us a little bit behind the scenes of how this stuff is integrated into the kinds of things that Mountain Goat does. I mean, it's an exciting time. We'll see kind of what happens. We're sitting here now and I wonder, you know, by next summer, what we'll be saying about this stuff and how it'll be affecting what we do.

Hunter (30:11)
Yeah,

it's hard to imagine. I'm definitely on the exciting side of the scale, not the sort of doomer side. And I know people are all on different levels of that. I think no matter what, it's happening. So that's why I've chosen to be on the exciting side.

Brian Milner (30:24)
Right,

right, right. Well, thanks for coming on, Hunter. I appreciate you sharing your time with us.

Hunter (30:29)
Yeah, Brian, thanks so much for having me.