Future State Exchange
Conversations exploring the technology, leadership, and ideas shaping tomorrow’s organisations.
Engineering judgment in the age of AI | Michael Bateman | Ep. 1
In this episode of Future State Exchange, host Tim Guest is joined by Michael Bateman, a senior engineering leader at C5 Alliance. Michael has spent more than two decades shaping Jersey’s technology landscape and delivering software at scale.
Drawing on first-hand experience from the earliest days of hands-on engineering to today’s AI-accelerated world, Michael explores what really changes - and also what doesn’t - when machines can generate production-grade code in seconds.
The conversation goes beyond hype to tackle the practical realities of AI in software development:
- Where accountability sits when AI writes the code
- Why governance, quality, and engineering discipline still matter
- The difference between building something that works and something that is safe, scalable and, crucially, maintainable
Michael shares why AI should be treated as a powerful tool - not a crutch - and why ownership can never be outsourced to a model. From proof-of-concept speed to production risk, he explains how organisations can use AI to move faster without cutting corners.
Looking ahead, the episode explores what effective human–AI partnership could look like over the next three to five years: smaller, more focused teams; AI agents doing the heavy lifting; and humans concentrating on judgment, trade-offs, leadership and outcomes.
Michael also challenges common narratives around AI “replacing” engineers, arguing instead for augmentation - and explains why great developers, with strong fundamentals and business understanding, matter more than ever.
This is a thoughtful, grounded discussion for technology leaders, business decision-makers and anyone navigating the realities of AI adoption - cutting through the noise to focus on responsibility, value and the continuing role of humans at the centre of modern engineering.
Michael Bateman's origin story
SPEAKER_00Hello and welcome to Future State Exchange, conversations exploring the technology, leadership, and ideas shaping tomorrow's organisations. Today we're joined by someone who's been shaping Jersey's technology landscape for more than two decades. Michael Bateman from C5 Alliance is one of those rare engineering leaders who's seen every wave of software evolution, from the days when you had to build everything by hand to a world where AI can generate production-grade code in seconds. He's led high-performing teams, shipped real products at scale, and carries the scars and the wisdom that come from doing the hard things properly. Today we're diving into the big questions: what AI means for engineering discipline, where accountability really sits when machines write code, and what the next five years of human-AI partnership will actually look like. Michael, welcome to Future State Exchange.
SPEAKER_01Thanks for having me, Tim.
SPEAKER_00Michael, thanks for joining us today. So before we get into the big AI and coding debates, give us your origin story. What first set you on the path towards building software at scale and leading teams to deliver real products?
From electronics to software engineering
SPEAKER_01Yeah, I mean, I guess I always had that interest in engineering. And certainly early on in life, it was much more on the electronics side. I was very much interested in wiring and electricity and circuit design. And that gradually led to computers. And then I realized that actually the software side, being able to control electrical devices through software, was really interesting. Seeing that merging of being able to express something in code that didn't really exist, and then have it have a real-world impact, you know, seeing circuits changing state and switching things, and having that level of control, was really interesting. And then from there, as I got more into it, I realized just how massive this was. Obviously the internet was going crazy back then, things were moving from floppy disks to CD drives, and storage and compute were just going through the roof. And gradually I just got more and more into it. So pretty much from school I was writing software programs and trying to help people solve problems.
SPEAKER_00Interesting. Yeah, I remember electronics at school. It was always a fun lesson. I remember there was a guy who had a chain round his wrist, and when he was plugging in some of the cables, the chain actually connected the positive and the negative. I mean, it wasn't much, but it was enough to give him a bit of a shock. Yeah, a bit of a wake-up call. In the days when there was not that much health and safety in schools, and you could kind of do what you wanted. So when were you actually at uni? How long ago was that?
SPEAKER_01Uh, so I went to the University of Southampton in 2007. Yeah, so I did computer science pretty much as a pure subject, with modules in electronics and AI.
Lessons from the early days of coding
SPEAKER_00Yeah, I went to Southampton, I think probably about ten years before you. Is the club called New York, New York still there, or has that gone? No? We had Jesters. Ah, Jesters, yeah. I'm not sure that was there. There was a club called Chaos. Oh yes, yeah, and on a Monday night they used to do 10p a pint. Can you imagine the sort of state we came out of there? So, let's crack on. You were building technology long before modern AI tools arrived. What were your early experiences? Talk us through some of the wins, or some of the scars, that have shaped the way you look at engineering today.
SPEAKER_01So I guess when I started, you were very focused on the code. You were really trying to understand how to structure the code, how to put together data processes and data flows, where storage would be, what the latency of things would be. If you tried to do something in memory when there was only a tiny amount of memory available, you couldn't load a data set, so you had to think about how you would structure these things, how you would call it, what was most efficient. And that was really interesting. We were doing modules and I was building out systems that were focused on the most recent data; everything else would be archived, and if you needed to, you would have to load up the archive and do reviews across it. So that very much ingrains a very efficient way of working with code. There couldn't be bloat. You had to really focus on what you wanted to do and what the outcome would be. As I went through that, the world was rapidly changing. That's one of the beauties of technology: it's always moving, there's always something new, there's always a faster way of doing things. That focus almost started to ease off slightly once .NET was around. That was the first sort of real framework, a nice Microsoft framework to build quickly. The code became less about telling the computer what to do and more about solving the business problem. You could start articulating what you were trying to solve in a not-quite-natural-language way. You still had to follow the C# structure, but you were using English words; you understood that you were calling a method to calculate a specific thing on specific data, and that would have terms like balance and value date, things people would be familiar with. And then from there, the industry's just accelerated.
But going back to those first days, you were very much writing code, testing it, validating it, checking it again, because you knew that when this went out, it was kind of published, or you would physically walk over somewhere and deploy it. So if you needed to do a quick fix or there was a problem, it was a long path to go back, do another release, and try to explain why you didn't catch that the first time. So you really had to aim for a very high level before that first release, before the client would see anything. And that was, I guess, part of the challenge: being perfect the first time around, without necessarily having the full requirements defined up front.
SPEAKER_00Yeah, yeah. I mean, nowadays we kind of take it for granted that we can allow a first wave of customers onto an application or piece of software to almost act as testing, whereas in those days, you're right, you couldn't put something out that wasn't the finished product.
SPEAKER_01Yeah, and there was a nice sort of way about it. Because at university, and online when you're chatting with different people around the world, there can sometimes be this idea that code should be complicated and you should really push what the language can do to the maximum. People swap solutions about how good this was and how many scenarios it would cover, and you'd be quite impressed, like, oh wow, that person's really understood how to make this work. But at three in the morning when it's gone wrong, you suddenly lose interest in how complicated it is, and you're just looking for simplicity: easy to read, easy to troubleshoot, something that you can actually maintain and fix going forward. And suddenly all that admiration of how complicated something could be goes out the window, because other people aren't necessarily going to just pick it up. You're reading someone else's code and you think, oh, I wish this was structured a bit better or a bit cleaner so I could work out what's happening here.
SPEAKER_00Yeah, I had a client quite some time ago, when I used to work for an SAP reseller, and all of the reports then were built in SQL. They had a permanent employee who had kind of taught himself SQL and had written some really, really cool reports and scripts, but not structured them very well, and not put in any indication of what they were actually for. When he left the business and some of these reports broke, they literally had no idea how to do certain aspects of the business. And we said, let's put a governance structure in place. By all means, write your own SQL, but let's put a structure in place: who's written it, what's the purpose, what are you hoping to achieve, and put some framework behind it.
SPEAKER_01Yeah, because nothing focuses the mind quite like a call-out at 3 a.m. to drive to a data center because you have to update some code there, you know.
SPEAKER_00It's like, oh, as long as you're not in Jesters at the time the call comes. Right, let's fast forward to today then. So talk me through your role. What kind of problems are you and your team actually solving, and what's the mission that really gets you up in the morning?
Shifting from shipping code to shipping value
SPEAKER_01So I guess where things have changed, both for me and for the industry, is the kind of problems we're solving. When I started out, it was very much what I would call software development. You were given requirements and you were writing code to solve those requirements. I wouldn't say that was the limitation, but that was the core focus of the role. You would have other people doing requirements and testing and everything, but your role was to generate code. Nowadays, the generating-the-code part is quite straightforward, in the sense that there are lots of tools available. Obviously, we'll get onto what AI's done to accelerate that even further. But it's very much now, how do I solve the client's problem? So it's less about shipping code and more about shipping value. And the benefit of C5, and the reason I enjoy working there, is that we focus on solving the client's problems and having that effect. You're not just writing code and pushing it out there, hoping someone uses it, or developing back-end products that no one will ever see at the end of the day. You're solving very real problems and seeing the impact a few weeks later.
SPEAKER_00Yeah, no, that's great. So let's move on a little bit to AI. As most of us know, we're living in a world where AI can generate production-grade code in a few seconds. But the responsibility still obviously sits firmly with the developer. So how do you feel about ownership, and at what point does AI stop helping and start becoming a crutch for our developers?
SPEAKER_01Yeah, that's a good way to phrase it. I mean, effectively, we've always had tools to help us work, and there are always plenty of tools for everyone to work faster and more efficiently; it's about making use of them. I would say that a tool can never be accountable for anything. It's always you as a person who's ultimately responsible for the decisions you make. And one of your decisions was to use the tool and trust the output. So you've got to be a little bit careful with how far you go. And AI's quite young, in the sense of what we've got now with the large language models and the way they're being used for generation, but gradually that will change. In the same way no one questions autocomplete or spell check in Word, but when that came out, it was a game changer for how quickly people could write emails and draft Word documents. But I would say you should never blindly trust it at the end of the day. Or if you do trust it, at least verify, because all those decisions and all that work you do at the software layer is to solve a business problem. All of those business problems come together to serve a customer, and you cannot sit in front of a customer or the regulator and just go, well, the AI did it, because that would never be an acceptable answer. So part of our job as developers and software consultants is to help explain that early on, and not just charge ahead trying to generate as much code as possible and push that problem uphill, as it were.
SPEAKER_00That ties back in with what you said earlier on: you're no longer just shipping and delivering code, you're actually solving business problems. So the tool you choose to solve the business problem is kind of up to you guys, but the onus and the ownership of solving that problem is still very much with you.
SPEAKER_01And with so many tools available nowadays, no one's writing in low-level languages anymore. You write in higher-level languages, and AI is almost like another layer, a higher-level language that just happens to be natural language. The end result is you're still solving a problem at a technology level that meets the business criteria.
SPEAKER_00Yeah. As my dad would say, a poor workman blames his tools.
SPEAKER_01Yes, yeah, very much so. Although there are a lot more excuses nowadays, you know. My AI's run out of tokens, I wasn't able to finish that.
SPEAKER_00I don't know about you, but I use ChatGPT in my personal life for various things as well. And I do find that it goes through stages where sometimes it's really not very useful at all, like it's having an off day.
SPEAKER_01Yeah. It's just generating that realism of dealing with other people.
SPEAKER_00Friday afternoon, you're asking it to do something really complicated. It's like, hold on, it's Friday afternoon.
SPEAKER_01Yeah. And I think we'll probably see more of that. You know, if you think how many companies are scaling with AI, and how much the AI companies are trying to buy compute and data centers. I don't know if there are any stats publicly available, but there must be a degree to which they are struggling to build capacity as fast as the world is trying to consume it.
SPEAKER_00So I know a bit about this. I used to work in a very different industry sector, and one of my peers still works in that sector. They supply the equipment for air conditioning, ventilation and cooling systems, and one of their niche markets is data centers, and they are flat out. They cannot make the products quickly enough to supply the data center demand. It's continuous building at scale.
unknownYeah.
The risks of low-code and no-code tools
SPEAKER_00I rather think we're digressing a bit now, but when you think of all the amateur creators, particularly TikTok accounts, Instagram, everything like that, just how much data is created daily, and where the hell does it all go and get stored? That's a podcast for another time, maybe. So, we've seen with clients that we work with, and from our own experience, that there is a huge opportunity for end-user clients with AI. It now makes it possible for a business analyst or a founder, for example, to build an app themselves without writing a single line of code. But where do you see the boundary between empowering them to do that and actual danger and risk?
SPEAKER_01Yeah, I mean, there have always been a lot of tools. I know at the moment we're talking specifically about software and the ability to develop it, but it's the same for websites and other apps that are out there. Businesses have always had the option to get something in quickly that solves a problem, and AI lowers the barrier to entry on the bespoke development side, so you can very quickly spin things up. You used to be able to do it with GeoCities; you could make a website in no time, and that's not around anymore. Then the low-code platforms came out, and there were lots of bespoke websites you could go to that would promise to bundle a mobile app or a website if you explained it, and they would generate the code behind the scenes. So the options have been there; it's just getting more accessible. But the thing that doesn't change is that there's a big difference between "what I'm looking at works" and "what I'm looking at is safe, scalable, reliable, maintainable". Businesses last a long time, and they can't have something that keeps churning. So there's a very big benefit in being able to solve a problem quicker, and a lot of that is being able to articulate the problem quicker. But the actual part where you say, you know what, let's productionise this, let's put the right security and governance in, still needs to happen. And whether you're explaining that to the AI to implement or not is sort of a different part of the question. But I think the ability to very quickly turn around a proof of concept, understand if it's solving the problem, test whether this is the right direction of travel, and have the business along for the ride is amazingly powerful. And I think that will make everyone's jobs a lot easier, because you won't be trying to fit things to other systems. You're able to say, well, this is what I need; could we solve it?
And if you use AI, you can generate that proof of concept really quickly, test the theory, and then move on to the productionisation and implementation of it in a way that meets the rest of your business's requirements.
SPEAKER_00Yeah, sounds good. And we know ourselves that you rarely go into a client's site, have them say, I need you to build me this to do this, and just go and build that. You go in and do proper requirements: what do you actually want to get out of it? What's the purpose? What could this look like? And often what they think they want is not what gets built at all. I suppose the danger is if they go off and start building something based on what they think they need, and get so far that they can't go back, it's going to be a waste of time and resource as well, isn't it?
SPEAKER_01And a lot of that's just closing that feedback loop. I think that's probably one of the more exciting aspects, a real short-term impact. You don't have that requirements discussion and then go back three weeks later, then do a UI design and ask if they want to sign it off, then start looking at how that could be implemented. You can almost do that over the course of a few days and really get the engagement: this is what we can build, we can solve the problem, this is what it looks like. Is this the right direction? Is there something you think it needs or is missing? Now, as a flip side, the work is still happening. From a dev perspective, and even from a product ownership perspective, you've really compacted that down. But all that thinking time that used to happen while things were being developed, while you went off for a week to get it coded up, you know, people didn't just sit around. They'd be thinking, well, when that comes back, how am I going to use it? How does this get solved? Maybe they'd start looking at some processes. Now you're coming back to them the next day and saying, well, there you go, that's done. Does it work? But all the thinking, the analysis, the looking at it, the understanding of what's been built, did it meet the requirements, still has to happen. And that's going to become a bigger bottleneck as we move through the years and more and more companies become AI-enabled at a core level.
SPEAKER_00Yeah. Different problems: more efficiency in some areas, and bigger challenges in others.
SPEAKER_01I mean, if you go back ten years, or maybe five, the code development part was quite slow. Think of that traditional project plan, with maybe a month for requirements and a month for design, then six months of coding, and then a month of UAT and a month for go-live. That has been massively compacted. But the month of requirements and the month of testing are sort of still there. You can use AI for testing and everything, but those systems still need prompting and everything else that comes with it, and the quality checks. But effectively that development part has come right down. It's almost like code is cheaper.
SPEAKER_00It's the value that's getting delivered. Yeah, sounds good. So this brings us on to governance. AI can generate code, we all know that, but it can't take accountability when something breaks. So how should organizations rethink their governance, quality assurance, and risk when parts of their stack are produced by an AI model?
SPEAKER_01So I have a view on that: it doesn't massively change. Just because something's been generated by AI, it shouldn't go through a different process, in the same way that if you bring in anyone else, or even internally, you want to check what's been done and apply your own governance process to it. The fact that AI has generated it is maybe an additional risk flag; maybe it requires some slightly deeper review. But effectively the process should still be the process. And if that governance process is designed so that doing the right thing is easy and doing the wrong thing is hard, then that's a good process. And if AI is generating very specific, very clean UI code and it's being reviewed and pushed through, it should be able to go through quite smoothly. I think the difference will come in how those guardrails exist. So you have architectural patterns and quality control and coding standards, and you can use AI to help enforce some of those as well. It's not just about writing the code; you can actually use it to help with the governance. Generate a document based on the code in the solution: not what the developer thinks they've implemented, not something that was written before the code base changed, but documentation generated right now over the code that is actually in the system. And as long as those loops are built in, you'll have a very up-to-date and accurate governance process sitting around it.
SPEAKER_00Yeah. And I suppose, going back to one of our earlier comments, it's no different from bringing in a third-party contractor to do some code and then they leave. It's still an external party doing some work on your stack.
SPEAKER_01Yeah, and you won't necessarily know. If you're using a software-as-a-service tool and talking to its APIs, probably the default assumption nowadays is that there's some AI code in there. You almost have to take it that that's probably true, and apply the necessary controls.
SPEAKER_00Okay, so if you look three, maybe five years ahead, what do you think the ideal partnership between human engineers and AI would look like?
The next five years of human-AI partnership
SPEAKER_01There's that sci-fi world where it's all running itself to a degree, and we're very much just there guiding and interfacing. I think in the short term, and to a degree in some industries we're already there, there are AI agents working, and you're sort of the agent manager. You know what you're trying to achieve, the agents know what they're trying to achieve. There's interesting research at the moment around context sharing and context ownership between humans and AI agents, because you're moving into a world where it doesn't matter if it's an AI or another person doing the work. If the work is good, it's been quality-controlled and checked, whether that's writing a document, drafting an email, or writing code, it's all going to be there and it'll all feed through. I think that will evolve, and the way that we work will massively accelerate, firstly. But I also think the way companies are structured will change. So the hierarchy, the way there are teams made up of people and sub-teams, and you have consultants and juniors and layers, and people say, okay, if we want to get this work done, I'm going to ask Jimmy because he'll know. That will drop off, and you'll be in much smaller, much more focused teams, with the agents getting the bulk of the grunt work done. Our role as humans will be much more around the interpretation, the strategy, the guidance. And I wouldn't say AI will never get there, but there's a degree to which, at the end of the day, business is about people working with people.
SPEAKER_00Yeah. And that won't change. When you think of the introduction of the telephone, email, all these revolutionary things, they haven't changed the fact that people like to meet up with other humans and actually talk to get things done. And hopefully that will continue. Let's move on now. What do you think is overrated right now in the AI coding hype?
SPEAKER_01I guess the one that gets a bit of a chuckle out of me is that idea that all engineers will just be replaced because anyone can do it. And there's a degree of: but anyone could do it before. There's only so much time everyone has, and everyone has their own objectives, and not everyone wants to be talking to AI and generating stuff. A lot of people just want a solution, because they're trying to solve a different problem. They're not going to dive in and start trying to learn how to code up a website, for example, if someone else can just do that. Now, will it speed things up? Absolutely. Will it replace? I don't think so. I think it's an augmentation question. In a similar way, autopilots have been around for a long time. We have all the technology to make planes autonomous, but we still want two pilots sat at the front of the plane, and people don't want to get on a plane that's badged as fully autonomous. So there's a degree of: if you're the one who's regulatorily responsible, the one who's going to sit down in front of the regulator and say, well, this is fine, this is what we've done, then having just quickly pushed something through AI and deployed it into production is not a position you're going to be comfortable answering from.
SPEAKER_00Yeah, that's a good answer. So let's move on to what you think is underrated in AI, that maybe not many people are talking about.
SPEAKER_01I think there's a modernization angle. A lot of businesses have legacy tech, and the benefit of that is that it's well structured, it's a known problem, and it's already in code. So the ability for AI to read code and generate up-to-date code is incredibly powerful. I think it was just earlier this week or last week that COBOL support was added to a model, and IBM's share price tanked somewhat, because it's such a legacy language that so few people specialise in, and now they've managed to get it into a model. So suddenly modernization projects are unlocked at a much more efficient level, and I think we'll start seeing more of that. But right now, all the hype is around what you can build new. There are headlines like "SaaS is dead because you can build it yourself". But no, a SaaS company has a huge amount of knowledge about what they've put together; they've spent years honing that. It's not just about the tech, it's about the problem and how they go about solving it. I'm sure there'll be hundreds of companies that are not very good, because they're basically just data in and out over a database, and those are at risk of being replaced. But for the ones that have actual knowledge, actual processes, it will be a tough argument why you would build that internally, then maintain it, look after it, and handle the security and everything else that goes along with it, when it's just there, available. And hopefully SaaS pricing will come down. As they use AI themselves to build and maintain and update, those license costs will trend downwards.
SPEAKER_00And certainly if you look at some of the headlines about how AI is going to make 80% of white-collar workers redundant, they're all written by the people who own the AI companies, who are currently struggling to make a profit. So it's a little bit of a sales pitch.
SPEAKER_01Yeah, they sort of need to believe that, because that's their pitch. And I think one thing that will be really positive, and maybe isn't being recognised yet, is the consistency side. If you've got AI agents deployed across your whole company, then suddenly tone of voice in emails, the way people interact with clients, the type of documentation that's generated, and the knowledge that used to be siloed in different parts of the structure all become much more accessible to everyone. And even if you don't personally have access to something, if the agent has access and that helps shape the way something's put together or a deal is structured, the quality and consistency will suddenly fan out across the board. So, yeah, I think businesses will be in for a pretty good few years as that comes through.
SPEAKER_00If you had to choose, talking about hiring for your team: an awesome developer with mediocre AI skills, or a mediocre developer with awesome AI skills. Which would you choose?
SPEAKER_01Oh, I'm a bit biased, but I would always take the awesome developer. I think you could teach them AI skills probably quite quickly, but it's also about what comes out of that. When I think of an awesome developer, I'm not thinking of someone who's just amazing at writing code. I'm thinking more of a generalist. They know what good code looks like, but they also know the practical application of it. They're not just there to craft a masterpiece; they are there to deliver value. They understand architecture, they understand security, not necessarily as experts, but they know when to ask for help, or they know that something that's been put together potentially needs additional scrutiny or is a high-risk area, because they know that from experience. I think where there's definitely risk is if you take somebody who doesn't know how to code, and they start building things with AI and just publishing.
SPEAKER_00Yeah.
SPEAKER_01Because they won't necessarily recognise or spot that what's been generated isn't correct. It's like when Access rolled out and everyone could make a database: pretty much everyone was making databases, and then a few years later everyone ran around trying to upgrade all these Access database systems into proper systems, or moving the data, because they realized governance models weren't being followed or data protection wasn't applied, and you start having all these challenges bubble up in the future. So yeah, a good developer who can solve real business problems, always.
SPEAKER_00Great, great answer.
SPEAKER_00Last question now then: what is one thing that AI will never replace in good software engineering?
SPEAKER_01It probably comes back to that view of a good developer: the human judgment, the ability to make the trade-off decisions and the long-term impact assessment. AI will have a degree of context, and that will get better over time. But you will understand, in that moment, what the other people in the room and the business are feeling, the pressure of the timelines they're up against, and the problem they are actually trying to solve. And that's not straightforward to just prompt; you'd be spending all your time trying to feed that in. So there's a degree of the human empathy, the leadership side of it: this is what we're going to deliver, this is what we can turn around quickly. And, at the end of the day, just owning the outcome. If you can turn around and say, look, we'll get this delivered, you don't need to worry, it'll be with you on that date, a person will stand behind that, and their reputation will stand behind that. AI won't give you that. It'll just say, oh, it didn't generate what we thought, and we'll try again. Which is not a great answer higher up the chain.
SPEAKER_00No, agreed. Owning the outcome. I love it. That's a great place to leave it. Michael Bateman from C5 Alliance, thank you very much.
SPEAKER_01Thanks, Tim.