EP79 AI for Project Managers: Real-world benefits, risks
and future skills
Is AI making project managers sharper—or just making us lazy?
In this episode of Business Breaks, hosts Dante Healy and
John Byrne cut through the AI buzz and get real about what it means for project
management today. Drawing on firsthand experience, they tackle the promise and
pitfalls of using AI on real projects, from smart scheduling to generative
"fluff" that could threaten your credibility. Should you trust AI to
make decisions, or does it put your entire project at risk? Which tasks benefit
from AI support, and where does human judgment remain irreplaceable? How can
project managers safeguard their value (and jobs) in a world of evolving tech?
Packed with sharp insights, cautionary tales, and
practical advice, this episode is essential listening for project professionals
who want to stay ahead without losing the edge that makes them indispensable.
Whether you're an AI skeptic or enthusiast, you'll walk away with actionable
ideas to keep your skills sharp and your projects on track.
Tune in to discover how to get the best from AI without
letting it run the show.
Tired of theory?
Get project insights that actually work →
Transcript
Dante Healy [00:00:02]:
Welcome to Business Breaks, your Project Management Edge. Here to bring you sharp thinking, honest conversation and practical insights to keep you one step ahead. Whether you're leading complex programs or managing day to day delivery, this podcast helps you stay focused, make better calls and lead with purpose. So now let's get into it. Hello everyone. Welcome to Business Breaks, your Project Management Edge. I'm Dante Healy and as always, here with me today is John Byrne. So now let's face it, everyone's been talking about AI for quite a while, but for us in project management, it's not just talk.
Dante Healy [00:00:45]:
AI is actually impacting how we work, and that can be for better or for worse. And we're not just here to talk about the hype; we're focusing on what's real for us as project managers. So we're going to dig into the good parts of using AI, as well as the serious risks you need to watch out for. We'll aim to cover how to keep AI as a tool and not a crutch. And we'll talk about trust, when AI helps and when it hurts, as well as how you can make sure your skills stay sharp for years to come. This is all about how you can use AI to get an edge in your career. So let's get into it. We're going to start by cutting through the AI noise, and there's quite a lot of it.
Dante Healy [00:01:34]:
So John, from your perspective and your background, when did AI first sneak into your project work? And I don't mean when you read about it, I mean when you felt the change in how you do projects.
John Byrne [00:01:47]:
I've been using AI for quite a while, long before generative AI kicked off and the hype started. The very first time it entered into my work, it wasn't so much me using it for project management; it was the systems I was using, the systems I was implementing, that started using it. It wasn't necessarily called AI at that stage. It was machine learning, or sometimes AI, or virtual assistants, various things like that. The terminology wasn't quite agreed at that stage. And it's been used for quite a while now. From a pure project management point of view, I suppose where it started creeping in more and more obviously in my work was scheduling. Scheduling tools were the main one: give them the details of what you had and when you had it, and they were able to schedule when you could best use it for your project. But that is AI. I mean, the generative AI that's getting all the hype now, that's just the latest iteration, and that's almost, you know, the useless thing that just looks good when you hype it.
John Byrne [00:02:59]:
But as a practical tool it's not really very good. It is useful for budget management, the underlying stuff: helping to schedule things, helping to plan, using it as a tool. So you're using it to help you do the work; you're not just expecting it to do the work for you. That's how I've been using it, and have been for quite a while. Systems like Microsoft Project have been helping me schedule things for a good long time. And as I said, the systems, the ERPs and EPMs that I've been using, have been using machine learning to help with the analysis, the slicing and dicing. So I had to be up to date with that stuff to be able to implement it properly.
John Byrne [00:03:39]:
How about yourself? How long have you noticed it kind of slipping in?
Dante Healy [00:03:44]:
Well, I'm going to show my age and my years of experience. When I first started in project management back in 2015, I was working for an automotive company, their banking division, and to get ahead on trends I was attending a lot of events on the future of business, mainly technology. Most of it was over my head at the time, but it was mainly around cloud architecture and the Internet of Things. Artificial intelligence was a side piece back then. I went to an IBM event, and Watson was the name of their AI tool at the time. But it wasn't just generating fluff; it was actually about data capture, mainly focused around targeting customers. So tagging individuals and then using the AI to predict where you could effectively sell them more goods and services: understanding their buying patterns, their profiling. It was more around segmentation, and then using that to send automated promotions and things like that, to check for demand elasticity based on the number of discounts you gave them. So that was my first experience with AI as a tool. And beyond that, obviously ChatGPT came out and became more democratized, in the sense that people could just register and play around with the interface for free.
Dante Healy [00:05:25]:
But even before then I was using GPT-based AI tools, playing around with new things. The chat-based interface was very much something like Jasper or Writesonic pre-2020, before ChatGPT became instantly mainstream, and there were tools using the API keys and building applications that were generating text. Right. And I think back then it was in the early stages, and once ChatGPT exploded, you suddenly had a raft of artificial intelligence experts who'd never learned about machine learning or anything like that. So you know, that's why I don't say I'm an AI expert. I'm a user, but a user like everyone else. Our expertise is project management, and I think that's where we come from. So in terms of artificial intelligence, from what I've seen, there's the seduction of speed: it looks impressive when someone writes an instruction and then it generates all this output. The question is, how valid is the output? I mean, what do you think? Because there's a lot of blurb, and old-school innovation is like: you throw a load of stuff at the market, you see what sticks, you draw a circle around it and you say, yeah, we've hit our target.
Dante Healy [00:06:57]:
But it's not precision, right?
John Byrne [00:06:59]:
I mean, I suppose with the ChatGPT type of things, when you think about what it is: it's taken everything, it's got its model, and it's giving you the average. So at best it's going to give you average stuff, even if it's correct. But then it's not always correct either; that's one of the other problems. It can't distinguish; it doesn't know what is correct and what is wrong. So it takes them all, treats them equally and gives you the average. And I mean, I have seen people, I've never used it myself, but I've seen people use it to write up documentation. You know, if you're a PRINCE2 project manager, you use it to create your documentation, and you still need to rewrite it all.
John Byrne [00:07:44]:
I mean, using it to create your documentation would be the equivalent of having someone fresh out of college who understands that documentation, and getting them to write a first draft for you. They'll give you all the kind of standard headings and that. But the content still requires a little bit of knowledge, and it doesn't have that knowledge. Or even if it does have the knowledge, you still have to double check it, because it's just as likely to throw something completely off the wall in as it is to throw something good in. So for that kind of thing, from a creative point of view, creating things... as I said earlier, I use it for things like scheduling, which it does well. But from a creation point of view, the documentation, what's the point really? Because if it knows we're using PRINCE2, it'll use PRINCE2 templates, and you should have PRINCE2 templates, if you have access to them yourself.
Dante Healy [00:08:40]:
So why would you get it to generate a template when you already have one that is generic and on hand?
John Byrne [00:08:47]:
Exactly. Because if it does change the template, it's not like it keeps changing it for the better. It's only changing it and leaving out something that you need, or whatever. So just use the templates that you have. I suppose where it might be useful is for somebody who's not an experienced project manager, doesn't have access to the templates, doesn't really know how to fill them in. It can act as a little bit of a training guide for them, but even then it's a bit risky. It may not be correct.
John Byrne [00:09:13]:
It may not be giving them the right information. So yeah, I wouldn't trust it from a creative standpoint, or for running your project. If you're not a good project manager, it's not going to help you be a good project manager. You need to be a good project manager and then just use it as a tool to help you run the project, not rely on it to run the project. Right.
Dante Healy [00:09:36]:
Yeah. And the other thing that's frustrating about AI: you mentioned it is like a graduate, and it is, up to a point, because AI will forget. As soon as you start a new chat, you're resetting from zero. At least a graduate will remember something if they make a mistake. Maybe it'll take a less capable graduate two or three iterations, but the information will stick. AI resets. Even if you try and program it through its main context, you can't capture every idea. And even when you program it in...
Dante Healy [00:10:14]:
I mean, one of my biggest frustrations is how it applies really bad copywriting. "It's not this, it's that", for example. Every time you get output like that, it's trying to sound personable, but it's just a very bad imitation of a person. And you know the concept, I think it was a Japanese roboticist who described the uncanny valley, and this applies to robots.
Dante Healy [00:10:46]:
If you try and make a robot look human, it actually becomes more eerie, and people lose affinity or a connection with the tool, because the person can psychologically detect it's fake. And it's the same with AI trying to sound more human, though some people actually connect with it. So I don't know, I prefer a direct conversation, and it's more about the content than the fluff, if that makes sense. And it produces a lot of fluff, in my experience.
John Byrne [00:11:18]:
Yeah. I suppose it depends on the model you're using, and whether it's trying to be a little bit too clever rather than just giving you the thing. And yet it can still be a bit obvious. Like, if somebody doesn't know how to put together the documentation and gets it to write it for them, and doesn't edit it and upgrade it, then nine times out of ten it'll be obvious that this is just generated stuff, and that will cause you as a project manager to lose a certain amount of credibility.
Dante Healy [00:11:51]:
If you rely on it too much and you're not fact checking or validating it. Because AI doesn't know what's true from what's fiction, and it can, not intentionally, because it doesn't have intent, feed you plausible lies. It hallucinates as well. And if you feed it bad data, I mean, it's just a sausage machine of characters, right? You feed it bad stuff, and bad information will come out the other end.
John Byrne [00:12:23]:
Or even feeding it good information. I mean, you mentioned it forgets things.
Dante Healy [00:12:29]:
Yeah.
John Byrne [00:12:30]:
You know, and I know somebody will be saying, oh well, the new version can go and talk to other chats that you've had and read them and all that. But even within the chat it forgets things. I mean, I have used it, kind of using it as something to bounce ideas off. And I've given it information at the top of the chat, and a fair bit down the chat it starts throwing stuff out at you, and you're thinking, yeah, but we've already covered that up above, and it should know that that is wrong, because I've already told it up above it's wrong. It just needed to go back to it. So it's not even that it can't remember across chats; within the same chat it picks and chooses what it's going to pull out and what it's not going to pull out. It changes things completely.
Dante Healy [00:13:14]:
You know, it comes across as very agreeable, and that's part of it trying to be useful, right? It's not useful by fact checking; it's useful by saying, oh, you're right, I promise I won't do it again. And then immediately, when you ask it to redo it, it will do it again.
John Byrne [00:13:32]:
Yeah, that's it. And for one example, I was using it... though I notice they've done away with it now; they expect me to upgrade and pay more for it. They had Deep Research in the 4 or 4o version. It's gone now on 5, and I'd have to upgrade. But I was using the Deep Research for a comparison between systems. I thought, you know, Deep Research would be able to go in and get me the two things. Well, one of the systems was a Microsoft ERP, and Microsoft have two: they've got Business Central and they've got Finance and Operations.
John Byrne [00:14:11]:
And it was actually Business Central that I was looking to compare with the thing, and it kept bringing me back stuff comparing Finance and Operations, no matter how many times I told it, no, I need Business Central. So it was giving me this comparison where the Microsoft product was just miles ahead of the comparative product. But then you realize, no, this is their tier-one Finance and Operations; the pricing is a bit, you know... And it would fix things initially, but then later on in the conversation you'd realize, hang on a second, this is looking very top quality again, the system that it's comparing. And you'd ask it, what system is this? Finance and Operations. But I've just told you however many times, it's Business Central we have to focus on.
John Byrne [00:14:57]:
Yeah. And that's within the same chat. So it has uses, but I think its strongest thing is not the chat variations. It's not the generative AI that's the usefulness; it's the underlying tools. It's using it for analytics, using it for scheduling, using it for planning. But it's you doing the thinking, and using it purely as a tool. If you start letting it do the thinking for you, you're in trouble.
Dante Healy [00:15:29]:
Yeah. And that's why they always say you have to have a human in the loop, consistently, within any process that is AI powered. That's why, with the whole idea of agents, they're never going to replace humans for very skilled tasks; all you're going to pay for, potentially, is a load of automated garbage. And I think that's why most intelligent architects I've worked with on digital systems may consider injecting some AI, but not for business-critical components in the process. So for example, data: if you have to make sure data has integrity and is secure, you don't inject AI into it. You might inject AI on the front end, downstream, right at the end, where you have users who want to get certain reports or queries. But the data itself isn't touched by AI, because AI can mess it up quite easily. And you don't want to risk that, do you? I mean, if on basic chats you're getting this fuzzy logic that has no constraints or boundaries.
John Byrne [00:16:41]:
Yeah. And you know, I almost hesitate to say this, because then it'll turn out that the latest versions are even better. But if anybody was thinking that we're downplaying it too much, what I would say is this: put in a document, a reasonable document, and ask the AI tool to summarize it. Then start a new chat, do the exact same thing, and you will very likely find that you get two very different summaries. And I don't even mean going to two different AI tools; use the same tool, just different chats. Even within the same chat, ask it to summarize it a second or a third time. I'd say if you ask it to summarize the same thing ten times, you will probably get at least two or three very, very different summaries that almost look like they're summarizing completely different documents.
Dante Healy [00:17:27]:
Yeah, no consistency.
John Byrne [00:17:29]:
No, no, no consistency.
Dante Healy [00:17:31]:
And I suspect there's too many factors at play, especially with public LLMs, because the providers are tweaking their cloud costs. I mean, side topic, which I won't dwell on, but GPT-5's recent release has had massive criticism, because users have clearly noticed a drop in quality. And now they're saying it wasn't an innovative step; it was basically a cost-cutting exercise, using a router to direct each request to the most appropriate LLM, whether a high-powered one or the lowest-powered one: the lowest-powered one giving you quick and dumb answers, the highest-powered one applying more thought before producing output. For the most part, the router is actually directing to the stupid LLM, which is cheaper to run.
Dante Healy [00:18:26]:
So people are getting low-quality outputs. And I guess, speaking of low quality, moving on: the more people rely on AI, I suspect that over time AI will make you stupid. I mean, even to the point where I've had people say, I can't even draft my own emails now, I have to rely on AI. So you know, your brain is a muscle, and if you don't exercise your muscle it's going to get lazy as well, to the point you struggle with even basic tasks. Do you find, John, that that could be a risk?
John Byrne [00:19:07]:
Yeah, I mean, like I said, if you start relying on it to create your reports for you, eventually you'll forget how to create a report. And if you forget how to create a report, you're not going to be able to check that its reports are accurate. I mean, as we said, it'll look at the same information twice and give you two different answers. So how can you really rely on it to give you an accurate report? You need to be able to look at that information, and you need to know. Let it give you a draft if you want, but you need to know whether that draft is correct or not by looking at the information. And if you stop doing that and start relying on it, you'll forget how, you'll become unreliable, and then suddenly you are no longer employable as a...
Dante Healy [00:19:53]:
Project manager, because you can't sell yourself without GPT in the room. You know, the only thing you're going to say is AI-generated fluff that you'll regurgitate if you get the opportunity to sneak a look at your smartphone; you won't be able to hold the conversation. And it's also clear. I've been with consultants in even big firms, well-known, high-profile firms, and I mean, I'm a freelance project manager, but the client manager has actually shared with me in private: I think these consultants are using too much ChatGPT. Because you can see it in meetings, where you have to have situational awareness, you have to read the room, you have to go in with intent, and you have to understand what's the topic of discussion, and they can't even start the meeting, never mind lead it, which is what they're meant to do. And I've had meetings where I've had to lead the agenda even though it wasn't my role, because there were lots of people, and if I didn't take it on there would have been no meeting.
Dante Healy [00:21:05]:
So, you know, people stop doing what should be basic when you're actually coordinating things, especially in critical meetings. You stop questioning things. Meetings get shorter, but nothing's covered. And that's the problem: people stop thinking, and it can't look good for their profiles. I've seen these types of consultants get rolled off projects quickly, and this is a recent trend.
John Byrne [00:21:34]:
Yeah. And I mean, that's the thing. I keep using the term, but you need to use AI as a tool. Not as your alter ego, just a tool. It can give you a draft, but you need to rewrite it in your own words and make sure that the information is accurate.
Dante Healy [00:21:52]:
Yeah.
John Byrne [00:21:52]:
Don't rely on it to make decisions; use it to bounce ideas off. Sometimes, if you haven't got another person to sit there and talk to, you can, and maybe it'll throw out something that will just get you thinking. But the idea is that it gets you thinking. It doesn't take away your thinking.
Dante Healy [00:22:07]:
It should elevate your thinking, because it should remove a lot of the admin work that you're forced to do as a project manager. But at the same time, you can't over-rely on it to generate your ideas, asking it to, like, generate me a list of questions I should ask as a PM. No: think about the context, think about what your risks are, what your objectives are, and how you would manage these scenarios. You need to be engaging your grey matter, because AI doesn't do it very well, in my experience.
John Byrne [00:22:42]:
No, in a situation like that, yeah, you can ask it to generate some questions, but then you start questioning the questions.
Dante Healy [00:22:48]:
Exactly. You exercise the judgment. It won't even make the call. It can give you rough ideas if you drag it out of it, but I dare say it won't. And coming back to your point, it is just a tool. It can't, and shouldn't, make judgment calls. It doesn't have the context. Context is more than just data points collected in a system.
Dante Healy [00:23:13]:
It's the politics, it's the nuance, it's the interpersonal dynamics within the team.
John Byrne [00:23:18]:
And I think it was easier before. I think the chatbot interface has made it more difficult. I don't think it's any worse; it is a lot better, actually, than it was a few years ago. But the danger has been that with the chatbot interface, people now start thinking it's more intelligent than it actually is. And they have to remember: no, it's not any more intelligent. It's still the same underlying thing. It can just do things faster and it can do a little bit more, but it still hasn't got true intelligence.
Dante Healy [00:23:55]:
That's on you.
John Byrne [00:23:56]:
You're supposed to have the intelligence and just remember to use it.
Dante Healy [00:24:00]:
Yeah, exactly. Coming back to your point, it's just a tool, and you need to be the craftsman to get the best out of your tool set, and not think every problem is a nail for your AI hammer.
John Byrne [00:24:14]:
Yes. So, yeah, and ultimately keep responsibility. If you're the project manager, the responsibility is yours. If AI tools could genuinely manage the project so that you didn't have to, well, then you'd be out of a job. So the fact that you have a job means it clearly can't manage projects, and it never will, not to the
Dante Healy [00:24:39]:
Extent that a person could. Like, if a lead architect, a project lead, an executive sponsor has a very specific question, they don't want to be going back and forth with an AI only to figure out that the AI doesn't have enough information to give them what they're looking for.
John Byrne [00:25:00]:
Yeah, and there's more to project management than just that anyway. It will never be able to manage. It can schedule a project if somebody gives it the correct information, but that's the manager's job. It can help, you know; it can help design a nice reporting tool. But again, the information has to be checked by a manager; the tool can't do it. I don't think it ever will. Because again, projects tend to be done by humans; there's going to be a human interface somewhere along the line. And that's what a project manager really is.
John Byrne [00:25:32]:
We've kind of discussed it in the past, about change management and this, that and the other, and clashes of personalities and all that. An AI tool is never going to be able to deal with a human. A human is too illogical. And an AI tool, no matter how human-like it may seem, will always be based on the pure logic of its programming, and it will not be able to predict what a human will do outside of that. Which means you'll always need a human who can react and read the room a little bit better. And managing people, even a state-of-the-art tool can't do that. It can just help with the basic stuff. It'll help with scheduling, and it'll get better at helping with scheduling.
John Byrne [00:26:07]:
You'll maybe have to put in a bit less information and it will still be able to do the same amount of scheduling for you and things like that. That's how it will improve. And being used like that is fine. Yeah, yeah.
Dante Healy [00:26:17]:
I mean, it can generate output; that's not necessarily useless. But you have to realize you have to polish it up, you have to review it. It goes back to the Pareto principle, 80/20: though it can possibly do at best 80% of the work, you still have to finish off the 20%, because ultimately a job isn't a hundred percent done until it's done. And actually, it's a great point you mentioned, going on this whole angle of trust. I like to use the analogy of a story called The Sorcerer's Apprentice. For those of you who don't know the story, or have seen the old Disney cartoon Fantasia: there was a sorcerer's apprentice, and he decided to use a spell to make a broom carry water from a well. But he went off, fell asleep, set and forget. And eventually the broom kept pouring water and was flooding the sorcerer's castle.
Dante Healy [00:27:26]:
Then he decided to chop the broom, but the broom multiplied, and it kept multiplying and exponentially compounded the problem, because there was no natural kill switch. He knew how to start it; he didn't know how to stop it. So unlike in the cartoon, generally speaking, with AI you don't have someone to rescue you; it's never literally set and forget. It's up to you to manage it, the inputs and the outputs. And coming back to that point, if you are feeding it garbage, or letting the garbage grow, and this is probably more along the lines of agentic AI, it's not maliciously going to ruin your process or whatever outcome you're looking for, but it will continue to follow the flawed logic until you stop it. So I think there is this problem that AI gives you more of an illusion of control than you actually have. What do you think, John?
John Byrne [00:28:32]:
I think actually what it does is give you the illusion that it is more in control than it is, and then you foolishly hand off that control to it. Whereas the reality is, no, you are the one who's still in control, and you need to be in control and keeping an eye on it. You can't just go off to sleep and then expect everything to be okay when you come back. And that was a very, very simple task. Even then, I'd be nervous about the agentic AI tools; I prefer good old-fashioned algorithms that will only do what they've been told to do and don't take it upon themselves.
Dante Healy [00:29:06]:
Yeah, you know, in our project life cycle, at the end, before you go live, what do you do? You test, test, test, as thoroughly as you can. Ideally, the more you build quality upfront, the less expensive it becomes in a project.
John Byrne [00:29:22]:
And I suppose, you know, you need to look at the people who have invented these tools. Well, they haven't invented them, but have been
Dante Healy [00:29:30]:
Promoting and building them, designing them.
John Byrne [00:29:33]:
Yeah, they're the, you know, tech bros type of thing, and they all live under a similar mantra. That's what they do: move fast and break things.
Dante Healy [00:29:47]:
Break things and then.
John Byrne [00:29:49]:
Yeah, and so you need to remember that that's the mentality of the people who are working on it, and that mentality is actually then being built into it. So on your project, as a project manager, is that really what you want? Are you happy enough to break things? If you're doing a project that is very cutting edge, maybe that is perfectly fine for you and you're happy to break
Dante Healy [00:30:12]:
Things, you have to break things, because you're running experiments. Yeah. But a lot of mature companies have a reputation to protect. So if you're breaking things that impact your brand, then you could be in trouble without those guardrails, that's it.
John Byrne [00:30:30]:
Impacting your brand, or even just impacting the business. The business is so reliant on these controls and these systems, you can't be breaking them. So you do need to think about that: if "break things" is not something that is suitable for your project, then do not use an agentic AI or a generative AI tool that has been designed by people for whom breaking things is actually their business model. It's break things fast, learn from it, move on, try to break something else. That's where we are at the moment. For most projects, certainly the ones we'd be doing, systems implementations and things like that, breaking things is what you want to avoid. You don't want anything breaking if at all possible. So don't hand over control of your project to a system that is designed to break things.
John Byrne [00:31:22]:
Keep control yourself.
Dante Healy [00:31:24]:
Well, don't blindly do it, I'd add to that. Because with AI, the problem isn't that it's deliberately out to get you or make you look bad; it's just a tool, a machine blindly following what could be a flawed command, because it doesn't have any sense of context or ethics unless you build it in. And that lack of true control is what makes the trust piece very difficult, if you're caught out and people start realizing all these chatbots are releasing garbage into the market.
John Byrne [00:32:04]:
But that's like what you said there: its only sense of control is what's been built into it. And that's what I'm getting at — think about who's building it. Their mindset is break things fast, because that's what they're doing, so that's what's being built into these tools as their sense of control. So be very careful; don't become over-reliant on it. As I keep saying, use it as a tool, but double-check everything, verify everything that comes out — due diligence, basically. It will save you time to a certain degree, but it's not going to save you all the time.
John Byrne [00:32:42]:
It maybe will save you 50% of your time. If you try to get it to save you 90% of your time, then what you're doing is handing over control to it, and that's not going to end well.
Dante Healy [00:32:51]:
Yeah, I mean, essentially what you're doing is cutting corners — you're delegating control to someone else. When you use public LLMs, you're running that risk: they could easily switch something. Like with ChatGPT-5, they went for a cost-cutting exercise and immediately the quality dropped, and people were caught out by it. Obviously they've lost trust, even as they try to bring it back. Initially during the launch they dropped their legacy models, which were probably more expensive to run, but that was on them. If your processes rely on their LLMs, that could be a big problem, because suddenly you've got a business that's reliant on someone else's product and you've just built on top of it. You haven't created your own product from scratch, so you've surrendered control, I guess. Moving on.
Dante Healy [00:33:48]:
You know, I think we've covered quite a lot of ground on this, but where does AI really help a project manager, versus where does it hurt them? For me, it helps when it's within its capabilities and you're not exposing yourself as a project manager to too much risk. But I just want to get your take: where do you think AI is helpful in project management, and where does it actually hurt project managers?
John Byrne [00:34:19]:
I think if we could get rid of this chat interface, it'd be much more useful, because the underlying stuff is really good. It's really good for helping you plan, really good for helping you schedule. It's the underlying algorithms — as long as you're using them, they can predict stuff, they can find things. I mean, there are tools that long predate what we consider AI now where you'd throw in a whole load of data and they would find trends, find correlations, find all of this, because they had the algorithms built in, and that was it. And people used them as tools. They didn't just blindly trust the output; they reviewed it and said, oh, that is a trend. This is a thing.
John Byrne [00:34:57]:
That algorithm works well; it could help me predict what's coming, and things like that. So that's where it helps: use it as a tool like that, where you're reviewing it, you're telling it what to do, and then you're looking at the answers. Maybe it's spotting something you might not have spotted, but you're then investigating it — you're not just blankly accepting it, you're testing it. Where it hurts, I think, is this chat interface, because it's fooling people. It's not intended to fool people; it's just meant to make it easier to ask it to do stuff. But it means you have to be very clear with what you're asking it.
John Byrne [00:35:34]:
It sounds great: I don't have to be as clear, I don't have to understand the full algorithm to ask it to do something. But if you don't fully understand what you're asking, you might be asking it to do the wrong thing and not even realize it. That's where it can hurt. It can also lull people — it can make people think that because it's giving an answer in normal language, oh, this is an intelligent being that has reviewed all this data and done it all for me. And no, it hasn't done any of that. It just sounds intelligent; it's still a dumb tool that needs you. So where it helps: it's very good at that type of analysis work.
John Byrne [00:36:10]:
Like helping you plan, schedule, and all the rest. But where it's hurting at the moment is that it's lulling people into thinking it's intelligent when it's not, and lulling people into thinking they no longer have to cast a critical eye over the data. But you do.
Dante Healy [00:36:25]:
I think where it can help is with the real routine admin tasks that tie up a project manager's time; it can fill that gap. For example, I find it does great meeting notes — well, good enough. Especially on digital calls, where you can record the call and it will listen in and generate follow-up actions and key points. That's quite useful. One of the things project managers hate most is being the meeting note-taker. With this you can drive the meeting without having to think in the back of your head: got to note this down, got to note that down. There have been times in the past where I had to pause a meeting for two minutes while I figured out in my head what a note meant — and I'm leading the meeting as well — or else delegate the notes to an assistant.
Dante Healy [00:37:25]:
So I think that really helps; it makes you more productive in that context. But at the same time, what it doesn't do well is sense people. Things can be said in words — text, dialogue — but more is communicated through body language and tone of voice, and it doesn't read between the lines; it can't fully read the room. AI can't read body language, it can't know the meaning behind certain words, and when you're looking for empathy, it doesn't have it. It can only take one form of input and process that information.
John Byrne [00:38:13]:
And I mean, even things like — I know somebody will be listening and saying, oh, but it can recognize facial expressions and stuff like that, and some of it can; they are testing that. But here's the thing. Yes, it can recognize from your facial expression that you are now angry. What it can't do is say why. Whereas a human probably can: they know something was said, and you held a poker face for a certain amount of time, but then you just couldn't hold it anymore. The system, at best, will say: well, this is what was said.
John Byrne [00:38:43]:
And he looked angry — when actually what he's angry about was said several minutes earlier in the discussion; he just managed to hold the poker face until then. It can't do that. As you said, what it can do is tell you what was said. It can transcribe what was said really well. But it can't attach importance to it.
John Byrne [00:39:04]:
It can't say what the most important thing said was. It just knows what was said, and it treats it all equally.
Dante Healy [00:39:08]:
And it can't prioritize, because it doesn't have the context to say what the priorities are. What are the moral guardrails? What are the boundaries that shouldn't be crossed? It will just process things as it's trained or as it's directed. I mean, you can't really put political nuance on a spreadsheet — you can try, but in real time I doubt it. So yes, AI does some basic logic tasks quite well, but it needs that data. Coming back to the point: it has good uses, but it's not perfect by any means, unless there's a dedicated tool I'm not aware of.
John Byrne [00:39:57]:
Again — and I think we've kept coming back to this as we talk — it has usefulness to help you do things, but it cannot do them itself. Sometimes it'll make a suggestion, and it's a good suggestion. But you need to know: is this a good suggestion or a bad one? Because other times it'll make a suggestion and it's bad, or it will produce something, create something, that's bad. It's still on you to know what you can use and what you can't.
Dante Healy [00:40:34]:
Yeah, it can't make judgment calls. For example, it can't recognize a mistake if the data's bad and it hasn't been trained to spot those mistakes. It can't check for bias if it hasn't been told to check for bias and it's just been fed biased data. With all of that, it's very hard to establish trust in the AI if you're looking to inject it into a process without giving it a high degree of human oversight. So I think what we keep coming back to is the whole idea of control. Control is really about having the ability to manage the AI's actions — it's not about off or on, it's about steering the AI to get the desired outcomes. And that's hard when the AI's goal might conflict with the user's goal.
Dante Healy [00:41:35]:
So you have to align the intent, and that's not always easy, because the user might not be able to articulate it in a way the AI can understand and follow correctly. So we need ways to solve that, and we need to build control points so there are checks and balances in place as the AI processes information — to make sure the human ultimately has a say in whether whatever the AI produces gets used or not.
John Byrne [00:42:10]:
Ultimate control has to stay with the human. If you pass that control over to the AI, you've lost.
Dante Healy [00:42:16]:
Yeah, you might as well not be there. And this is what we're seeing, I think, with companies letting people go. That's a hugely risky strategy, because AI is limited, and people who think it's this all-encompassing machine are clearly assuming they're going to invest in, I don't know, quantum computing — and that's not even a commercial reality, because it's not feasible yet. What you're getting are ropey automations based on fuzzy logic. So you're going to run your business on hallucinations? You might as well be in a casino gambling your company's money away.
John Byrne [00:42:56]:
Yeah, and these systems aren't going to get much better, because so much of the content out there now is AI-generated that the models are just reinforcing their own biases. I actually think it'll get worse. At the beginning, as a project manager, you needed to do a lot more interaction with it and to correct it. Then it reached a point where your interaction was less, because it was actually getting more right. But I think now, as we progress, it's getting less correct and more wrong, so your interactions are having to increase again.
Dante Healy [00:43:33]:
Yeah. And suddenly businesses will wake up to the idea that the AI output is actually garbage: we need the people we let go back. I'm kind of optimistic — maybe we get them back. But we need people who are good with AI, rather than people who just say, "I'm a prompt engineer," when the stuff they produce is garbage because they're not really prompt experts; they just talk to the AI with really vague commands.
John Byrne [00:44:05]:
But again, even with prompts — I know we ran an experiment a long time ago, a very simple one. You were asking it for a bit of code that would generate a top-10 list, and it gave you code that generated a tree. It took about 20 to 30 minutes before it actually gave you the corrected code to generate a list of 10 things. I think it was Python code, was it?
John Byrne [00:44:35]:
Or.
Dante Healy [00:44:36]:
Yeah, yeah, Python code. Or was it VBA? Oh, VBA.
John Byrne [00:44:40]:
All right.
Dante Healy [00:44:41]:
I tried it with the Python. And this was before Canvas on ChatGPT — now Canvas means ChatGPT will export to Excel for you without any formatting trouble. But back in the day, yeah, that was a good example of how AI really doesn't do precision tasks very well.
John Byrne [00:45:04]:
But it wasn't even just that it didn't do precision tasks very well. It meant you needed to be a good coder, and to understand the code, to know what was going wrong.
Dante Healy [00:45:13]:
I knew what was wrong, yeah. I had to manually fix it.
John Byrne [00:45:16]:
Yeah. So, again, the tools don't replace expertise. They can speed things up a little bit — maybe you had about 75% of the code created for you, so when you had to manually adjust it, you didn't have to start from scratch and write the whole thing. You just needed to adjust it. But you still needed that expertise, and I think that's the key thing here.
John Byrne [00:45:41]:
It will not replace the expertise; it will just help speed up the monotonous piece you have to go through in order to apply the expertise.
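The experiment described above — asking for code that produces a top-10 list — boils down to something like the following minimal Python sketch. This is not the original code from the episode; the data and function names are invented for illustration.

```python
# Illustrative sketch only: not the original experiment's code.
# Goal: from a set of scored items, produce a ranked top-10 list.

def top_ten(scores):
    """Return the ten highest-scoring (name, score) pairs, best first."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:10]

if __name__ == "__main__":
    # Hypothetical sample data.
    scores = {f"item_{i}": (i * 7) % 13 for i in range(25)}
    for rank, (name, score) in enumerate(top_ten(scores), start=1):
        print(f"{rank:2d}. {name} ({score})")
```

The point of the anecdote stands even for a few lines like these: someone still has to be able to read the code and notice when a generated draft produces the wrong structure entirely.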
Dante Healy [00:45:51]:
The tasks that it actually automates are the one-and-done tasks — say, a single presentation. If I wanted it to generate code that I would be using more frequently, I wouldn't even want to have to do that last 30% myself; I'd want it to be a hundred percent right at the start, and flexible enough to reuse with minimal or no modifications. So I think the problem with AI-generated stuff is it needs guardrails. I've seen applications — website builders. In fact, full transparency: our podcast's website uses an AI website builder, and what it does is use libraries. The AI piece is just generating blurb in the components that involve text.
Dante Healy [00:46:51]:
So it's not actually that intelligent. It says: if you want a project management podcast website, here's the prompt. Then it will generate a website — it will select the template and start populating the text boxes with blurb. And ultimately you're going to have to replace it all anyway, because it's not going to be custom to you. Even with a good prompt, you're going to say: this is very generic, it's clearly AI-written, and I don't want that. So it's one of those things where AI is useful, but to make it really usable you need someone who knows — and it's usually a human — how to build the—
John Byrne [00:47:42]:
System. And they need to understand what it is they're building. It's not just that you need somebody who knows how to put the prompt together properly to create the website for you; you need somebody who actually understands how to create a website, because even then it's not going to give you a good website. It's going to give you a good start to a good website, and you need to finish it. And it's the same with project management work: it will help you speed up some of it, like the scheduling. You don't have to sit there trying to schedule everything with pen and paper the old-fashioned way — you just input the data and it will do the scheduling for you.
John Byrne [00:48:18]:
But then you still need to review that scheduling. It won't get the little things — it'll get the big trends, but that's it. You need to review the detail and think: yeah, that's grand. But it did save you a lot of time, because it gave you the first draft.
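The kind of deterministic scheduling John describes predates modern AI and can be seen concretely in a minimal sketch, assuming Python and made-up task names and durations — a forward pass that computes each task's earliest start from its dependencies:

```python
# Illustrative sketch only, with made-up tasks: a classic forward pass that
# computes each task's earliest start time from durations and dependencies.
from graphlib import TopologicalSorter

durations = {"design": 5, "build": 10, "test": 4, "deploy": 1}
depends_on = {"build": {"design"}, "test": {"build"}, "deploy": {"test"}}

def earliest_starts(durations, depends_on):
    """Earliest start of each task: latest finish time among its predecessors."""
    start = {}
    for task in TopologicalSorter(depends_on).static_order():
        preds = depends_on.get(task, set())
        start[task] = max((start[p] + durations[p] for p in preds), default=0)
    return start

print(earliest_starts(durations, depends_on))
# design=0, build=5, test=15, deploy=19 — the draft schedule a PM then reviews
```

A tool computes this instantly for thousands of tasks; the review step — asking whether the dependencies and durations were right in the first place — remains the human's job.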
Dante Healy [00:48:35]:
Yeah. But say it gets you to launch, because you or your team have invested in all the other pieces — what I've seen is that beyond launch, making it maintainable is hard. Once we've gone past the initial launch and we're going into BAU, the application needs to be maintained. You can tell what's most likely AI-generated, or built by application consultants who aren't technically strong and have used AI code: when it's passed over to another vendor, the vendor realizes there are hundreds of lines that aren't really maintainable, there are no reusable components, the documentation sucks, and they need to rebuild it to make it maintainable for themselves.
John Byrne [00:49:24]:
I'll just be clear, in case anybody's wondering: when I'm talking about AI tools and project management, I'm talking about tools to manage the project — not about the end product as well.
Dante Healy [00:49:36]:
Yeah, sorry. I'm going off on a tangent — I'm saying project managers who work on digital projects need to be aware that shortcuts will come back to bite you, and AI is no exception to that rule. Wow, I'm actually scared, because the last piece is really: what's the future of AI in all this? I've been looking at the World Economic Forum's Future of Jobs 2025 report, which was released earlier this year, and a lot of it, I dare say, isn't too bad. You usually feel these reports hype up something, or a trend, in order to attract attention. But it seems like skill gaps — which I agree with — are going to be the problem. If people are doing business transformations, you're going to find there are fewer people capable enough to deliver project management. Change is the new normal; the skills are becoming more important.
Dante Healy [00:50:48]:
So that's positive for project managers. But what's needed from project managers is more critical thinking, resilience, flexibility and — the same buzzword — agility, plus the human element: leadership and social influence, making things happen with groups of people. From the AI side, we need to think about — and this is not in the Future of Jobs report — making sure AI is used for precision and not just for generating loads of slop, because that can lead to bad outcomes if it's used beyond its capabilities. Obviously the low-level, rules-based jobs can be automated even more easily with AI. And I think the good news for project managers is that, regardless of AI, it's still one of the most important roles in business. John, what do you think? Any thoughts from your side?
John Byrne [00:51:56]:
Yeah, I'd agree. I think there's going to be a bit of a journey from where we are now. It will get a little bit bad for a while, in that a lot of companies will try to outsource human jobs to AI. Then they'll realize the AI is not capable of doing it, and it will rebound. But there will be that loop. Not every company will do it; some companies will lag behind, and they'll actually end up more successful, because while the companies that went ahead and outsourced all the decision-making and the important stuff to an AI tool start failing, the others will be able to get ahead. I think that will happen maybe in the next five years, and then after that things will return to normal, and experienced project managers will be back in high demand, because everybody will have realized AI cannot replace expertise. It can only replace the, you know—
Dante Healy [00:53:00]:
The lack of expertise.
John Byrne [00:53:03]:
It can replace monotonous stuff; it can't replace everything. Ask the question: would you hire somebody just out of school to do this job? If your answer is, oh yeah, it's data entry — okay, AI tools can probably replace that. But if your answer is, hell no, I wouldn't hire somebody just out of school to do this job, it requires too much knowledge, too much experience to do it well — then an AI tool can't do it either, no matter how intelligent it sounds or looks. All the iterations that are improving these tools are only making them look and sound more intelligent; they're not actually making them more intelligent. There is a difference between looking and sounding intelligent and being intelligent.
Dante Healy [00:53:48]:
That's very good. And beyond the market dynamics of a project management career, from a commercial perspective I think a company's portfolio or project management system will be its most vital source of intelligence. Having the project artifacts in a single place will become more important, as that becomes the single source of truth for the data — and that is what will feed the AI models used by the portfolio managers and program managers who sit above the project managers: understanding history, understanding trends, understanding who's successful. So project managers need to be more aware of data mastery; they need to know how to properly tag and log every piece of their data. The idea of an old, messy, standalone spreadsheet that only covers one project or program will be gone. And beyond that, the systems put in place by forward-thinking companies will end up becoming predictive assets.
Dante Healy [00:55:00]:
So a real LLM, or a dedicated AI that can do the job, will enable portfolio managers to make better decisions on project prioritization and resource allocation. That would be my thought.
John Byrne [00:55:20]:
I think the clear thing there, though, is that it would facilitate the program manager to make that decision.
Dante Healy [00:55:26]:
Correct. It will streamline the decision-making, it will speed it up, but it won't make the decision.
John Byrne [00:55:32]:
And that's the difference. That's where success or failure will be. The companies that fail will be the ones that let the AI make the decision. The companies that succeed will be the ones that have an expert make the decision, but use an AI tool to—
Dante Healy [00:55:44]:
Help them gather the information to come to it. Yeah, exactly. And it all comes back to — my final thought on this — you as the user. You need to be leading the process; you are the human training the model, as it were. You need to give the AI precise data, you need to be there to catch the errors, and where you delegate, you need to force understanding — because if it can't do something, you have to be able to do it. So it's about what you as a project manager can do.
Dante Healy [00:56:26]:
So sharpen your own skills, because ultimately you're the one providing the real value, and that's the strategic insight. A project manager's job in the future will be about managing projects better: getting the data right, being an effective filter, and staying on hand, alert and vigilant, to correct the AI and catch mistakes before they travel further down the line and become issues. Ultimately it's you who's going to build the company's AI advantage — the AI can't do it by itself. What do you think, John?
John Byrne [00:57:06]:
Yeah. I suppose certain things need to be improved in project management — we've gone through many of them — and AI tools will help project managers improve them. AI on its own will not improve them; it's just a tool to help you improve your project. It can help you file things, it can help you find things, but it's not going to help you interpret things. You need to be able to interpret. That's where the skill is; that's where the advantage is that humans have.
Dante Healy [00:57:38]:
Exactly. I mean, you can't just suddenly train an AI to have a voice and then ask it to negotiate with suppliers on your behalf. It's not going to happen.
John Byrne [00:57:49]:
And it never will. No two situations are exactly the same. So even if you train it thoroughly on all the situations you've faced so far, it will then try to force those solutions onto the new situation. It will not come up with anything effective for the new one. And that's where—
Dante Healy [00:58:07]:
Thank you, John. Well, with that in mind, those were our thoughts on AI. If you found it interesting, please let us know in the feedback form. Also, I'm thinking about doing a course on AI for project managers. It will be very applied, based on my own personal use cases and how I use AI. And despite how critical we've been, I do believe it is a useful tool.
Dante Healy [00:58:34]:
If you're interested, I'm going to add a form on the website where you can register your interest. If I get enough interest, I will proceed. I haven't decided on the format, but I'm thinking of blended learning, with virtual classrooms supported by instructional videos. So with that in mind, John, thank you. A pleasure, as always.
John Byrne [00:58:58]:
Thank you, Dante.
Dante Healy [00:59:01]:
That's it for today's episode. If this sparked something useful, please share it with a fellow professional. And if you're after more edge in your work, stay tuned — new episodes land every two weeks. Thanks for listening to Business Breaks, your Project Management Edge, and speak soon.