Working Conversations Episode 248:
Leading Through Uncertainty: Three Bold Predictions for 2026
If the last few years have taught us anything, it is this: uncertainty is no longer a phase. It is the environment leaders are operating in every single day. Systems feel more fragile, decisions carry higher stakes, and the margin for error keeps shrinking.
Many leaders are quietly asking the same question: How do you lead well when the ground keeps shifting beneath you?
In this episode of Working Conversations, I start by looking back before looking ahead. I revisit the predictions I made for 2025 and reflect on what played out as expected, and what those outcomes reveal about the signals leaders often overlook. This reflection sets the stage for a bigger question: if these patterns are already here, what do they suggest about what comes next?
From there, I share three bold predictions for 2026 and why I believe they will fundamentally shape how leaders think about risk, responsibility, and decision-making.
First, I explain why we should expect more frequent and more visible system outages in large cloud infrastructures. As our tools grow more complex and tightly connected, small failures can ripple quickly, turning inconvenience into disruption.
Next, I turn to AI and the growing challenge of determining what is real, what is synthetic, and who is accountable when the line blurs. As AI-generated content becomes harder to detect, leaders will need deeper partnerships with HR and legal teams.
Finally, I make the case for a shift that may feel counterintuitive in a fast-moving world. I believe slow, deliberate decision-making will become a critical leadership advantage. I explore why leaders who pause, ask better questions, and resist reactive speed will be better equipped to handle complexity, earn trust, and make decisions that hold up over time.
This episode is not about predicting the future for the sake of being right. It is about helping leaders recognize emerging patterns, prepare for what is likely ahead, and lead with clarity in an environment that rewards thoughtful action over panic.
If you are leading a team, advising senior leaders, or trying to make sense of where work and leadership are headed next, this episode will help you think more clearly about uncertainty and how to navigate it with intention.
Listen and catch the full episode here or wherever you listen to podcasts. You can also watch it and replay it on my YouTube channel, JanelAndersonPhD.
LINKS RELATED TO THIS EPISODE:
Episode 197: 2025 Workplace Predictions
Episode 240: How Big is Too Big? What the AWS Outage Can Teach Us
EPISODE TRANSCRIPT
Every year around this time, I do something a little risky. I make predictions. Not the fluffy trend report kind, but the kind that make leaders just a little bit uncomfortable because they suggest that the ground under our feet is shifting faster than we might like to admit. So in this episode of the podcast, I'm sharing my three predictions for the coming year, 2026. And they all have one thing in common. They're not really about technology. They're about leadership under conditions of uncertainty. We're going to talk about why big systems are likely to fail more until they stabilize.
We're going to talk about why leaders need HR and legal partners more than ever, and why the leaders that thrive the most might not be the fastest, but the most deliberate. If you lead people, if you make decisions, or if you are responsible for outcomes that you don't fully control, well, this episode is for you. So let's get into it. All right, but as you know, before we get into the predictions for the coming year, I like to do a quick recap of the predictions that I made for the previous year and see how well I did. So we're going to start with that recap of my 2025 predictions. The first one was that hybrid work would normalize, as opposed to a full return to office or staying fully remote. I predicted that hybrid work was going to be the new norm, and I shared how I thought businesses would navigate this shift and what it would mean for you and your team.
And as of this recording, as we close in on the end of 2025, 70% of US Fortune 500 companies are allowing hybrid work and, again, having it as one of the normalized options. So I would say I was fairly accurate in that prediction. We are seeing some companies that have been mandating return to office. The suspicious minds among us would say that is because they want to reduce their workforce without laying people off and having to give layoff packages. Whether that is true or not, I'm not going to put money on it. It may or may not be true. But again, with 70% of Fortune 500 companies saying, yeah, hybrid is working for us, we're going to say I locked that one in. My second prediction was that generative AI was going to become mainstream over the course of 2025.
So in that episode where I made that prediction, I talked about how people would begin to harness this technology more to enhance productivity and creativity without losing the human touch. And if we want to say whether or not that prediction was on point, well, I think right now it is harder to find somebody who is not using generative AI in their work. Let me give you an example. I'm on a board of directors, and another board member a little over a year ago was really taking issue with AI. She's in a highly, highly regulated industry, and she was just like, not on my watch. That's not happening here. We're not using it. We can't use it.
Well, I talked to her not that long ago. She is now fully on board with using generative AI, and she was having a hard time remembering that that was her position just a little over a year ago. So I think I was spot on with that prediction that in 2025, generative AI would become mainstream. My third prediction for 2025 was that we were going to be bringing humanity back into the workplace on a large scale. So I explored how organizations were rethinking workplace culture to prioritize authentic relationships, to prioritize mental health benefits, and really bring back that sense of purpose that people have in their work. Now, here I think I was spot on, but I don't think organizations have fully caught on yet. I think organizations are still struggling with this one. I think they're still working on it.
I think their intent is absolutely in the right place. And so I think that people generally agree with this prediction as being a good thing for organizations and organizational life, but I think they're finding it hard to carry it out. And the reason for that is partly generative AI, but also that people are being asked to do more with less. I was just talking with a client earlier today, and they were explaining that every time a position becomes open, they don't have the funds to backfill it. They are tightening their budgets and they are just making people do more with less. And you know, again, some of that is generative AI and some of it is just people being stretched thin. So I think organizations want to do a better job at bringing the humanity and the human touch back into the workplace. I think they're just continuing to struggle with it, though.
So I would say on that prediction I was spot on, but I don't think organizations really have the resources to do it yet. They need more training, and they need their leaders to have a better sense of high touch and to deliver that at scale. So I think they're working on it, but we're just not there yet. All right, well, let's move on then to the predictions for 2026. And again, like 2025, I have three of them for you, and some of these are going to sound a little bit familiar, because a couple of these I've talked about over the past couple of months.
All right, let's get into my 2026 predictions. My first prediction for 2026 is that there will be more outages and failures of big cloud-based systems like AWS. If you heard my episode a couple of months ago about the large-scale AWS outage, and how these large systems that are so complex and so interconnected are having a harder and harder time maintaining stability, you'll know where I'm coming from. And if you haven't, we'll link that one up in the show notes so you can go listen to it. But I believe that in 2026 we're going to see more high-profile cloud outages, and not because the cloud is failing. If you listened to that previous episode, you'll know that in these highly interdependent and complex systems, it's really hard, because those systems have become more complex than the people who initially created them ever intended them to be. We're asking the cloud to do things it was never originally designed to do.
Now, part of my prediction here is that there are going to be more system outages and failures initially, but then we're going to start to see that swing to something a little more nuanced. Cloud AI is getting more and more sophisticated, and I think AI will eventually help stabilize these systems. But first it's going to expose just how fragile these systems already are. So it's going to turn a corner, maybe at the end of 2026 or maybe in early 2027, as those cloud AI systems get more sophisticated. Now, more outages don't necessarily mean worse systems. They may actually be a sign of systems being pushed to their useful limits. AI is going to get better at anomaly detection.
In that earlier podcast episode, the one we're linking up in the show notes, I described it as being like students passing in the hallway of a really large high school, like the one my two youngest kids go to: when there's a jam-up of things in the system, it causes issues downstream. AI is getting better at detecting those things, like when a student stops to tie their shoes and all the people behind them back up and get stuck. So AI is getting better at anomaly detection. It's also getting better at predictive monitoring. We're not relying on humans to be monitoring these systems anymore, because we can build AI monitoring into them. And AI is getting better at automated rollback when it detects a problem. It's getting better at rerouting and rolling things back so that problems can get caught while they're small and get fixed right away, instead of becoming the large-scale system outages we've seen in 2025.
AI is also getting better at load balancing and rebalancing when there are traffic issues. AI is actually very good at pattern recognition in noisy systems, and over time it's going to do a better and better job at surfacing weak signals that humans miss. And so I think AI is going to reduce recovery time before it reduces failure frequency. In other words, outages are still going to happen, but I think they're going to be shorter, more localized, and they're going to get resolved faster. And again, I don't think we're going to see that last piece, AI reducing recovery time before it reduces failure frequency, until a good solid year from now. Maybe at the end of 2026 or early 2027 we're going to start to see that tip. But I believe we had better buckle up and be ready for more system outages across 2026, because these complex systems are only getting bigger and more and more products are relying on them.
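To make the anomaly-detection idea concrete, here is a minimal toy sketch of the kind of signal an automated remediation layer could key off of: flagging metric samples (say, request latencies) that spike far away from a rolling baseline. This is purely illustrative, my own example rather than code from AWS or any particular monitoring product, and the window size, threshold, and simple z-score approach are all assumptions.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Return indices of samples that deviate more than `threshold`
    standard deviations from the rolling baseline of the previous
    `window` samples."""
    history = deque(maxlen=window)   # sliding baseline of recent values
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:   # only judge once a full baseline exists
            mu = mean(history)
            sigma = stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                anomalies.append(i)  # caught early, before it cascades
        history.append(value)
    return anomalies

# Steady latency around 100 ms, with one spike partway through.
samples = [99.0, 101.0] * 15 + [500.0] + [99.0, 101.0] * 5
print(detect_anomalies(samples))  # the spike's index is flagged: [30]
```

In a real system, each flagged index would be a trigger for the kind of automated responses described above: rerouting traffic, rolling back a deployment, or rebalancing load while the problem is still small and localized.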
Again, everything from the systems that our kids use to do their schoolwork, to how we order food delivery, to where our data is stored, and all the other types of systems that we're interacting with every day, all day long. So again, my prediction here is that in 2026 we are going to have more system outages and more failures of these big systems like AWS. All right, prediction number two. You are going to need your human resources and legal teams more than ever. In 2026, I believe that leaders will rely more than ever on those HR and legal business partners. Not because employees are suddenly untrustworthy or anything like that, but because evidence is untrustworthy. It is harder and harder to discern what is real in a world that has so much generative AI in it. When text, audio, video, and screenshots can be faked, leadership becomes less about reacting fast and more about responding responsibly and fairly.
Now, I want to caution here against over-reliance on HR and legal, because that is absolutely going to erode trust. We need to find a balance where there is a trustworthy system and things are being double-checked. AI makes people suspicious, and suspicion is corrosive when it is pervasive throughout a system or an organization. So HR and legal are going to help leaders answer questions like: When do we open a formal investigation? When do we preserve devices and messages? How long do we hold on to things? What can we ask for legally and ethically? What are we obligated to disclose? And how do we avoid retaliation, and the risk of really subversive behavior, when we are trying to verify things? In other words, leaders are going to need that expertise and help from their HR and legal teams to ensure that they're not accidentally creating a bigger problem than the one they're trying to mitigate. Now, this is the part that most leaders are going to underestimate. It's not only fraud that we're talking about here; it's really the day-to-day norms. So we also need to be asking our legal and HR business partners questions like: When are employees allowed to use AI? Are they allowed to use AI to write performance self-reviews and self-reflective pieces? Can candidates use AI during interviews? What counts as misrepresentation? Do you need disclosure, like "this image or this audio is synthetic or AI-generated," especially in internal communication, or even when we're creating communication for our external partners? What do we disclose and what don't we? And how do we handle AI-assisted grievances, complaints, or documentation? When an employee is upset with the organization, how do we handle that, especially if their complaints are AI-assisted? We need policy here. We absolutely need policy here.
And policy isn't about being punitive. We want to make sure that policy is written both on the employee's behalf and on the organization's behalf, so the policy is written very fairly and even-handedly, because that's what's going to keep that trust undergirding the system, that trust that is so important. So policies, again, are not about being punitive. They're about keeping the navigation working in organizations. It's about keeping the roadways moving smoothly in both directions. So again, prediction number two is that leaders are going to need to lean on their human resources staff and their legal business partners more than ever. And I really want that to be a true partnership.
All right, my prediction number three is that organizations will rediscover the value of slow thinking in leadership, after paying a steep price for speed. So here's what I mean. In 2026, organizations will start to explicitly reward, or at least I hope they do, deliberate, reflective leadership. Not because it's trendy, but because fast, reactive decision making in an AI-accelerated environment is going to prove to be too costly. So let me break this down for you a bit. AI accelerates action, not judgment, and leaders are discovering that judgment is the bottleneck. We're able to create action faster and faster with AI, and that is going to lead to some costly mistakes. And that's where I think we're going to rediscover this idea of slow thinking, of slowing down and making data-driven decisions.
Not data that's just run through AI, but data-driven decisions that are informed by our own thinking, by leaders' own thinking. This isn't all AI-driven either. Daniel Kahneman's book, if you're familiar with it, Thinking, Fast and Slow, from way back in 2011, describes how the human brain thinks in both fast and slow ways. The fast category, what Kahneman calls System 1, is fast, automatic, frequent decision making: quick thinking that's emotional, stereotypical, unconscious. A handful of examples that Kahneman discusses in the book as being representative of fast thinking would be completing a common phrase, like "war and..." where you fill in the blank with "peace." Another example of fast thinking would be displaying an expression of disgust when you see a gruesome image. It doesn't take your prefrontal cortex, your high-executive-function thinking, to see a picture that is disgusting and have the expression of disgust come across your face.
Or solving basic arithmetic like two plus two. Two plus two is four; we immediately know that. We don't actually have to think of two objects and another two objects and do the computation. We just know it. Reading text on a billboard as we're driving: your brain can absolutely do that rapidly. That's a very fast thinking exercise. Driving a car on an empty road, or even driving a car in traffic when you're going the exact same place that you always go: your brain is just automatically doing that. Those are all examples of fast thinking.
Now, Kahneman also describes slow thinking, or what he calls System 2. This is slow, effortful, logical, calculated, conscious thinking. The kind that we don't do over and over and over. Now, we might spend a fair amount of our day doing that slow thinking, but each piece of it is going to be something different. So let me give you some examples of what Kahneman talks about as System 2 thinking. Looking for a person with a particular feature. You might imagine Where's Waldo kind of thinking: you're looking at a complex image, or a complex situation, and you're trying to pick out something very specific, like a person with a particular feature.
Another example would be determining the appropriateness of a particular action in a social setting. Like, ooh, should I say this? Should I do this? Is it okay? Would a person be offended if I did? Thinking that through and reasoning it out rationally is slow thinking. Or, if we want to go back to a driving example: parking in a tight parking space. Now, except if you're like me, because I claim parallel parking as one of my superpowers. But even still, I may be sizing up that spot and thinking about it much more carefully than if there were a whole open block and I could park anywhere; then I could just pull right in at the end of a parking spot and be done with it. It takes more of that calculation to pull into a tight parking spot.
Determining the price or quality difference between two products when you're making a decision. Decision making of any sort, really, when you have to sort through a number of complex variables. Determining the validity of complex logical reasoning, maybe somebody else's reasoning, and thinking through whether or not the choice they made was valid. And if we want to use a math example, multiplying a couple of two- or three-digit numbers longhand, the kind that takes multiple rows, without a calculator. Those are all examples of System 2 thinking, where it's slow, calculated reasoning. Now, again, we've had this framework from Kahneman since 2011, but when we add AI to it, man, the thinking time that some of that System 2 thinking requires just gets blown out of the water, because AI is going to cut the thinking time to a tiny fraction by doing the supposed thinking for us. It's giving us instant summaries, instant recommendations. Like that compare-and-contrast I was talking about: it can do that in a split second for us. Instant responses; all of these things come so instantaneously. But I predict that in order to make good decisions, and in fact the best decisions, leaders are going to need to slow down on purpose.
Now, they can still use AI, but as we've seen, AI can make mistakes. So don't just go with the first thing that AI gives you, but really interrogate it, second-guess it, pressure test it, load test it, if you will, so that we get the best decisions even when we are using AI. So I think the leadership skill of the future is not decisiveness, but delayed, data-driven decisiveness. Really slowing our roll, slowing ourselves down, and pressure testing AI when we are using it. And that is my third prediction: that leaders, if they want to be effective leaders, are going to need to be deliberate and reflective in their decision making, and that organizations will hopefully rediscover the value of slow thinking.
Okay, so quick summary of my predictions. Prediction number one, there will be more system outages and failures at first and eventually AI will be better at detecting them. They will still be frequent, but they will start to get shorter. Systems fail when they are pushed too far, too fast. Prediction number two, leaders need their HR and legal teams more than ever. Evidence can't be trusted at face value anymore, in large part because of AI. Prediction number three, slow thinking is needed more than ever. Leaders who move fast without thinking will make irreversible mistakes.
All right, now, as we wrap this whole thing up and head into 2026, here's what I want to leave you with. None of these predictions are about panicking. They are all about thinking ahead and being prepared. Systems will fail, not because the systems are bad or broken, and not because people are careless, but because complexity is on the rise. And when we invite complexity, system failure is inevitable. Evidence is going to get murkier, again not because most employees are dishonest (there may be the occasional person here and there who is), but because AI is really blurring the line between what's real and what's fabricated. And that's even when employees have their own and the organization's best interests at heart.
And leadership will require more pause. Not because leaders are inept or weak, but because speed without judgment is dangerous. And the leaders who do well in 2026 and beyond won't pretend that they have all the answers. They will ask better questions, they will slow down at the right moments, and they will design their decisions with people and systems and consequences in mind. Now, if this episode sparked a question for you, or made you rethink how you're leading, or just gave you pause, I would love for you to share it with a colleague or leave a review. And as always, thank you for being part of these working conversations, and I will catch you next week.