
Working Conversations Episode 238:
Garbage In, Garbage Out – Now with AI

 


Everywhere you turn, someone is talking about how AI is transforming work.

It’s writing emails, creating reports, summarizing meetings, and helping us work faster than ever before.

But here’s the problem: it’s also creating something I call “workslop.”

That’s the flood of low-quality, AI-generated content that fills our inboxes and makes it harder to tell what’s valuable and what’s just noise.

 You’ve probably seen it yourself. The email that sounds professional but says nothing. The social media post that feels familiar because you’ve read a hundred just like it. The report that looks fine on paper but lacks real insight or care.

 In this episode, I talk about why this is happening and what we can do about it. The issue isn’t the technology—it’s how we use it. When we skip the human parts of work like empathy, context, and iteration, we end up with work that’s fast, but not thoughtful.

I share how “workslop” is really a design problem. It’s not about whether we use AI, but how we design the process around it. I’ll walk you through how to bring more intention to your AI use so that what you create is meaningful, useful, and built on real understanding.

You’ll learn why low-quality AI work damages trust and productivity, how empathy and context can make your AI output stand out, and a simple approach to designing better, more human-centered work with AI.

This episode isn’t about turning away from AI—it’s about using it better. Because the quality of what we get from AI depends on the quality of what we put in.

If you want to make sure your AI-assisted work adds real value and helps you stand out for the right reasons, this episode is for you.

Tune in now on your favorite podcast platform—or watch the replay on YouTube at JanelAndersonPhD.

If this episode made you stop and think, share it with a colleague or friend who’s using AI in their work. Let’s make sure we’re not just working faster with AI, but working smarter—and more humanly.

EPISODE TRANSCRIPT

Have you ever opened an email or a document that you received from a colleague and instantly thought, wow, did they even read this before they sent it? It looks like it was entirely generated by AI? Well, it's not entirely AI's fault when you get those things. Enter the term “workslop.” This term has been coined as a mashup of the words work and slop, describing the lazy, low-quality output that people produce when they rely too heavily or too carelessly on AI.

Now, some people will blame AI for lowering our standards, or say it's somehow AI's fault if work has been done sloppily when AI is involved. But it's actually the humans who are to blame when workslop is the product. And some interesting research, which I'll get to in a few minutes, shows that we are not necessarily judging AI harshly. We are judging each other harshly, at least those of us who are using AI responsibly, when that workslop lands in our inbox.

Okay, so this isn't necessarily a tech episode, it's a design episode. We're going to talk about workslop. We're going to talk about how workslop happens when people skip the design process, when they don't have any empathy for the person who's receiving the output from them. They're not iterating it, they're not designing it, and they're certainly not testing it. So that's where we're headed with today's episode and we're going to get to the bottom of workslop, why it's a design issue and what you can do about it.

Okay, so let's talk about the definition of workslop. I've got a couple of different definitions to share with you. First of all, it is low-quality work where context is missing, it's often error-ridden, and it is clearly not done by a human. And usually there are some tells that it is AI generated, so people can spot it. Like, oh my gosh. I got an invitation to an event not too long ago that was clearly produced by AI, everything from an emoji that went with every single bullet point to overly enthusiastic terms and tone throughout the whole thing. And I was like, oh, okay. I know the event organizer, and I generally had, or have, I'm not sure which anymore, respect for the event organizer. But when I read that invitation, which was so clearly AI generated, it just was not in their own tone of voice. The person who was sending it was not speaking in their tone of voice. And yeah, it was workslop. So I've been on the receiving end of workslop, fortunately not from any of my staff members. We have a pretty responsible use case for AI in the business here at Working Conversations, and I'm happy to say that the folks on my team are using AI responsibly. I'm not getting workslop in my inbox from my coworkers or from those who report to me, but I certainly have received it from others in professional associations and other places.

All right, so we also want to contrast workslop with well-designed AI work. I think of something that is, you know, polished--it's been checked by a human, it's been collaborated on between a human and an AI tool, and it is certainly aware of the context, not only of the work product itself, but also the context it's being sent into. So AI itself is not sloppy. It's the use of AI that can be sloppy. And, you know, we can think about this in terms of user experience design.

Workslop is like launching a product prototype without ever doing any user testing of it. Sure, it's fast, but it's also full of friction. It's full of errors and bad experiences for whoever is on the receiving end of it, whoever has to use it. There's some recent research conducted by the Stanford University Social Media Lab and BetterUp Labs. They studied the phenomenon of workslop, and they defined it as follows: AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task. Okay, that is their actual definition of it.

So it shows up, and at first blush, it looks like it might be, you know, what you asked for or what you were expecting from your colleague. But then as you dig into it, you realize, oh, this is totally AI generated, and it was not collaborated on with a human. It was not checked by a human. Now, in that particular research study, 40% of the people interviewed report having received workslop from a coworker in the last month. And what that often resulted in was pushing the actual work, the substantive changes and updates needed, or the tough conversations about the low quality of the work, onto the recipient. So when you're on the receiving end of workslop, it usually results in more work for you, because you either have to fix something, update something, edit something, or go have a difficult conversation with the person who sent you the workslop. So it's not just a productivity issue. There's also a social-emotional impact as well.

That impact shows up whether you have the conversation with the person who sent you the workslop or choose not to, and many of the folks in the study chose not to; they just dealt with it themselves. But there was still the added frustration, agitation, and irritation, that social-emotional part, even when they chose not to have a difficult conversation with their colleague about the workslop. Workslop, my friends, let's not do it. Okay?

So let's talk about why it happens. Okay? And again, I want you to think about this like I do, and that is through a UX lens. We're going to frame this up as a design problem, because people are not taking the right steps. They're not fully working with AI as a collaborator. So let's look at what gets skipped. The first thing that gets skipped is empathy. People who are committing workslop are failing to understand who the output is for, what problem they're solving, and what the impact is going to be when that workslop lands on somebody else's desk. Here's an example: just asking AI to write a summary of an article without offering any context for who needs the summary. What role do they play? How are they going to use that summary? What else is happening in the bigger picture? So again, you need to have some empathy for whoever is receiving this. And you need to use that empathy to design a good prompt so that AI knows how to summarize the article.

Okay, that's a super tip-of-the-iceberg type example. All right, so another thing we need to do to avoid workslop, and this is also part of why it happens, is that people do not articulate a clear problem statement. They don't write a good prompt. In other words, they're not defining for AI what they need. When you give a vague prompt, you're going to get something vague back, which is why this episode is titled Garbage In, Garbage Out, Now with AI. When you write a garbage prompt, you're going to get garbage back.

Let me give the antithetical example. The other day I was coworking in my office here with one of my speaker colleagues. This is a bit of an aside, but speaking is a lonely business. When we're not up on stage in front of a whole bunch of people or podcasting in our offices all by ourselves, we are alone. And we're alone working on content, working on back-office materials like billing and invoicing, all the things that happen behind the scenes in the business. It can be kind of a lonely business. So I'm in a regular practice of coworking, sometimes with local speaker friends who come to my office, and sometimes with speaker friends in faraway places, where we cowork together on Zoom.

And so one day I had one of my speaker colleagues here in my office coworking. What we typically do when we cowork is we talk about, hey, what are you working on today? Okay, you're working on these things, I'm working on these things, anything you need help with? And then we button it up. We don't talk to each other, we just get to work. There's a certain sense of accountability and a certain sense of camaraderie, knowing that there's somebody right here in the room, or in the Zoom room, who is also working just like I am, probably on some things that are similar. Then we break periodically to say, hey, anything you want help with? We'll take a morning break, we'll take a lunch break, where we chit-chat about social stuff, our personal lives, the things coworkers talk to each other about, and then we'll answer any questions.

So I had a particular thing that I needed some help with, and I said, hey, how would you approach a situation like this? I pitched what I needed help with, and my speaker friend said, oh, well, I'd ask AI. And then she followed up with a little bit of how she thought the prompt would sound. And I was like, oh, great, thank you. Then I went to work on my prompt, and I'm typing away and typing away and typing away, and I typed a good solid two paragraphs of a prompt and then hit enter. Now, many of you probably know that I refer to ChatGPT as Chad, my AI partner. So Chad offers some response back, and I was delighted at the response that I got. And so I told my speaker friend, hey, that was great. And then she said, wait, I just heard you typing for a long time.

Was that all your prompt? And I said, oh, yeah, yeah, I write long prompts. The more context I can give Chad, the better the output Chad gives me. And then I shared the output that I got from Chad after the two-paragraph prompt I wrote. And she was like, that is amazing. So the idea here is that we really do need to define what we want when we are collaborating with AI. A big problem we come across when we're getting workslop from our colleagues is that they're not taking the time to define what they need.

Okay. The third thing that is partially to blame for this is that people are not ideating when they are working with AI. What I mean by that is that many times people will just take the first thing that AI gives them and run with it. Now, you've probably heard me refer to AI as a college intern who's really eager and really fast, but often makes mistakes and needs a lot of coaching and context. So I'm almost never going to take the first idea that AI gives me and run with it. In fact, I rarely will ask for a single idea. I might say, hey, I'm thinking about this, give me three different ways I might approach this, or give me five different ways I might approach this. Or if I need help with a title on something, whether it's a chapter in a book I'm writing or a podcast episode title, I'll say, give me 10 titles in three different tones. And I might say, I want a serious tone, I want something that's playful, and I want something that's scholarly. And it doesn't always just give me 10. Sometimes it'll give me 15, five in each of those three tones, or it might even give me 10 in each of those three categories. So again, we want to ideate with our AI partner. We don't want to just take the first option that's given. And the next thing is that people will often think of whatever is generated in that first effort as the final product. But we really have to iterate and prototype it. By that I mean we have to play it out a little bit, get it a little bit more robust, and then literally field test it.

We need to run it up the flagpole. We need to read it as if we are the consumer of it, as opposed to the coauthor of it. So we really need to do some testing along the way. When we're skipping that testing and empathy, we are not using AI, we are abusing AI. All right, now let's look at some of the costs of workslop. The biggest, biggest, biggest part of the cost of workslop is that your reputation gets damaged. Your credibility gets damaged. When your colleagues read something from you, whether it is a document, a report, even an email, that is clearly workslop, they're going to assume that you are the one who is careless, not AI. So let me double down on that. AI does not get blamed for workslop.

In the same research study that I was talking about from Stanford and BetterUp Labs, the people who were reporting back on when they received workslop were not blaming AI. They were blaming their colleagues. And they were thinking of their colleagues as lazy and sloppy in their work. That results in an erosion of trust. Not an erosion of trust in AI, but an erosion of trust in the colleague who sent you the workslop. So it makes teams suspicious of one another when they're getting that sloppy work that's clearly got some AI behind it. We also think of it as cognitive laziness. We think of our colleagues as being lazy, rushed, and not taking the time to do work effectively.

And again, it also starts to erode our own sense of empathy, because we get really judgy when we get workslop. So here's a place where we can take a beat and slow ourselves down. When we've got output that misses the tone, misses the nuance, or doesn't have sufficient context, the work feels robotic. We need to make sure we're not being robotic and lacking empathy ourselves when we reach back out to the colleague who sent us the workslop. In UX terms, workslop fails because it's designed by the system for the system, not designed by a human for a user. Now, the study that I mentioned earlier also puts a financial cost on the rework that comes along with workslop, because again, when you get workslop in your inbox, you probably need to do something with it, whether that's have a difficult conversation and push it back to the person who sent it to you, or fix it yourself. They estimate that it costs an organization $186 per person per month.

The way they came up with that number is that they looked at the average salary of the people who reported receiving workslop, those 40% of respondents. Since they knew the folks' general salary numbers, they were able to calculate a dollar value for the amount of time spent handling and dealing with workslop. That came out to $186 per person per month, which I quite frankly think is quite low. They then calculated that if you are in an organization that, say, has 10,000 people, that yields over $9 million per year in lost productivity. $9 million per year. So there are some very real costs in addition to the reputation cost of workslop.

All right, so now let's turn the tables and look at what you can do so that you do not get accused of workslop. What are some of the best practices you can follow to make sure your work is not perceived as workslop? I'm going to give you five different things you can do. The first thing is to start with empathy. Before you even go to write your prompt, I want you to think clearly and intentionally about who the audience for this work is, whether that's an email, a slide deck, a report, or whatever it is you're creating with the help of AI. So start with empathy, and before writing that prompt, clarify your audience and your goal for yourself. Then you're going to write a prompt that includes that. So then we're going to go on to step two.

The second thing that you can do to make sure that your work is not workslop, after you have given it a think and involved some empathy: now you're going to design your prompt like a prototype. Okay? When prototyping, you start with something, you see what kind of reaction you get, and then you iterate and iterate and iterate some more. So you're going to write that prompt with lots of context. Imagine me banging out two paragraphs of context for my prompt. Use generous amounts of context. And then, whatever results you get back, you're going to ask, where did AI miss the mark? Where did it get it right? Then you want to prompt back and give some feedback to AI. You could say, hey, I think you were off on the problem area, but the downstream results were similar to what I was looking for.

You're going to give some feedback: this part you got right, this part you didn't get right. Or if it shared some sources with you, some research, you want to iterate and say, where did you get that research? Can you supply me with the actual citations? And then you're going to go cross-check those citations to make sure that they really exist, because we all know that AI has been known to hallucinate things or come up with information that's just flat-out incorrect. All right? The third thing that you can do to make sure that your work is not considered workslop is to use feedback loops. Here's what I mean by feedback loops. Once you have something that you're reasonably pleased with in terms of your collaboration with AI, you're now going to ask, how could this be improved? Or how could this sound more like my tone? And I have my AI trained to know my voice: when I'm working on material related to speaking, when I'm working on material related to podcasting, and when I'm working in my written voice, if I'm writing a book chapter or an article, that sort of thing. So I might then make sure it's using the voice that I want it to, so I can ask it.

Could you rewrite this in my podcaster voice? Or could you rewrite this in my talking-to-a-speaking-colleague voice, say, if I'm writing an email to somebody? I want to make sure that the tone really matches who I am in that situation, because my tone is going to vary if I'm talking to a live audience or talking on my podcast, and it's certainly going to be very different when I'm writing, because we consume information differently when we're doing any of those activities: being in an audience, listening to a podcast, reading a book. So I need to make sure that my tone matches the audience's expectation. I'm going back to that whole empathy idea, but you want to use those feedback loops, asking AI, how could this be improved for this particular situation or tone? I also sometimes will ask it for a contrarian view. I'll ask it to think differently about the work. I'll use a negative feedback loop and ask it to poke holes in what it did for me.

Okay. And I encourage you to do that, too. Let's say you were writing a report or an article or something. You might ask, for somebody who had a different opinion, what would be the top three things they would disagree with in this article? Again, that gives you much better insight into how your work might be received. All right, fourth idea: refine through iteration. I've kind of been talking about iteration in these first three steps, but again, once you get something that you're feeling pretty good about, you still want to refine. Now, you might not refine the whole thing.

You might say, give me four different ways that I could start this report, or six different calls to action that I could end with, and ask it for more ideas at various places along the way. Because again, the more you can iterate with AI, the better the outcome is going to be. And then finally, step five is to review it like you were doing a usability test. Now, if you're not familiar with usability testing, it takes software, or whatever the product is, before it's been released to the market, and puts it in front of a small set of actual users with some use cases they can try out, to see if it works, to see if it works well, whether they get any errors, and whether there's anything frustrating or any friction slowing them down. So here's how you can do that. Now, this is a loose usability test. You're going to pretend to be the end user. Pretend that you're getting this in the email as the attachment, or read this article, or look at these slides as if you were in the audience. And you're going to ask yourself, does this flow well? Does it make sense? Does it feel human? Does it feel like AI generated it, or does it feel like there's a human being on the other end of this? And ultimately, you want to remember that your personal brand is being built on the back of whatever work you're producing with AI. So build a brand for quality.

Use AI to enhance your credibility, not erode it. Because AI can do so much for you in terms of spotting errors, catching mistakes, and collaborating with you on your thinking, to make whatever it is you're producing a better product. So we are going to get rid of the workslop, and instead we are going to collaborate with AI. Because workslop is not an AI problem. It's a design problem, and it's a sloppy-human problem. When we skip the principles that make for great design, like empathy and iteration and feedback loops, we're going to create output that nobody wants to use. So this week, as you use AI throughout your tasks, and really beyond, but especially this week, own your AI habits and ask yourself: am I creating prompts with empathy? Am I reviewing the output and treating it as a prototype? Am I iterating? Am I working to use AI to elevate what I'm doing, as opposed to expedite what I'm doing? So my call to action for you today, my friends, is this: before you hit send on whatever it is that you had AI help you with, whether that's an email, a report, or some slides, stop and ask yourself, would I put my name on this if AI weren't involved? And if the answer is no, you are not done, my friends. All right.

I hope that this episode gave you some good insights on what workslop is, what it's costing our organizations, and how you can prevent your own work from being perceived as workslop. All right, my friends, until next time, be well.

Download Full Episode Transcript

 


CHOOSE YOUR FAVORITE WAY TO LISTEN TO THIS EPISODE:


 

🎙 Listen on Apple Podcasts
🎙 Listen on Spotify