
Working Conversations Episode 151:

7 Things to NEVER Do in AI

Have you thought about pushing the limits of generative AI tools like ChatGPT to have them help you with everything?

Before you do, consider this: What if relying on AI in certain situations could lead you down a dodgy path?

While Generative AI holds immense potential to revolutionize various aspects of our lives, it's essential to recognize its limitations. Yes, you read it right. Artificial Intelligence has its limitations.

In this episode, I navigate through the murky waters of where not to rely on ChatGPT and similar tools like Google’s Bard. It's imperative to pause and assess where AI may not be the best solution.

We’ll do a reality check, where I shed light on critical areas where placing unwavering trust in AI could lead to undesirable outcomes. From job applications to seeking legal or medical advice, learn why certain scenarios are better left untouched by AI.

Tune in for practical tips and actionable advice on how to avoid common pitfalls and make informed decisions in an AI-driven world.

Don't miss out on this opportunity to enhance your understanding of AI's role in our lives and safeguard yourself from potential mishaps.

Listen and catch the full episode here or wherever you listen to podcasts. You can also watch it and replay it on my YouTube channel, JanelAndersonPhD. If you've found this episode helpful, spread the word! Share this podcast episode with a friend who you think needs to hear it. Don't forget to leave a review and a 5-star rating; it would mean the world to me.

LINKS MENTIONED IN THIS EPISODE:

Episode 150: 7 Things To Try in AI

EPISODE TRANSCRIPT

Hello and welcome to another episode of the Working Conversations podcast, where we talk all things leadership, business communication, and trends in organizational life. I'm your host, Dr. Janel Anderson.

Buckle up for a reality check in today's episode as we tackle the flip side of the generative AI coin. While this tech marvel has everyone buzzing with excitement, let's not ignore the cautionary tales, and discuss what you absolutely should not use generative AI for.

Now I know it's tempting to think of ChatGPT and Bard and all the other tools as able to do all the things, and to do it all, but hold your horses. In this episode, we're breaking down the boundaries and shedding light on situations where relying on AI might lead you astray, from financial matters to job applications.

We will navigate the no-go zones with a critical eye. This isn't about raining on the parade of AI, but rather a thoughtful exploration of where human intuition and judgment should take the lead. So join me for a reality check, because steering clear of AI mishaps is just as crucial as embracing the innovation. Let's dive in.

Hopefully you caught last week's episode where I shared seven fun and innovative things that you can use generative AI for, not just at work but in your whole life. I also laid the groundwork for what this increasingly popular technology is and, generally, how it works. If you haven't caught that episode, you're definitely going to want to start there.

So I've linked that episode up in the show notes for this episode. You can find the show notes for this episode at janelanderson.com/151, and you can find the show notes for the previous episode at janelanderson.com/150.

Again, today we are diving into what not to do with generative AI. It can be really easy to get carried away with all the fun things that are available to you, and even your own curiosity might lead you down a path to ask generative AI to do different things or ask different questions of it. So today I want to keep you in the know and make you aware of seven things that you would never want to use generative AI for.

All right, so the first of these seven is job applications. Now this is ironic, since artificial intelligence has been used against job seekers for years. I've talked about this before on the podcast, but I'll bring you up to speed again now. Ever since job applications went online and you had to fill out a form to apply for a job, artificial intelligence has been used to weed out certain applicants. So if you didn't check the box that said you were proficient in Spanish, you were immediately eliminated if that job required proficiency in Spanish. If it required a certain number of years of experience and you didn't report having as many years as required, or if you reported having more years than they were looking for, you were completely eliminated before a human ever set eyes on your application.

So artificial intelligence has been used against job seekers for years to make the process of hiring more efficient and, hopefully, more effective. Well, that remains to be seen. I know plenty of people who've been eliminated by online job application processes when they shouldn't have been, but that's a whole other episode. Anyway, these days employers are noticing if you're using generative AI in your application process. It might be very tempting, I get that, to use generative AI to help write your cover letter or your resume. But remember, generative AI is drawing on a dataset that somebody else who's competing against you for that same job could use, and your resumes could come out looking very similar, or your cover letters might have almost the exact same language in them, because you're both applying to the same job with the same position description.

Now, case in point: the other day I was with a client, about to do a keynote for them. Everything was all set up and we were waiting for people to arrive, which is kind of rare, because usually we're running around doing a bunch of last-minute things, but this time we had plenty of time to visit as we waited for the audience. As we chatted, she told me she's a hiring manager in her organization and shared a recent experience. They had posted a job and had many, many qualified applicants, so they did a further round of screening with all the people who passed the first round. They sent them some additional screening questions via email. As they were reviewing the answers that came back, they found three that were almost verbatim exactly the same. So she and her team got curious and put the questions into ChatGPT, and ChatGPT spit out a response that was almost exactly the same as those three identical answers. Of course, they immediately disqualified those three applicants, and they also kept note of their names. Those three people will never be hired into that organization.

So not only could generative AI eliminate you from being considered for a job, it could also get you blacklisted from an entire organization. You have to be really careful with ChatGPT and other generative AI as it relates to job applications. Do not let generative AI fill out the application for you. That is your first thing. Do not use it in job applications.

Your second thing to never use generative AI for is writing your self-evaluation or peer evaluations at work as part of the performance review process. Oftentimes, during the annual performance review process, you are asked to write an evaluation of your work over the past year and how it stacked up against the stated goals you were working toward that year. If you were to even attempt that with generative AI, you would have to type in so much data manually that you might as well write it yourself. Besides, it's not going to sound like your voice, unless of course you have trained ChatGPT or whichever tool you're using to write in your voice. But that takes an awfully long time.

Now likewise, you may be asked to write evaluations of coworkers, project stakeholders, maybe even your manager whether that's part of the annual review process or maybe your organization is doing a 360 review where you're reviewing other people in the organization and getting a comprehensive look at how you're perceived. Do not use generative AI to write those.

Again, for it to work well, you would have to invest so much time in getting the generative AI tool to write and speak in your voice that it's just not worth it. It's absolutely not worth it. And if you get caught, the consequences, the stakes, are simply too high. So do not use generative AI to write your self-evaluation or any sort of peer evaluations at work. That is number two.

Number three, do not use generative AI for financial advice. Now, this may seem like it goes without saying, but here's a surprising statistic: a recent survey conducted by CNBC found that 37% of US adults said they were interested in using AI tools such as ChatGPT to help them manage their money. 37% said they were interested in that. I find that very disconcerting. Now, perhaps if you're using generative AI to come up with a counterpoint to what your financial advisor tells you, or if you're looking to verify information from your financial advisor, then generative AI could be a place to start. Also, if you're looking for very general information, that's okay.

For example, if you're a young professional interested in learning about a variety of different types of investment strategies or tools, ChatGPT could do a great job of educating you on what's available and what's out there, and what the upsides and downsides of some of those tools and instruments are. But asking ChatGPT for specific advice is not a sound strategy. Again, it can generally educate you on what's available in terms of investment strategies, but do not rely on it to give you specific advice. There are a number of reasons for this. As you've already heard me say, accuracy is not always the name of the game with some of these tools. Lots and lots of data is being pulled into whatever the generative AI tool might tell you, but it does not know your specific context. It doesn't know your risk tolerance. It doesn't know other nuances about your situation that may have an impact on the investment strategies that truly are best for you. So number three, do not use generative AI for financial advice.

Number four, do not use generative AI for medical advice. Just like asking Dr. Google can give you poor advice or make you think you're dying of leprosy just because you have a swollen big toe, generative AI is only as good as the data it's accessing.

Now, the medical community can certainly use artificial intelligence to cut down the time it takes to make a diagnosis. This is especially true with medical imaging. What happens with medical imaging is that your X-ray or MRI is compared against a database of medical images from people of a similar age, with possible diagnoses, and so on. That artificial intelligence very rapidly looks for patterns, or the absence of patterns, in your MRI or X-ray, which cuts down the time it takes for a radiologist, for example, to identify an anomaly. They can spot it, zero in on it, and then make their expert diagnosis accordingly. So it helps them speed up the process.

But as it relates to the average person using ChatGPT or similar tools, the accuracy just isn't there. As a case in point, and as a test of this, a recent University of Florida College of Medicine study had researchers ask ChatGPT 13 common questions related to urology that urologists often get asked by their patients. They asked ChatGPT to answer each of these 13 questions three different times, since ChatGPT is known to give slightly different answers depending on which information stored in its dataset it happens to access at that given moment. Once ChatGPT's answers to those 13 questions, asked three times over, were collected, five expert urologists independently evaluated the accuracy of the chatbot's answers.

The outcomes of the study indicated that ChatGPT was accurate and appropriate only 60% of the time. That means it was wrong 40% of the time. I don't know about you, but I want medical information that is more like 99 to 100% accurate, not 60% accurate. So do not use generative AI for medical advice. The accuracy just is not there.

Number five: do not use generative AI for legal advice, for many of the same reasons as medical advice. But legal advice is different in that artificial intelligence doesn't fully understand the nuance of legal arguments and legal language in the same way it might understand everyday language. In many cases, it doesn't even have access to the most recent laws and cases, because the datasets it's accessing are bound in time. Now, that said, if you work inside a legal research organization, such as a law firm or a company that provides legal research, like Bloomberg, LexisNexis, or Thomson Reuters and the like, you may have built-in artificial intelligence tools that have accurate information and accurate datasets behind them, and that have been trained in the nuances of how language is used differently in the legal context than in everyday situations. But that is qualitatively different than using OpenAI's ChatGPT or Google's Bard to give you legal advice.

 

In those types of cases, in your specific legal situations, you're absolutely going to want to consult a legal professional, an attorney, whomever, but a human being. That person may have AI tools at their disposal, but they are not going to use those AI tools as the end-all, be-all of the advice they give you. They are going to layer on their own rationality, their own intelligence, their own understanding of case law, or whatever it is, in giving you their legal advice. So number five, do not use generative AI for legal advice.

Number six, do not use generative AI for anything that needs accuracy. The large language models, or LLMs, behind generative AI have been known to generate fictional information. Commonly this is referred to as the generative AI having a hallucination, because it's something that really isn't there. This can include any kind of made-up facts, including publications, citations, dates, all kinds of things.

Now, where the real danger comes in is that the content generative AI produces is usually in the ballpark of accuracy, but not actually, exactly accurate. As a fun experiment, I have used multiple different generative AI tools at different times over the past couple of years to write a 500-word biography of me. Now, I need a bio on a very regular basis. My clients often ask for a bio to go in their conference proceedings when I'm speaking at their conference, or up on their website where my keynote for their event is being promoted. So I regularly update my bio, and I have versions of it in all different lengths because my clients have different needs.

Now, when I asked ChatGPT or Bard to write my bio, they have misinterpreted my dissertation research. They have gotten my book titles wrong. They have attributed books to me that I did not write. They have included co-authors on books that I have written that did not have co-authors. Oh my gosh, and on and on it goes with all of the silliness they come up with. But here's the thing: they don't have me listed as a rocket scientist or a brain surgeon or a pipe fitter. They know enough about me to write a very convincing bio that is not accurate.

Now, as a public figure, there's a lot of information out there on the internet about me. That might not be the case for you, so you might not want to go asking it to write your bio; there might simply not be enough information to pull from. For me there is, but again, many generative AI systems are trained on data that's incomplete or bound to a certain time period. If that's the case, there may be outdated information or simply a lack of current information. So if I had a book that came out two or three months ago, and it was really important to include in my bio, the generative AI tool might miss it entirely. So if you are using generative AI for anything that needs accuracy, consider whatever it generates for you a first draft, and then, my friends, fact check, fact check, fact check. I would never publish any of the bios it has written for me, because sometimes I wish I had the bestseller they claimed I had, or whatever, but it's just not there. The accuracy isn't there. So the number six thing to never use generative AI for is anything that needs accuracy.

Now, my last and seventh thing to never use generative AI for is your final draft of anything. If you have used generative AI to help springboard your thinking or write a draft of something, take it from there. Make it your own; really make it sound like you. And I don't care if you're a college student writing a research paper, an entrepreneur writing a blog post, or a senior leader in a corporation writing a town hall message to the staff who report to you. If you need to, and if you want to, use generative AI as a springboard to come up with some creative ideas or some new turns of phrase or something like that, by all means, but do not use it as your final draft. There are a number of reasons for this.

The first is that it's easy for you to be caught in the web of generative AI, because somebody could suspect it, put the same prompt back into a search engine or back into generative AI, and come up with something very, very similar. The second reason is that in your interpersonal communication, whether that's a cover letter, a speech to your staff, or some talking points for speaking with reporters, you want to make sure it really is your unique spin on things, because your voice matters. And we don't want you to be replaced by some amalgamation of other similar opinions out there. So let it truly be you.

Another way this might happen: let's say you needed some help coming up with something to say in a letter or a note to a friend who was going through a tough time, and you just felt at a loss for words. You knew you wanted to be empathetic, you knew you wanted to be understanding, and so forth, but maybe you just weren't in that mood, or that's not your MO. So you asked ChatGPT for some help. Well, again, use that as a draft and then take it from there on your own, because how awful would it be if the person who received that heartfelt message from you then discovered it was written by ChatGPT? That would really destroy trust and, you know, might destroy an entire relationship.
So, do not use whatever generative AI creates for you as your final draft of anything, whether that be a cover letter, a note to a friend, or talking points for a speech to your staff. Do not use generative AI for your final draft of anything, my friends. All right, these are some great tools; generative AI can do some amazing things for you. It can save you time. It can help you generate ideas. It is great for so many things, but I hope these seven things that you should not use generative AI for have really landed for you today, and I hope this helps you become more knowledgeable and more responsible about how amazing generative AI can be, and what to definitely not use it for.

Because remember, the future of work is not only about the technology; it's about the values we uphold, the communities we build, and the sustainable growth we strive for.

We need to keep exploring, keep innovating and keep envisioning the remarkable possibilities that lie ahead.

As always, stay curious, stay informed. And stay ahead of the curve.

Tune in next week for another insightful exploration of the trends shaping our professional world. Until then, my friends be well.

Download Full Episode Transcript

 


CHOOSE YOUR FAVORITE WAY TO LISTEN TO THIS EPISODE:


 

🎙 Listen on Apple Podcasts
🎙 Listen on Google Podcasts