AI is everywhere: we’re in the middle of a technology shift that’s as big as (and possibly bigger than) the arrival of the web in the 1990s. Even though ChatGPT appeared almost two years ago, we still feel unprepared: we read that AI will change every job, but we don’t know what that means or how to prepare.

Here are a few ideas about preparing for that shift. First, understand what AI can and can’t do, and in particular, understand what you can do better than AI. It’s frequently said that AI won’t take your job, but people who don’t use AI will lose their jobs to people who do. That’s true as far as it goes (though it has a “blame the victim” flavor), but the real truth is that people who can’t add value to what AI can do are the ones in danger, whether they use AI or not. If you just reproduce AI’s results, you’re very replaceable.

How can you partner with AI to deliver better results than either you or AI could on your own? AI isn’t magic. It isn’t some superhuman intelligence, despite the pronouncements of a few billionaires who have a vested interest in convincing you to give up and let AI do everything—or to crawl into a shell because you’re scared of what AI can do. So, here are a few basic ideas about how you can be better than AI.

First, realize that AI is best used as an assistant. It can give you a quick first draft of a report, and you can probably improve that report, even if writing isn’t one of your strengths; having a starting point is invaluable. It’s very good at telling you how to approach learning something new. It’s also very good at summarizing books, podcasts, and videos, particularly if you start by asking it to make an outline and then use the outline to focus on the parts that are most important. Shortly after ChatGPT was released, someone said that it was like a very eager intern: it can do a lot of stuff fast, but not particularly well. ChatGPT (and the other AI services) have gotten better over the past two years, but that’s still true.


Second, realize that AI isn’t very good at being creative. It can tell you how to do something, but it’s not good at telling you what to do. It’s good at combining ideas that people have already had, but not good at breaking new ground.

So, beyond the abstract ideas above, what do you need to know to use AI effectively?

Using AI effectively is all about writing effective prompts. (“Prompts” implies chat and dialogue, but we’re using the term for any kind of interaction, even (especially) if you’re writing software that generates or modifies prompts.) Good prompts can be very long and detailed, and the more detailed, the better. An AI is not like a human assistant who will get bored if you have to spell out what you want in great detail; for an AI, spelling it out is a good idea.

You have to learn a few basic prompting techniques:

- “Explain it to me like I’m five years old”: A bit hackneyed and perhaps not as useful as it used to be. But it’s worth keeping in mind.
- Chain-of-thought prompts: Asking the AI to tell you what steps it will take to solve a problem, then, in a separate prompt, asking it to solve the problem (possibly working step by step). Chain-of-thought prompts often include some examples of problems, procedures, and solutions that are done correctly, giving the AI a model to emulate.
- Structured prompts: Tell the AI who it is (“you are an experienced salesperson”), what you want it to do (“who has been asked to write a tutorial on how to close deals”), and who you are (“for a new hire in the sales department”). These prompts can get very long and elaborate, but the extra work pays off in the quality of the response.
- Iterated prompts: Using AI isn’t about asking a question, getting an answer, and moving on. If the answer isn’t quite what you want, modify the prompt and make it better: tell the AI what’s wrong, give it more context, give it more information about what exactly you want. It won’t get impatient, and your first prompt is rarely your best.
- Include documents: You can include documents as part of a prompt. This is a good way to provide information the AI doesn’t already have, and it may reduce hallucination. It’s also a very simple version of RAG (retrieval-augmented generation), an important technique for building AI applications.
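In code, these techniques amount to building better prompt strings before they ever reach a model. Here’s a minimal sketch of the structured and iterated patterns; `build_structured_prompt` and `refine_prompt` are hypothetical helpers written for this example, not part of any AI library, and the role/task/audience strings just echo the salesperson example above.

```python
def build_structured_prompt(role: str, task: str, audience: str) -> str:
    """Combine the three parts of a structured prompt into one string:
    who the AI is, what it should do, and who it is writing for."""
    return (
        f"You are {role}. "
        f"You have been asked to {task}. "
        f"Your reader is {audience}."
    )


def refine_prompt(prompt: str, feedback: str) -> str:
    """Iterated prompting: fold what was wrong with the last answer
    back into the next prompt instead of starting over from scratch."""
    return f"{prompt}\nThe previous answer wasn't right: {feedback}"


prompt = build_structured_prompt(
    role="an experienced salesperson",
    task="write a tutorial on how to close deals",
    audience="a new hire in the sales department",
)
# Second round: the first answer dwelt on pricing, so say so explicitly.
prompt = refine_prompt(prompt, "focus on handling objections, not pricing")
print(prompt)
```

The point of keeping prompt construction in small functions like these is that the detail the AI needs (role, task, audience, accumulated feedback) stays explicit and easy to extend.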


You have to learn to check whatever output the AI gives you. We’ve all heard of “hallucination”: when an AI gives you output that has no basis in fact. I like to differentiate “hallucination” from simple errors (an incorrect result), but both happen, and the distinction is, at best, technical. It’s not clear what causes hallucination, though it’s more likely to occur in situations where the AI can’t come up with an “answer” to a question.
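The “include documents” technique from the prompting list is one practical way to reduce hallucination: supply the facts yourself and ask the model to answer from them. Here’s a toy sketch of that retrieval step; the word-overlap scoring and the `retrieve`/`grounded_prompt` helpers are illustrative inventions (a real RAG system would use embedding-based vector search instead).

```python
import re


def _words(text: str) -> set[str]:
    """Lowercase a text and strip punctuation, returning its word set."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def retrieve(query: str, documents: list[str]) -> str:
    """Pick the document sharing the most words with the query --
    a toy stand-in for the vector search a real RAG system would use."""
    query_words = _words(query)
    return max(documents, key=lambda d: len(query_words & _words(d)))


def grounded_prompt(query: str, documents: list[str]) -> str:
    """Prepend the retrieved document so the model answers from it
    rather than from its (possibly hallucinated) memory."""
    return (
        "Answer using only the document below.\n"
        f"Document: {retrieve(query, documents)}\n"
        f"Question: {query}"
    )


docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Shipping is free on orders over $50.",
]
print(grounded_prompt("What is the refund policy?", docs))
```

Even this naive version shows the shape of the technique: the model is told to ground its answer in text you chose, which is exactly the information it can’t hallucinate.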

Checking an AI’s response is an important discipline that hasn’t been widely discussed. It’s often called “critical thinking,” but that’s not quite right. Critical thinking is about investigating the underpinnings of ideas: the assumptions and preconceived notions behind them. Checking an AI is more like being a fact-checker for someone writing an important article:

- Can every fact be traced back to a documentable source?
- Is every reference correct and—even more important—does it exist?
- Is the AI’s output too vague or general to be useful?
- Does the AI’s output capture the nuance that you would expect from a human author?

Checking the AI is a strenuous test of your own knowledge. AI might be able to help: Google’s Gemini has an option for checking its output; it will highlight portions of the output and give links that support, refute, or provide neutral information about the facts it cites. ChatGPT can be induced to do something similar. But it’s important not to rely on an AI’s ability to check itself. All AIs can make subtle errors that are hard to detect, and all of them can and will make mistakes checking their own output. This is laborious work, but it’s very important to keep a human in the loop. If you trust AI too much, it will eventually be wrong at the most embarrassing and dangerous time possible.


You have to learn what information you should and shouldn’t give to an AI. How will the AI use the prompts you submit? Most AIs will use that information to train future versions of the model. For most conversations, that’s OK, but be careful about personal or confidential information. Your employer may have a policy on what can and can’t be sent to an AI or on which models have been approved for company use. Some of the models let you control whether they will use your data for training; make sure you know what the options are and that they’re set correctly.

That’s a start at what you need to learn to use AI effectively. There’s a lot more detail—it’s worth taking a few courses, such as our AI Academy—but this advice will get you started. More than anything else, use AI as an assistant, not as a crutch. Let AI help you be creative, but make sure that it’s your creativity. Don’t just parrot what an AI told you. That’s how to succeed with AI.
