In the rush to get the most from AI tools, prompt engineering—the practice of writing clear, structured inputs that guide an AI tool’s output—has taken center stage. But for software engineers, the skill isn’t new. We’ve been doing a version of it for decades, just under a different name. The challenges we face when writing AI prompts are the same ones software teams have been grappling with for generations. Talking about prompt engineering today is really just continuing a much older conversation about how developers spell out what they need built, under what conditions, with what assumptions, and how to communicate that to the team.

The software crisis was the name given to this problem starting in the late 1960s, most notably at the 1968 NATO Software Engineering Conference, where the term “software engineering” was introduced. The crisis referred to the industry-wide experience that software projects ran over budget, shipped late, and often failed to deliver what users actually needed.

A common misconception held that these failures were due to programmers lacking technical skill or teams needing more technical training. But the panels at that conference focused on what they saw as the real root cause: Teams and their stakeholders had trouble understanding the problems they were solving and what they actually needed to build; communicating those needs and ideas clearly among themselves; and ensuring the delivered system matched that intent. It was fundamentally a human communication problem.

Participants at the conference captured this precisely. Dr. Edward E. David Jr. from Bell Labs noted that there is often no way even to specify, in a logically tight way, what the software is supposed to do. Douglas Ross from MIT pointed out the pitfall of specifying what you are going to do and then doing it, as if that alone solved the problem. Prof. W.L. van der Poel summed up the challenge of incomplete specifications: Most problems simply aren’t defined well enough at the start, so you don’t have the information you need to build the right solution.

These are all problems that cause teams to misunderstand the software they’re creating before any code is written. And they should all sound familiar to developers today who work with AI to generate code.

Much of the problem boils down to what I’ve often called the classic “do what I meant, not what I said” problem. Machines are literal—and people on teams often are too. Our intentions are rarely fully spelled out, and getting everyone aligned on what the software is supposed to do has always required deliberate, often difficult work.

Fred Brooks wrote about this in his widely influential “No Silver Bullet” essay. He argued there would never be a single magic process or tool that would make software development easy. Throughout the history of software engineering, teams have been tempted to look for that silver bullet that would make the hard parts of understanding and communication go away. It shouldn’t be surprising that the same problems that plagued software teams for decades would reappear when those teams started to use AI tools.

By the end of the 1970s, these problems were being reframed in terms of quality. Philip Crosby, Joseph M. Juran, and W. Edwards Deming, three people who had enormous influence on the field of quality engineering, each had influential takes on why so many products didn’t do the jobs they were supposed to do, and their ideas apply especially well to software. Crosby argued quality was fundamentally conformance to requirements—if you couldn’t define what you needed clearly, you couldn’t ensure it would be delivered. Juran talked about fitness for use—software needed to solve the user’s real problem in its real context, not just pass some checklist. Deming pushed even further, emphasizing that defects weren’t just technical mistakes but symptoms of broken systems, especially poor communication and lack of shared understanding. He focused on the human side of engineering: creating processes that help people learn, communicate, and improve together.


Through the 1980s, these insights from the quality movement were being applied to software development, and started to crystallize into a distinct discipline called requirements engineering, focused on identifying, analyzing, documenting, and managing the needs of stakeholders for a product or system. It emerged as its own field, complete with conferences, methodologies, and professional practices. The IEEE Computer Society formalized this with its first International Symposium on Requirements Engineering in 1993, marking its recognition as a core area of software engineering.

The 1990s became a heyday for requirements work, with organizations investing heavily in formal processes and templates, believing that better documentation formats would ensure better software. Standards like IEEE 830 codified the structure of software requirements specifications, and process models such as the Software Development Life Cycle and CMM/CMMI emphasized rigorous documentation and repeatable practices. Many organizations poured effort into designing detailed templates and forms, hoping that filling them out correctly would guarantee the right system. In practice, those templates were useful for consistency and compliance, but they didn’t eliminate the hard part: making sure what was in one person’s head matched what was in everyone else’s.

While the 1990s focused on formal documentation, the Agile movement of the 2000s shifted toward a more lightweight, conversational approach. User stories emerged as a deliberate counterpoint to heavyweight specifications—short, simple descriptions of functionality told from the user’s perspective, designed to be easy to write and easy to understand. Instead of trying to capture every detail upfront, user stories served as placeholders for conversations between developers and stakeholders. The practice was deliberately simple, based on the idea that shared understanding comes from dialogue, not documentation, and that requirements evolve through iteration and working software rather than being fixed at the project’s start.

All of this reinforced requirements engineering as a legitimate area of software engineering practice and a real career path with its own set of skills. There is now broad agreement that requirements engineering is a vital area of software engineering focused on surfacing assumptions, clarifying goals, and ensuring everyone involved has the same understanding of what needs to be built.

Prompt Engineering Is Requirements Engineering

Prompt engineering and requirements engineering are literally the same skill—using clarity, context, and intentionality to communicate what you mean and to ensure what gets built matches what you actually need.

User stories were an evolution from traditional formal specifications: a simpler, more flexible approach to requirements but with the same goal of making sure everyone understood the intent. They gained wide acceptance across the industry because they helped teams recognize that requirements are about creating a shared understanding of the project. User stories gave teams a lightweight way to capture intent and then refine it through conversation, iteration, and working software.

Prompt engineering plays the exact same role. The prompt is our lightweight placeholder for a conversation with the AI. We still refine it through iteration, adding context, clarifying intent, and checking the output against what we actually meant. But it’s the full conversation with the AI and its context that matters; the individual prompts are just a means to communicate the intent and context. Just like Agile shifted requirements from static specs to living conversations, prompt engineering shifts our interaction with AI from single-shot commands to an iterative refinement process—though one where we have to infer what’s missing from the output rather than having the AI ask us clarifying questions.

User stories intentionally focused the engineering work back on people and what’s in their heads. Whether it’s a requirements document in Word or a user story in Jira, the most important thing isn’t the piece of paper, ticket, or document we wrote. The most important thing is that what’s in my head matches what’s in your head and matches what’s in the heads of everyone else involved. The piece of paper is just a convenient way to help us figure out whether or not we agree.


Prompt engineering demands the same outcome. Instead of working with teammates to align mental models, we’re communicating with an AI, but the goal hasn’t changed: producing a high-quality product. The basic principles of quality engineering laid out by Deming, Juran, and Crosby have direct parallels in prompt engineering:

Deming’s focus on systems and communication: Prompting failures can be traced to problems with the process, not the people. They typically stem from poor context and communication, not from “bad AI.”

Juran’s focus on fitness for use: When he framed quality as “fitness for use,” Juran meant that what we produce has to meet real needs—not just look plausible. A prompt is useless if the output doesn’t solve the real problem, and a prompt that doesn’t set the model up to produce output that’s fit for use invites hallucinations.

Crosby’s focus on conformance to requirements: Prompts must specify not just functional needs but also nonfunctional ones like maintainability and readability. If the context and framing aren’t clear, the AI will generate output that conforms to its training distribution rather than the real intent.

One of the clearest ways these quality principles show up in prompt engineering is through what’s now called context engineering—deciding what the model needs to see to generate something useful, which typically includes surrounding code, test inputs, expected outputs, design constraints, and other important project information. If you give the AI too little context, it fills in the blanks with what seems most likely based on its training data (which usually isn’t what you had in mind). If you give it too much, it can get buried in information and lose track of what you’re really asking for. That judgment call—what to include, what to leave out—has always been one of the deepest challenges at the heart of requirements work.
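To make that judgment concrete, here is a minimal sketch, in Python, of what deliberately assembled context might look like. The billing function, failing test, and constraints are entirely hypothetical, and how the prompt actually gets sent to a model is left out; the point is simply that the code being changed, the test that defines success, and the project’s constraints travel with the request instead of a bare one-line ask.

```python
# A sketch of context engineering: build the prompt from the pieces the model
# actually needs rather than a single vague sentence. Everything here is a
# hypothetical placeholder; sending the prompt to a model is out of scope.

FUNCTION_UNDER_CHANGE = '''
def apply_discount(order_total, customer):
    # Current behavior: flat 10% discount for every customer.
    return order_total * 0.9
'''

FAILING_TEST = '''
def test_no_discount_for_guest_checkout():
    assert apply_discount(100.0, Customer(tier="guest")) == 100.0
'''

CONSTRAINTS = [
    "Keep the public signature of apply_discount unchanged.",
    "No new dependencies; the project targets Python 3.10.",
    "Match the existing style: type hints and short docstrings.",
]

constraints_text = "\n".join(f"- {item}" for item in CONSTRAINTS)

prompt = f"""You are helping modify a small billing module.

Task: guest customers should receive no discount; all other customers keep
the existing 10% discount. Update apply_discount accordingly.

Current code:
{FUNCTION_UNDER_CHANGE}
Failing test that must pass:
{FAILING_TEST}
Constraints:
{constraints_text}
"""

print(prompt)
```

Every line in that prompt is a decision about what the model needs to see, which is exactly the judgment call described above.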

There’s another important parallel between requirements engineering and prompt engineering. Back in the 1990s, many organizations fell into what we might call the template trap—believing that the right standardized form or requirements template could guarantee a good outcome. Teams spent huge effort designing and filling out documents. But the real problem was never the format; it was whether the underlying intent was truly shared and understood.

Today, many companies fall into a similar trap with prompt libraries, or catalogs of prewritten prompts meant to standardize practice and remove the difficulty of writing prompts. Prompt libraries can be useful as references or starting points, but they don’t replace the core skill of framing the problem and ensuring shared understanding. Just like a perfect requirements template in the 1990s didn’t guarantee the right system, canned prompts today don’t guarantee the right code.

Decades later, the points Brooks made in his “No Silver Bullet” essay still hold. There’s no single template, library, or tool that can eliminate the essential complexity of understanding what needs to be built. Whether it’s requirements engineering in the 1990s or prompt engineering today, the hard part is always the same: building and maintaining a shared understanding of intent. Tools can help, but they don’t replace the discipline.

AI raises the stakes on this core communication problem. Unlike your teammates, the AI won’t push back or ask questions—it just generates something that looks plausible based on the prompt that it was given. That makes clear communication of requirements even more important.

The alignment of understanding that serves as the foundation of requirements engineering is even more important when we bring AI tools into the project, because AI doesn’t have judgment. It draws on a huge model, but it works effectively only when directed well. The AI needs the context that we provide in the form of code, documents, and other project information and artifacts, which means the only thing it knows about the project is what we tell it. That’s why it’s especially important to have ways to check and verify that what the AI “knows” really matches what we know.
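As a deliberately crude sketch of what such a check might look like, assuming a hypothetical set of requirements and a hypothetical restatement from the model: ask the AI to restate the requirements in its own words before it writes any code, then compare that restatement against the points you care about. In practice the comparison is a person reading, not a keyword scan, but the shape of the check is the same.

```python
# A sketch of verifying shared understanding: compare the model's restatement
# of the requirements against the points we care about. The requirements,
# keywords, and sample restatement are all hypothetical; obtaining the
# restatement from a model is out of scope.

REQUIREMENTS = {
    "guest checkout gets no discount": ["guest", "no discount"],
    "loyalty tier keeps the 10% discount": ["loyalty", "10%"],
    "public signature of apply_discount stays the same": ["signature"],
}

def missing_points(restatement: str) -> list[str]:
    """Return the requirements the restatement never mentions."""
    text = restatement.lower()
    return [
        point
        for point, keywords in REQUIREMENTS.items()
        if not any(keyword in text for keyword in keywords)
    ]

model_restatement = (
    "I'll update apply_discount so guest customers get no discount, "
    "while other customers keep the current behavior."
)

for point in missing_points(model_restatement):
    print(f"Not mentioned by the model: {point}")
```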


The classic requirements engineering problems—especially the poor communication and lack of shared understanding that Deming warned about and that requirements engineers and Agile practitioners have spent decades trying to address—are compounded when we use AI. We’re still facing the same issues of communicating intent and specifying requirements clearly. But now those requirements aren’t just for the team to read; they’re used to establish the AI’s context. Small variations in problem framing can have a profound impact on what the AI produces. And as natural language increasingly replaces the structured, unambiguous syntax of code, we lose a critical guardrail that has traditionally protected software from failures of understanding.

The tools of requirements engineering help us make up for that missing guardrail. Agile’s iterative process of the developer understanding requirements, building working software, and continuously reviewing it with the product owner was a check that ensured misunderstandings were caught early. The more we eliminate that extra step of translation and understanding by having AI generate code directly from requirements, the more important it becomes for everyone involved—stakeholders and engineers alike—to have a truly shared understanding of what needs to be built.

When people on teams work together to build software, they spend a lot of time talking and asking questions to understand what they need to build. Working with an AI follows a different kind of feedback cycle—you don’t know it’s missing context until you see what it produces, and you often need to reverse engineer what it did to figure out what’s missing. But both types of interaction require the same fundamental skills around context and communication that requirements engineers have always practiced.

This shows up in practice in several ways:

Context and shared understanding are foundational. Good requirements help teams understand what behavior matters and how to know when it’s working—capturing both functional requirements (what to build) and nonfunctional requirements (how well it should work). The same distinction applies to prompting but with fewer chances to course-correct. If you leave out something critical, the AI doesn’t push back; it just responds with whatever seems plausible. Sometimes that output looks reasonable until you try to use it and realize the AI was solving a different problem.

Scoping takes real judgment. Developers who struggle to use AI for code typically fall into two extremes: providing too little context (a single sentence that produces something that looks right but fails in practice) or pasting in entire files and expecting the model to zoom in on the right method. Unless you explicitly call out what’s important—both functional and nonfunctional requirements—the model doesn’t know what matters.

Context drifts, and the model doesn’t know it’s drifted. With human teams, understanding shifts gradually through check-ins and conversations. With prompting, drift can happen in just a few exchanges. The model might still be generating fluent responses until it suggests a fix that makes no sense. That’s a signal that the context has drifted, and you need to reframe the conversation—perhaps by asking the model to explain the code or restate what it thinks it’s doing.

History keeps repeating itself: From binders full of scattered requirements to IEEE standards to user stories to today’s prompts, the discipline is the same. We succeed when we treat it as real engineering. Prompt engineering is the next step in the evolution of requirements engineering. It’s how we make sure we have a shared understanding between everyone on the project—including the AI—and it demands the same care, clarity, and deliberate communication we’ve always needed to avoid misunderstandings and build the right thing.
