When prompts “kind of work” but don’t work twice, it’s usually not your wording—it’s your process.

This guide gives you a reusable workflow you can run in the same order every time for more consistent results, whether you're writing, planning, getting code help, or summarizing.

Think of it like a small assembly line: define, constrain, generate, check, refine.

Step 0: Set your definition of “done” (before you type a prompt)

The fastest way to improve prompting is to decide what a successful answer looks like in plain terms.

Write a one-sentence “done” statement you can compare outputs against.

  • Format: bullet list, table, JSON, email draft, outline, code snippet
  • Audience: beginner, teammate, customer, executive
  • Scope: quick answer vs deep dive
  • Constraints: word count, tone, tools you can/can’t use
  • Success test: “I can paste this into X” or “I can decide between A and B”

If you can’t describe “done,” the model can’t reliably hit it either.

Step 1: Provide the minimum context the model can’t guess

Models guess when context is missing, and guesses are where inconsistency comes from.

Give the few details that change the answer. Skip the life story.

  • Goal: what you’re trying to accomplish (not just the topic)
  • Inputs: text to rewrite, requirements, data, links, constraints
  • Environment: “web app,” “Excel,” “Python 3.11,” “no paid tools,” etc.
  • Example: one good example (or one “do not do this” example)

Rule of thumb: include anything you’d be annoyed to be asked in a follow-up.

Step 2: Lock the output shape (structure beats clever wording)

Most “bad” outputs are actually “unshaped” outputs.

Instead of asking for a “great prompt” or a “good plan,” specify the container.

  • Headings: “Return 5 sections with H2 titles and 2 bullets each.”
  • Fields: “Return JSON with keys: problem, assumptions, steps, risks.”
  • Limits: “Max 120 words per section. No filler.”
  • Ordering: “Sort by impact, then effort.”

A clear template reduces variation and makes iteration easier.
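Locked shapes are also checkable by machine. As a minimal sketch (the key names come from the JSON example above; the function name is mine), you can verify that a reply actually matches the container you asked for before using it:

```python
import json

# The keys we told the model to return, per the prompt above.
REQUIRED_KEYS = {"problem", "assumptions", "steps", "risks"}

def check_shape(reply: str) -> list[str]:
    """Return a list of shape violations in a model's JSON reply (empty = OK)."""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return ["reply is not valid JSON"]
    if not isinstance(data, dict):
        return ["reply is not a JSON object"]
    issues = []
    missing = REQUIRED_KEYS - data.keys()
    extra = data.keys() - REQUIRED_KEYS
    if missing:
        issues.append(f"missing keys: {sorted(missing)}")
    if extra:
        issues.append(f"unexpected keys: {sorted(extra)}")
    return issues
```

If the check fails, you can feed the violation list straight back to the model as your next prompt instead of rewriting from scratch.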

Step 3: Ask for options, then choose (don’t force a single-shot answer)

If you need quality, don’t request one answer. Request three, then pick.

Use this pattern:

  • Generate: “Give 3 approaches that meet the constraints.”
  • Label: “Name them A/B/C and describe tradeoffs.”
  • Choose: “Ask me 2 questions, then recommend one.”

This turns the model into a collaborator, not a slot machine.

Step 4: Run a quick “quality check” prompt (catch silent failures)

Before you use the output, have the model check its own work against your definition of “done.”

Simple QA prompts that work well:

  • Constraint check: “List any places you violated the constraints.”
  • Assumption audit: “What did you assume that wasn’t stated?”
  • Gap check: “What key info is missing for this to be actionable?”
  • Risk check: “What are the top 3 ways this could be wrong or misleading?”

Even if the model isn’t perfect at self-critique, you’ll surface issues faster than re-prompting blindly.
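The four QA questions above are reusable enough to keep in code. A small sketch (names are mine) that composes any subset of them into a single follow-up prompt:

```python
# The QA questions from the checklist above, keyed by check name.
QA_CHECKS = {
    "constraint check": "List any places you violated the constraints.",
    "assumption audit": "What did you assume that wasn't stated?",
    "gap check": "What key info is missing for this to be actionable?",
    "risk check": "What are the top 3 ways this could be wrong or misleading?",
}

def qa_prompt(checks=("constraint check", "assumption audit")) -> str:
    """Compose the selected QA questions into one follow-up prompt."""
    lines = [f"- {name.title()}: {QA_CHECKS[name]}" for name in checks]
    return "Check your previous answer:\n" + "\n".join(lines)
```

Paste the result as your next message after any generation step; two checks per pass is usually enough.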

Step 5: Iterate with a changelog (small edits, no drifting)

The most reusable iteration habit is to request targeted deltas, not a full rewrite.

Try this iteration format:

  • Keep: what must remain unchanged
  • Change: the specific sections or lines to modify
  • Reason: why (clarity, accuracy, tone, scope)
  • Output: “Return only the updated section(s).”

This prevents “helpful” rewrites that undo your progress.
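The Keep/Change/Reason/Output format is mechanical enough to template. A minimal sketch (the helper name is mine) that turns your notes into a delta request:

```python
def delta_prompt(keep: str, change: str, reason: str) -> str:
    """Build a targeted-edit prompt using the Keep/Change/Reason/Output format."""
    return (
        f"Keep: {keep}\n"
        f"Change: {change}\n"
        f"Reason: {reason}\n"
        "Output: Return only the updated section(s)."
    )

# Example: tighten one section without risking the rest.
msg = delta_prompt(
    keep="the intro and all section headings",
    change="shorten step 3 to under 80 words",
    reason="scope: it repeats step 2",
)
```

The fixed "Output" line is the important part: it stops the model from regenerating sections you already approved.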

Step 6: Save a reusable prompt skeleton (so you start strong next time)

Once you get a good result, capture the structure as a fill-in template.

Here’s a compact skeleton you can reuse:

  • Task: [what you want produced]
  • Audience & tone: [who it’s for, how it should sound]
  • Context: [facts the model must use]
  • Constraints: [must/avoid, length, tools, policies]
  • Output format: [sections/fields/order]
  • Quality bar: [definition of done + checks]

Over time, you’ll build 3–5 skeletons for your common tasks, and prompting stops feeling like starting from scratch.

Takeaway: treat prompting like a loop, not a spell

Reliable outputs come from a repeatable loop: define “done,” supply essential context, lock the structure, generate options, QA the result, and iterate with deltas.

If you only adopt one habit: always specify the output shape.