Writing better prompts

A practical guide to writing prompts that actually work — starting with the one mistake almost everyone makes.

Most prompt advice overcomplicates things. Here’s what actually matters.

Be succinct

Say what you mean in as few words as possible. Every extra word is noise the model has to parse. Long, meandering prompts don’t give you better results — they give you more surface area for misinterpretation.

Compare:

“I would really like you to help me write a function that takes in a list of numbers and then goes through each one and checks if it’s a prime number and if it is then it should add it to a new list and return that list at the end”

vs.

“Write a function that filters a list of numbers, returning only the primes.”

Same result. A fraction of the tokens. The model isn’t impressed by verbosity. Be direct.
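For reference, here is a minimal sketch of the function either prompt should yield (the names `is_prime` and `filter_primes` are illustrative, not anything the prompt mandates):

```python
def is_prime(n: int) -> bool:
    """Return True if n is a prime number."""
    if n < 2:
        return False
    # Trial division up to the square root is enough.
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def filter_primes(numbers: list[int]) -> list[int]:
    """Filter a list of numbers, returning only the primes."""
    return [n for n in numbers if is_prime(n)]

filter_primes([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])  # → [2, 3, 5, 7]
```

The succinct prompt loses nothing: the function signature, the filtering behavior, and the return value are all still fully specified.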

Use positive framing — and only positive framing

This is the one most people get wrong. Tell the model what to do, not what to avoid.

When you write “use descriptive variable names, don’t use single-letter variables” you’ve introduced two competing frames. The model processes both. The negative frame — single-letter variables — is now primed in the context. You’ve put the exact thing you wanted to avoid into the model’s attention.

This is priming. It’s well-documented in language models and in human cognition alike. When you name the thing you want to avoid, you make it more salient, not less.

Instead, just say: “use descriptive variable names.” That’s the complete instruction. The positive frame is self-sufficient — it already excludes what you didn’t want.

More examples:

Mixed framing → Positive framing:

- “Write clean code, don’t use nested ternaries” → “Write clean code with clear, readable conditionals”
- “Respond in JSON, don’t include markdown” → “Respond in raw JSON”
- “Be concise, don’t ramble” → “Be concise”
- “Use TypeScript, don’t use any” → “Use TypeScript with explicit types”

Notice how the positive versions are shorter too. Positive framing and succinctness reinforce each other.

Give the model a role

A short framing statement at the top of your prompt anchors the model’s behavior for everything that follows. “You are a senior backend engineer reviewing this pull request” produces meaningfully different output than no framing at all. It sets the lens.

One sentence is enough. You’re setting context, not writing a character sheet.

Structure for scannability

Models respond well to structure for the same reason humans do — it reduces ambiguity.

The model processes your entire prompt, but emphasis matters. Use headers, lists, and hierarchy to separate concerns, and front-load the critical instructions.
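As a sketch, here is one way a structured prompt might look, kept as a constant in code. The task and wording are hypothetical; the point is the shape — role first, the critical instruction next, supporting detail in scannable lists:

```python
# Hypothetical structured prompt: role up top, critical instruction
# front-loaded, supporting checks and output format as short lists.
REVIEW_PROMPT = """You are a senior backend engineer reviewing this pull request.

Task: flag any bug that could corrupt data. This is the priority.

Also check:
- error handling on external calls
- test coverage for changed functions

Output format:
- one bullet per finding
- file and line reference first, then a one-sentence explanation
"""
```

Everything below the first two lines refines them; nothing critical is buried at the bottom.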

Provide examples

When the output format matters, show it. One concrete example eliminates more ambiguity than three paragraphs of description.

Convert these to kebab-case:
- myVariableName → my-variable-name
- getUserData → get-user-data

Now convert:
- fetchAllRecords
- parseInputString

The model infers the pattern from your example. This is few-shot prompting, and it works better than explaining the rules of kebab-case ever would.
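The rule the example teaches implicitly can, for comparison, be written out in a couple of lines (the name `to_kebab_case` is illustrative):

```python
import re

def to_kebab_case(name: str) -> str:
    """Convert a camelCase identifier to kebab-case."""
    # Insert a hyphen before each interior uppercase letter, then lowercase.
    return re.sub(r"(?<!^)(?=[A-Z])", "-", name).lower()

to_kebab_case("fetchAllRecords")  # → "fetch-all-records"
```

Explaining that regex in prose to a model is exactly the kind of work the two worked examples make unnecessary.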

Iterate on your prompts

Prompts are code. They should be versioned, tested, and refined. If a prompt isn’t giving you what you want, adjust it. Change one thing at a time so you know what moved the needle.

The best prompts I use today look nothing like their first drafts. They got good through the same loop everything else does — write, evaluate, refine.

The short version

  1. Be succinct. Fewer words, less noise.
  2. Frame positively. State what you want. Leave out what you don’t.
  3. Set a role. One sentence of context goes a long way.
  4. Use structure. Lists, headers, and hierarchy reduce ambiguity.
  5. Show examples. One good example beats a paragraph of explanation.
  6. Iterate. Treat prompts like code. Refine them.

That’s it. No frameworks, no acronyms, no twelve-step systems. Just clear communication with a machine that’s very good at following clear instructions.