It does not read like you do. It predicts the next token. Use that.
You talk like a person. The model does math. You give it a vibe and it returns melted crayons. Not because it hates you, but because you spoke in feelings and it hears probabilities. Learn what it hears and you can steer it without learning calculus.
The model predicts tokens.
Roles and output shape carry real signal.
Test three prompts on one input and keep the receipts.
The mental model
You are not “asking a question.” You are loading hints. The model predicts the next tiny unit of text based on patterns it has seen. Concrete nouns are strong hints. Clear roles are strong hints. Output shape is a strong hint. Vibes are weak hints.
Think of the prompt as a control panel. Labels matter more than pep talks. Name the job first. Name the container next. Style is a garnish.
A two-minute probe
Run this once. Keep the versions and timestamps so you can show your work.
Input text
Customer: I ordered the Pro pack last week but the invoice says Basic. I need the difference refunded. Order 88421. I used a work card and accounting is already yelling.
Three prompts on the same input
“Summarize this nicely.”
“You are a CRM editor. Produce a one sentence neutral note.”
“You are a CRM editor. Return only JSON with fields note and tags. Keep note under 140 characters.”
Expected pattern
Prompt 1 gives a polite paragraph with extra fluff.
Prompt 2 gives a short line that fits a CRM but varies in format.
Prompt 3 gives a structured object you can drop into a system.
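The probe above can be sketched as a tiny harness. Here `call_model` is a placeholder for whatever client function you actually use, not a real API; it takes a prompt string and returns the model's text.

```python
from datetime import datetime, timezone

# The three prompts from the probe, verbatim.
PROMPTS = [
    "Summarize this nicely.",
    "You are a CRM editor. Produce a one sentence neutral note.",
    ("You are a CRM editor. Return only JSON with fields note and tags. "
     "Keep note under 140 characters."),
]

def run_probe(call_model, input_text):
    """Run all three prompts against one input and keep the receipts.

    call_model is a stand-in (an assumption): prompt string in, text out.
    """
    receipts = []
    for prompt in PROMPTS:
        receipts.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "output": call_model(f"{prompt}\n\n{input_text}"),
        })
    return receipts
```

Swap in your real client for `call_model` and you have the two-minute probe, receipts included.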
Receipt example from run 3
{
  "note": "Customer ordered Pro but was billed Basic; wants the difference refunded on order 88421; paid with a work card.",
  "tags": ["billing", "refund", "order-88421"]
}
If the model adds a greeting or a signature, your instruction was not tight enough. Add “Return only the object. No preface. No commentary.”
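You can mechanize that tightness check. A minimal validator, assuming the field names and 140-character limit from prompt 3, using only the standard library:

```python
import json

REQUIRED_KEYS = {"note", "tags"}  # fields named in prompt 3
MAX_NOTE_LEN = 140

def check_receipt(raw):
    """Return a list of problems with a model's output; empty means it passed."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError as e:
        # A greeting or signature around the object lands here.
        return [f"not valid JSON: {e}"]
    if not isinstance(obj, dict):
        return ["top level is not an object"]
    problems = []
    missing = REQUIRED_KEYS - obj.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    extra = obj.keys() - REQUIRED_KEYS
    if extra:
        problems.append(f"unexpected fields: {sorted(extra)}")
    note = obj.get("note")
    if isinstance(note, str) and len(note) > MAX_NOTE_LEN:
        problems.append(f"note is {len(note)} chars, limit {MAX_NOTE_LEN}")
    return problems
```

Run every model response through it; a non-empty list means your instruction was not tight enough.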
Why this sells prompt engineering
You just proved that small wording choices change outcomes. Same model. Same input. Different prompt shape. No guru tricks. You tuned the interface and the machine behaved. That is prompt engineering.
Shape beats style
A model loves to please you. Tell it “make it catchy” and it will decorate the carpet. Tell it “return only JSON with these fields” and you get something you can use.
Minimal pattern
Example
You are a CRM editor.
Turn the customer text into a note.
Return only JSON with: note, tags[].
If the order number is missing, set tags to ["needs-order"] and stop.
Add a safe failure path
Models guess when you leave gaps. Tell it how to fail so it does not invent facts.
Failure rule
“If any field is unknown, return unknown for that field and stop.”
Test by stripping the order number from the input. If the model fabricates one, your failure rule is mushy. Sharpen it and re-run.
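The fabrication check can be automated with one regular expression: any order-like digit run in the output that never appeared in the input was invented. This is a rough heuristic sketch, not a complete detector.

```python
import re

def fabricated_numbers(input_text, output_text):
    """Return digit runs (3+ digits) in the output that never appeared in the input."""
    in_nums = set(re.findall(r"\d{3,}", input_text))
    out_nums = set(re.findall(r"\d{3,}", output_text))
    return out_nums - in_nums
```

Strip the order number from the input, run the prompt, and pass both texts in; anything returned is a hallucinated number.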
Keep a tiny log
Write the model name, the context window, and the timestamp under each output. Add your header line if you have it. That is enough to catch drift later. Future you will thank you.
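One append-only JSON Lines file is enough for that log. A minimal sketch, with hypothetical field names:

```python
import json
from datetime import datetime, timezone

def log_run(path, model, context_window, prompt, output):
    """Append one JSON line per run: enough to catch drift later."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "context_window": context_window,
        "prompt": prompt,
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

When an output suddenly changes shape, the log tells you whether the model, the prompt, or both moved.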
One more quick win
When you get a messy answer, do not start over. Repair the shape with a second pass.
Repair snippet
“Here is malformed JSON. Fix it to match this schema. Do not add fields. Do not change values that already match.”
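A second-pass wrapper for that snippet might look like this sketch: try to parse, and on failure build the repair prompt for another model call (the call itself is left to your client).

```python
import json

REPAIR_TEMPLATE = (
    "Here is malformed JSON. Fix it to match this schema. "
    "Do not add fields. Do not change values that already match.\n"
    "Schema: {schema}\n"
    "Malformed output:\n{raw}"
)

def parse_or_repair_prompt(raw, schema):
    """Return (parsed_object, None) on success, or (None, repair_prompt) on failure."""
    try:
        return json.loads(raw), None
    except json.JSONDecodeError:
        return None, REPAIR_TEMPLATE.format(schema=json.dumps(schema), raw=raw)
```

Send the repair prompt back to the model, then re-check the result; one repair pass is usually cheaper than starting over.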
Strong hints beat vibes
Think like a builder. Roles and containers carry signal. Style is garnish. Prove it to yourself with one input and three prompts, then keep the outputs with versions and timestamps. When the model slips, fix the shape before you fix the mood. That practice turns a black box into a tool you trust.