Lesson 4: Prompt Debugging – What to Do When Your Output Is Wrong
Why This Lesson Matters
Even experienced prompt writers run into bad outputs. Maybe the model misunderstood your intent. Maybe it ignored your format. Maybe it hallucinated.
Whatever the issue, prompt debugging is how you get unstuck. Rather than rewriting everything from scratch, this lesson will show you how to improve results systematically, just like debugging code.
Most Common Prompting Failures
- Too vague: The model doesn’t know what you want.
- Asking for too much at once: You combine multiple goals in a single prompt.
- Missing format instructions: The model gives a paragraph when you wanted JSON or a list.
- No role/context: You didn’t give the model a persona or framing for the task.
- Negative instructions only: Telling it what not to do instead of what to do.
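Several of these failures are easiest to see when you call a model programmatically. Below is a minimal sketch of the "missing format instructions" failure, written with the official `openai` Python client. The model name is a placeholder, and the example assumes your API key is set; any chat-capable model behaves similarly.

```python
# Minimal sketch: assumes the `openai` Python client is installed and
# OPENAI_API_KEY is set in the environment. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()

ticket = "My invoice for March is wrong and support hasn't replied in 3 days."

# Failure mode: no format instructions, so the reply is free-form prose.
loose = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": f"Analyze this ticket: {ticket}"}],
)

# Fix: spell out exactly the structure you want back.
strict = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Analyze this support ticket and respond ONLY with JSON "
            'containing the keys "issue", "sentiment", and "urgency":\n'
            + ticket
        ),
    }],
)

print(loose.choices[0].message.content)
print(strict.choices[0].message.content)
```

The only difference between the two calls is the prompt text, which is exactly why prompt debugging pays off: the fix is usually a sentence, not a new system.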
The Prompt Debugging Framework
1. Read the output carefully. Is it wrong, vague, off-topic, too short, too long, inconsistent?
2. Isolate the issue. What part of the output failed? Format? Tone? Relevance? Logic?
3. Map to a cause. Was your instruction clear? Did you provide structure? A role? Examples?
4. Revise with intention. Add specificity. Split into steps. Give an example. Adjust model settings.
5. Test and compare. Try the revised version and compare it with the previous output. Repeat if needed (see the sketch after this list).
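Step 5 is easy to make mechanical. This sketch (same assumed `openai` client and placeholder model as above) runs an old and a revised prompt against the same input so you can compare the outputs side by side:

```python
# Minimal test-and-compare harness for prompt revisions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder; use whatever model you have access to

def run(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def compare(old_prompt: str, new_prompt: str) -> None:
    """Print both outputs together so regressions are obvious."""
    print("=== OLD PROMPT ===\n" + run(old_prompt))
    print("=== NEW PROMPT ===\n" + run(new_prompt))

report = "Q3 revenue grew 12 percent; churn rose slightly; two features shipped."
compare(
    f"Write a summary of this report: {report}",
    "Summarize the key takeaways of this report in 3 bullet points. "
    f"Focus on revenue trends, customer growth, and product updates: {report}",
)
```

Keeping old and new prompts in one place also gives you a record of what you tried, which is the same habit that makes code debugging repeatable.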
Before & After Fixes
❌ Weak Prompt:
“Write a summary of this report.”
Issue: Too vague — no direction on what to include or how to format the summary.
✅ Improved Prompt:
“Summarize the key takeaways of this quarterly report in 3 bullet points. Focus on revenue trends, customer growth, and any product updates.”
❌ Weak Prompt:
“Help me respond to this message.”
Issue: Missing role, tone, and structure — unclear who is responding, how they should sound, or what the response should include.
✅ Improved Prompt:
“You are a customer success manager. Write a friendly and concise response to this support ticket, acknowledging the issue and offering next steps.”
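When you call a model through an API, the persona usually belongs in the system message rather than the user message. A minimal sketch, under the same assumptions as the earlier snippets:

```python
# Role/persona via the system message (assumed `openai` client, placeholder model).
from openai import OpenAI

client = OpenAI()

ticket = "I was double-charged this month and can't reach anyone."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The system message sets who is speaking and how they should sound.
        {"role": "system",
         "content": "You are a customer success manager. Be friendly and concise."},
        # The user message carries the task and the material to respond to.
        {"role": "user",
         "content": "Write a response to this support ticket, acknowledging "
                    f"the issue and offering next steps:\n{ticket}"},
    ],
)
print(response.choices[0].message.content)
```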
❌ Weak Prompt:
“Rewrite this to make it better.”
Issue: Too vague — no direction on tone, audience, or goals.
✅ Improved Prompt:
“Rewrite this email to sound more confident and concise. Keep it under 75 words and remove filler phrases.”
❌ Weak Prompt:
“What’s wrong with this landing page?”
Issue: No structure or expectations, so the model is free to interpret the question arbitrarily.
✅ Improved Prompt:
“You are a conversion copywriter. Review this landing page and identify 3 things that may reduce signups. Focus on clarity, tone, and call-to-action placement.”
❌ Weak Prompt:
“Summarize this.”
Issue: No format, role, or focus — output could be rambling or too shallow.
✅ Improved Prompt:
“Summarize the key takeaways from this article in 3 bullet points, using simple language for a general audience.”
❌ Weak Prompt:
“Respond to this Slack message.”
Issue: No guidance on tone, response length, or context.
✅ Improved Prompt:
“You are a team lead. Write a friendly, 2-sentence reply to this Slack message that confirms the next meeting time and expresses appreciation.”
❌ Weak Prompt:
“Explain this code.”
Issue: No detail on audience or depth — could be overly technical or overly basic.
✅ Improved Prompt:
“You are a technical educator. Explain what this Python function does to a junior developer who understands basic syntax but not decorators. Use comments and a short example.”
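For concreteness, this is the kind of snippet you might paste in beneath that prompt (a hypothetical example written for this lesson, not code taken from it):

```python
import functools
import time

def timed(func):
    """Decorator that prints how long the wrapped function took."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.4f}s")
        return result
    return wrapper

@timed
def slow_square(n: int) -> int:
    time.sleep(0.1)  # simulate work
    return n * n

slow_square(12)  # prints the timing, returns 144
```

Pairing the prompt with a concrete snippet like this gives the model something specific to explain, which is itself a fix for the "too vague" failure.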
❌ Weak Prompt:
“Write a tweet about this product.”
Issue: No guidance on hook style, tone, or call to action.
✅ Improved Prompt:
“Write a Twitter post highlighting this product’s launch. Use a bold, curiosity-driven hook in the first line and end with a CTA to ‘Learn more.’ Keep it under 280 characters.”
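Hard constraints like "under 280 characters" are worth verifying in code rather than trusting the model. A minimal check-and-retry sketch, with the same assumed client and placeholder model as before:

```python
# Verify a hard constraint and re-prompt with the failure if it isn't met.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Write a Twitter post announcing our product launch. Use a bold, "
    "curiosity-driven hook in the first line, end with a CTA to 'Learn more', "
    "and keep it under 280 characters."
)

def draft_tweet(max_attempts: int = 3) -> str:
    prompt = PROMPT
    for _ in range(max_attempts):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content.strip()
        if len(reply) <= 280:
            return reply
        # Debugging step: feed the failure back as a concrete revision request.
        prompt = (
            f"{PROMPT}\n\nYour previous draft was {len(reply)} characters, "
            "which is too long. Rewrite it to fit under 280 characters."
        )
    raise ValueError("Could not get a tweet under 280 characters.")

print(draft_tweet())
```

Telling the model exactly how it failed, including the measured length, is the same "map to a cause, revise with intention" loop from the framework, just automated.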
Recap
Prompt debugging is a core part of getting good with LLMs. With the right mindset and a clear revision process, even bad outputs become teachable moments—and opportunities for improvement.