Lesson 2: Prompting Techniques That Actually Work

Skills: Prompt Engineering, Generative AI

Why This Lesson Matters

Not all prompts are created equal. How you structure your request — and whether you include examples, roles, context, or reasoning instructions — can completely change the quality of the output.

This lesson gives you a practical toolkit of proven prompting techniques used by OpenAI, Anthropic, and top AI practitioners. You’ll learn what each technique is, when to use it, and how to apply it effectively across different tasks.

Techniques You’ll Learn

For each of the following techniques, you’ll get:

  • A plain-English explanation
  • Best practices
  • High-quality examples
  • A “Try It Yourself” challenge

1. Zero-Shot Prompting

What it is

Zero-shot prompting means giving the model a direct instruction without any examples. You’re relying on the model’s training and general reasoning to figure out what you want based on how you phrase the task.

Think of it like asking a smart assistant to "Write a summary of this email" — no setup, no sample output, just a clear command.

Use when

  • The task is simple, common, or familiar to the model (e.g., summarization, classification, rewriting).
  • You’re prototyping quickly and want fast, lightweight output.
  • You want to minimize tokens or keep prompts short.

Avoid when

  • You need a specific structure or tone that the model might not assume.
  • The task involves nuance, ambiguity, or subjective interpretation.
  • You’re getting inconsistent or irrelevant results without more guidance.

Example Prompts

“Summarize this customer support ticket in 3 bullet points.”
“Write a LinkedIn headline for someone who just got promoted to Head of Product.”
“List three pros and three cons of using AI for hiring.”
“Generate 5 subject lines for a Q2 product update newsletter.”
“Summarize this Slack thread into one executive-facing sentence.”
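A zero-shot classification prompt like the ticket example above can be generated programmatically. This is a minimal sketch; the `zero_shot_classify_prompt` helper and the label set are illustrative, not part of any specific library or API.

```python
# Minimal zero-shot prompt builder: one direct instruction, no examples.
LABELS = ["Bug Report", "Feature Request", "General Feedback"]

def zero_shot_classify_prompt(ticket: str) -> str:
    """Return a zero-shot classification prompt for a support ticket."""
    label_list = ", ".join(f'"{label}"' for label in LABELS)
    return (
        f"Classify the following support ticket as exactly one of: {label_list}.\n"
        "Respond with only the label.\n\n"
        f"Ticket: {ticket}"
    )

print(zero_shot_classify_prompt("The export button crashes the app."))
```

Constraining the output ("Respond with only the label") is what makes zero-shot results easy to parse downstream.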

Try This Prompt Challenge

👉 Practice writing a zero-shot prompt using a real-world task.

Task: You’re reviewing customer support tickets.
Challenge: Write a zero-shot prompt that asks an AI model to classify each ticket as “Bug Report,” “Feature Request,” or “General Feedback.”

2. Few-Shot Prompting

What it is

Few-shot prompting involves giving the model a few (usually 2–5) examples of the kind of input-output pair you want it to follow. These examples teach the model the structure, tone, and logic of the desired response.

Think of it like showing a new team member a few past projects before asking them to do a similar one.

Use when

  • You need to guide output format, tone, or structure
  • The model struggles to infer your intent from the instruction alone
  • You want consistency across similar tasks (e.g., parsing, summarizing, categorization)

Avoid when

  • Token limits are tight and you need to keep prompts short
  • Examples aren’t high-quality or representative
  • The model can already infer the task correctly without examples

Example Prompts

"Parse this pizza order into JSON format. Example: 'I want a small pizza with cheese, tomato sauce, and pepperoni.' → { size: 'small', ingredients: [...] }"
“Convert this support ticket into a structured escalation report. Example: [ticket → report].”
“Reframe this negative review in a more constructive tone. Example: ‘This sucks.’ → ‘I found the experience frustrating.’”
“Tag each sentence in this email as greeting, pitch, CTA, or sign-off. Example: ‘Hope you're well’ → Greeting.”
“Format these feedback quotes into social proof testimonials. Example: ‘Love this!’ → ‘“Love this!” —Product Manager’”
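Few-shot prompts are usually assembled from a list of input-output pairs, so the same examples can be reused across many requests. A sketch, with illustrative feedback examples and the Positive/Constructive/Critical labels used later in this lesson:

```python
# Few-shot prompt builder: a handful of input -> label pairs teach the format.
EXAMPLES = [  # illustrative example pairs, not from a real dataset
    ("Love the new dashboard, so fast!", "Positive"),
    ("The report export could use more filters.", "Constructive"),
]

def few_shot_prompt(examples, new_input: str) -> str:
    """Assemble a few-shot classification prompt from example pairs."""
    lines = ["Classify each piece of feedback as Positive, Constructive, or Critical.", ""]
    for text, label in examples:
        lines.append(f"Feedback: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    # End with an unanswered slot so the model completes the pattern.
    lines.append(f"Feedback: {new_input}")
    lines.append("Label:")
    return "\n".join(lines)

print(few_shot_prompt(EXAMPLES, "This release broke my whole workflow."))
```

Ending the prompt mid-pattern (a trailing "Label:") nudges the model to continue in exactly the demonstrated format.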

Try This Prompt Challenge

👉 Apply this technique to a structured classification task:

Task: You’re labeling employee feedback as “Positive,” “Constructive,” or “Critical.”
Challenge: Write a few-shot prompt that teaches the model how to classify feedback, including at least 2 examples.

3. Chain-of-Thought Prompting (CoT)

What it is

Chain-of-thought (CoT) prompting explicitly tells the model to reason step by step before giving an answer. Rather than asking for a result directly, you encourage the model to explain its thinking along the way.

It mimics how a person would solve a problem out loud before arriving at a conclusion.

Use when

  • The task involves logical reasoning, planning, math, or trade-offs
  • You want to reduce hallucinations or improve accuracy in complex tasks
  • You’re troubleshooting inconsistent answers from the model

Avoid when

  • The task is trivial and doesn’t require reasoning
  • You need short, fast responses without verbose output

Example Prompts

"When I was 3 years old, my partner was 3x my age. Now I’m 20. How old is my partner? Let’s think step by step."
“What’s the ROI of this campaign? Walk through revenue, spend, and margin step by step.”
“Which of these 5 leads should I prioritize? Evaluate each based on firm size, ICP match, and urgency.”
“Which product feature should launch first? Consider engineering complexity, user demand, and differentiation.”
“How do I calculate the conversion rate from this table? Think through each metric and formula.”
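In code, chain-of-thought is often just a wrapper that appends the step-by-step instruction and then parses the final answer out of the reasoning. A sketch, assuming the convention of a marked "Answer:" line (the helpers here are illustrative):

```python
# Chain-of-thought wrapper: ask for step-by-step reasoning plus a clearly
# marked final line so the answer can be separated from the thinking.
def cot_prompt(question: str) -> str:
    return (
        f"{question}\n"
        "Let's think step by step, then give the final answer on a line "
        "starting with 'Answer:'."
    )

def extract_answer(model_output: str) -> str:
    """Pull the final answer out of a step-by-step response."""
    for line in reversed(model_output.splitlines()):
        if line.strip().startswith("Answer:"):
            return line.split("Answer:", 1)[1].strip()
    return model_output.strip()  # fall back to the raw output

print(cot_prompt("When I was 3, my partner was 3x my age. Now I'm 20. How old is my partner?"))
```

For the age puzzle above, a correct step-by-step response would note the partner was 9 when you were 3, so 17 years later the partner is 26.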

Try This Prompt Challenge

👉 Help the model work through a prioritization task:

Task: You’re reviewing 5 tasks to complete this week.
Challenge: Write a CoT (Chain-of-Thought) prompt that asks the model to prioritize the tasks by impact and urgency, thinking aloud.

4. Role Prompting

What it is

Role prompting assigns a specific identity or profession to the AI model, like "You are a career coach" or "You are a data analyst." This helps the model access domain-specific knowledge and respond in a more relevant, contextual tone.

Think of it as giving the model a job title before you give it the task.

Use when

  • You want tailored output that matches the tone, expertise, or priorities of a specific role
  • You’re doing simulations, coaching, or domain-based reasoning
  • You want the model to focus on a particular lens or perspective

Avoid when

  • The task doesn’t depend on role-specific framing
  • You need general, unbiased responses

Example Prompts

"You are a CFO advising a founder. Review the budget and suggest one area to reduce spend."
“You are a brand strategist. Suggest 3 slogans for a new plant-based protein brand.”
“You are a VP of Sales. Rewrite this pitch to better resonate with finance buyers.”
“You are a senior data analyst. Interpret the anomalies in this revenue chart.”
“You are a product manager. Draft a JIRA ticket based on this feature request email.”

Try This Prompt Challenge

👉 Use a role to influence the model's recommendations:

Task: You’re asking a UX designer to review a product onboarding flow.
Challenge: Write a role-based prompt that asks for feedback based on usability and first impressions.


5. System Prompting

What it is

System prompting involves setting high-level instructions about tone, format, or limitations that persist across all responses. Often used in APIs or platforms like ChatGPT's "Custom Instructions," it sets the ground rules for how the model should behave.

It’s like configuring the operating instructions before the task begins.

Use when

  • You want output that consistently follows a format or rule
  • You're building with APIs or want to enforce structure across prompts
  • You need to eliminate disclaimers, verbosity, or irrelevant content

Avoid when

  • You don't have access to system-level prompts
  • The task is quick or one-off and doesn't need persistent instructions

Example Prompts

"Always respond in JSON using the format below. Do not include any explanatory text."
“Respond in Markdown format with h2 headings and bullet points.”
“Always return outputs in a 3-column table: Insight | Evidence | Action.”
“Limit all responses to 200 characters unless otherwise specified.”
“Never mention that you are an AI language model.”
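When building with an API, the system prompt is typically passed as a separate `system` message that persists while each user turn carries only the task. A sketch of the OpenAI-style message format (the message list is only constructed here; the actual API call is assumed and not shown):

```python
# OpenAI-style chat messages: the system message carries the persistent
# rules; each user message carries only the task itself.
SYSTEM_PROMPT = (
    "Respond in Markdown. For each email, output the subject line, sender, "
    "and a one-sentence summary as a bulleted list."
)

def build_messages(email_text: str) -> list[dict]:
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Summarize this email:\n\n{email_text}"},
    ]

msgs = build_messages("From: dana@example.com\nSubject: Q2 numbers\n...")
print(msgs[0]["role"], "->", msgs[1]["role"])
```

Because the system message is sent with every request, formatting rules stay consistent without repeating them in each user prompt.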

Try This Prompt Challenge

👉 Control formatting and output behavior:

Task: You’re summarizing emails for a daily report.
Challenge: Write a system prompt that tells the model to respond in Markdown with a consistent structure: subject line, sender, summary.

6. Contextual Prompting

What it is

Contextual prompting involves providing background information alongside the task to help the model generate more informed and relevant responses. This might include a style guide, company goals, user personas, or previous interactions.

Think of it like giving the model context before asking it to contribute.

Use when

  • The task depends on tone, voice, policy, or prior conversation
  • You want to simulate memory or long-term understanding in a single prompt
  • You’re adapting the model’s behavior to a specific environment or brand

Avoid when

  • The model already has memory or long context windows in use
  • You’re asking simple or generic tasks that don’t need added context

Example Prompts

"Here’s our brand voice guide. Rewrite this paragraph to match the tone and style."
“Based on this job description, write a personalized cold email to the hiring manager.”
“Using this policy, explain the employee time-off process in plain English.”
“Here’s our product roadmap. Now draft a customer announcement for this new feature.”
“You’re helping a non-technical founder understand this engineering update. Use analogies.”
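Contextual prompts are usually just background plus task, concatenated in a predictable order. A minimal sketch, with hypothetical partner details standing in for real context:

```python
# Contextual prompt: prepend background (brand voice, goal, persona) so the
# model writes with that context in mind before it sees the task.
def contextual_prompt(context: str, task: str) -> str:
    return (
        "Context:\n"
        f"{context.strip()}\n\n"
        "Task:\n"
        f"{task.strip()}"
    )

prompt = contextual_prompt(
    context="Partner: Acme Corp, a mid-market HR platform. Goal: propose a co-marketing webinar.",
    task="Write a short, friendly outreach email to their head of partnerships.",
)
print(prompt)
```

Putting context before the task mirrors how the example prompts above are phrased: background first, then the ask.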

Try This Prompt Challenge

👉 Give the model relevant background before the task:

Task: You’re drafting an email to a potential partner.
Challenge: Write a contextual prompt that includes the partner’s background and the goal of the email before asking the model to write it.

7. Step-Back Prompting

What it is

Step-back prompting is a technique where you ask the model a broader or related question before giving it the main task. This helps the model activate relevant knowledge and better prepare for the final instruction.

It’s like warming up the model’s brain before diving into the main challenge.

Use when

  • You want the model to explore different angles or surface assumptions first
  • The task benefits from context, strategy, or foundational thinking
  • You’re trying to guide more creative or nuanced responses

Avoid when

  • The task is basic and doesn’t benefit from abstraction
  • You’re tight on tokens or need concise, immediate output

Example Prompts

"What are common investor concerns at Series A? Now write a pitch slide that addresses the top two."
“What makes a good explainer video? Now script one for our AI product.”
“List common mistakes in onboarding. Then evaluate this onboarding flow against those.”
“What emotions drive newsletter opens? Now write 3 subject lines using that emotion.”
“What do great case studies include? Now outline one for this customer win.”

Try This Prompt Challenge

👉 Lead the model through discovery before the main task:

Task: You’re writing a case study about a successful customer implementation.
Challenge: Write a step-back prompt that first asks the model what makes a strong case study — then uses that output to write the final version.

8. Self-Consistency

What it is

Self-consistency is a prompting technique where you ask the same prompt multiple times and then analyze the answers to find the most common or stable result. This works well for tasks involving judgment, ambiguity, or subjectivity.

It’s like running a poll — then choosing the most agreed-upon answer.

Use when

  • You want more reliable answers for a nuanced or high-stakes task
  • The model’s answers vary slightly and you need to find the dominant pattern
  • You’re okay spending more tokens to improve accuracy

Avoid when

  • You’re trying to minimize cost or latency
  • The task has a single, well-defined answer

Example Prompts

"Classify the following customer review as Positive, Neutral, or Negative. Run this prompt 3 times and return the most common label."
“Run this classification prompt 3 times. Return the most frequent category.”
“Rank these headlines for clarity. Repeat and select the consensus top choice.”
“Classify this feedback as Product, Sales, or Support-related. Do it 5 times and return majority label.”
“Summarize this press release. Compare 3 versions and keep the most concise one.”
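The consensus step of self-consistency is simple to script: collect the labels from repeated runs (sampled with temperature above zero) and keep the majority. The sampled labels below are stand-ins for real model calls:

```python
# Self-consistency: run the same classification several times and keep the
# most common label across runs.
from collections import Counter

def majority_label(labels: list[str]) -> str:
    """Return the most frequent label across repeated runs."""
    return Counter(labels).most_common(1)[0][0]

runs = ["Constructive", "Critical", "Constructive"]  # e.g. 3 model samples
print(majority_label(runs))  # Constructive
```

An odd number of runs (3 or 5) avoids ties; more runs cost more tokens but stabilize the result further.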

Try This Prompt Challenge

👉 Stabilize results using repetition and consensus:

Task: You’re interpreting qualitative survey responses.
Challenge: Write a self-consistency prompt that classifies the same feedback 3 times and returns the majority label.

9. Tree of Thoughts (ToT)

What it is

Tree of Thoughts (ToT) prompting encourages the model to explore multiple reasoning paths or options before choosing a final answer. It’s often used for strategy, brainstorming, or decision-making.

Think of it as generating a branching set of ideas — then pruning to find the best.

Use when

  • You want to explore trade-offs, compare ideas, or simulate multiple scenarios
  • You’re using the model for planning, positioning, or product thinking
  • The best answer isn’t obvious and depends on layered criteria

Avoid when

  • You need fast, single-response results
  • You’re working in a narrow or well-defined context

Example Prompts

"List 3 product marketing angles for this feature. Compare by audience appeal, clarity, and emotional hook. Recommend one."
“List 3 pricing models for this product. Then compare by ease of adoption, margin, and customer preference.”
“Brainstorm 3 partner channels for distribution. Evaluate their reach, credibility, and alignment.”
“Map 3 possible GTM strategies. Recommend the one that best fits an early-stage B2B startup.”
“List 3 user segmentation strategies. Identify pros/cons of each and pick the most scalable.”
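The branch-then-prune shape of Tree of Thoughts can be sketched as: generate candidate options, score each against the criteria, keep the best. In practice the candidates and scores would come from model calls; here they are hard-coded stand-ins for illustration:

```python
# Tree-of-Thoughts skeleton: branch into candidates, score each on several
# criteria, then prune to the strongest option.
CANDIDATES = {  # illustrative launch strategies with 1-5 scores per criterion
    "Product Hunt launch": {"brand_fit": 3, "reach": 4, "cost": 5},
    "Partner co-announce": {"brand_fit": 5, "reach": 4, "cost": 4},
    "Paid social blitz":   {"brand_fit": 2, "reach": 5, "cost": 2},
}

def best_candidate(candidates: dict) -> str:
    """Prune the tree: keep the option with the highest total score."""
    return max(candidates, key=lambda name: sum(candidates[name].values()))

print(best_candidate(CANDIDATES))  # Partner co-announce
```

Weighting the criteria differently (e.g. doubling brand fit) is how "layered criteria" from the section above enter the pruning step.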

Try This Prompt Challenge

👉 Explore multiple answers before selecting the best one:

Task: You’re planning a GTM launch strategy.
Challenge: Write a Tree of Thoughts prompt that outlines 3 different launch strategies, compares them, and picks the best based on brand fit.

10. ReAct (Reason + Act)

What it is

ReAct prompting allows the model to both reason through a task and take action — such as running a search, calling an API, or executing code. It combines internal thinking with external tool use.

It’s the closest thing to giving your model arms and legs.

Use when

  • You want to query live data, use tools, or automate multi-step processes
  • The model needs to retrieve information before completing the task
  • You’re using agent frameworks (like Crew AI, LangChain, or OpenAI tools)

Avoid when

  • You don’t have tool access or integration support
  • You just need static, text-only reasoning

Example Prompts

"Search for the most recent Apple earnings report. Then summarize the key takeaways for a general business audience."
“Search for the latest LLM benchmark results. Then summarize how Claude 3 compares to GPT-4.”
“Find the average Glassdoor rating for this company. Then suggest how to tailor a recruiting pitch.”
“Use Python to calculate churn rate from this customer dataset.”
“Call an API to get today’s weather in NYC. Then write a caption for a travel ad based on that.”
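A ReAct agent alternates Thought → Action → Observation until it emits a final answer. The sketch below shows the loop's control flow only; the "model" and the search tool are stubs, where a real agent would call an LLM and a live search API at each turn:

```python
# Minimal ReAct loop with stubbed model and tool, showing the
# Thought -> Action -> Observation cycle.
def search_tool(query: str) -> str:
    # Stubbed tool: a real implementation would hit a search API.
    return f"Top result for '{query}': (stubbed search snippet)"

def fake_model(transcript: str) -> str:
    # Stubbed policy: reason and act once, then answer.
    if "Observation:" not in transcript:
        return "Thought: I need current data.\nAction: search[competitor launch]"
    return "Final Answer: summary based on the observation."

def react_loop(task: str, max_steps: int = 3) -> str:
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        step = fake_model(transcript)
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        # Execute the requested action and feed the result back in.
        query = step.split("Action: search[", 1)[1].rstrip("]")
        transcript += f"\n{step}\nObservation: {search_tool(query)}"
    return "No answer within step budget."

print(react_loop("Summarize the competitor's latest product launch."))
```

Frameworks like LangChain and OpenAI's tool-calling implement this same loop, with the model deciding at each turn whether to act again or answer.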

Try This Prompt Challenge

👉 Combine a tool-based action with natural language reasoning:

Task: You want to validate a competitive claim using current search results.
Challenge: Write a ReAct-style prompt that searches for a competitor’s latest product launch and summarizes its differentiators.

11. Automatic Prompt Engineering (APE)

What it is

Automatic Prompt Engineering is the practice of using an LLM to generate better prompts — often by asking it to rewrite, score, or create variations of a base prompt. It’s especially useful when building for scale, UX flows, or tuning chatbot instructions.

Think of it as letting the model design the next version of its own instructions.

Use when

  • You’re experimenting with tone, phrasing, or structure
  • You need multiple prompt variants to test or deploy at scale
  • You want to refine prompts for clarity, engagement, or simplicity

Avoid when

  • The prompt is already working well and doesn’t need variation
  • You’re working on a highly sensitive or regulated task where human review is required

Example Prompts

"Generate 5 alternate versions of this prompt for summarizing research papers. Rank them by clarity and tone."
“Generate 10 variants of a prompt that asks for a pros/cons list. Rank by clarity.”
“Create 5 different prompts to generate ad copy for a budgeting app.”
“Write 7 onboarding email prompts. Tag each by tone: friendly, persuasive, concise.”
“Generate 5 prompts for summarizing legal contracts. Group them by use case.”

Try This Prompt Challenge

👉 Let the model improve its own instructions:

Task: You’re designing a prompt for a chatbot onboarding flow.
Challenge: Write a prompt that asks the model to create and rank 5 versions of a welcome message prompt based on friendliness and clarity.

Summary Table

Technique | Best for
Zero-Shot | Simple, familiar tasks that need no examples
Few-Shot | Teaching format, tone, or structure through examples
Chain-of-Thought | Logical reasoning, math, and trade-offs
Role Prompting | Output tailored to a specific expertise or perspective
System Prompting | Persistent formatting and behavior rules
Contextual Prompting | Tasks that depend on background, voice, or policy
Step-Back Prompting | Surfacing principles and assumptions before the main task
Self-Consistency | Stabilizing ambiguous judgments by majority vote
Tree of Thoughts | Comparing multiple options before committing
ReAct | Tasks that need tools, search, or live data
APE | Generating and ranking prompt variants at scale

Recap

Prompting isn’t just asking a question — it’s choosing the right method to guide the model. These techniques will become the foundation for everything else in this course.

Teacher: Matthew Berman

https://forwardfuture.ai/lessons/lesson-2-prompting-techniques-that-actually-work