# STOKE Framework Explained: The Prompt Engineering Method That Works
You've probably tried a dozen ways to make AI give better answers. Be specific. Add context. Give examples. All good advice — but hard to apply consistently when you're staring at a blank prompt box.
STOKE is a framework that turns "be specific" into a repeatable structure. Five components, one clear purpose each. It works because it forces you to include the exact information AI models need to produce useful output — and nothing they don't.
This guide breaks down what STOKE is, how each component works with real examples, and when to use it instead of other prompt engineering methods.
## What Is the STOKE Framework?
STOKE is an acronym for five prompt components:
| Letter | Component | What It Does |
|---|---|---|
| S | Situation | Sets the context — who, where, why |
| T | Task | Defines the specific action to perform |
| O | Objective | States what success looks like |
| K | Knowledge | Provides domain-specific guidance and constraints |
| E | Examples | Shows what good output looks like |
Each component answers a question the AI would otherwise have to guess at. The less guessing, the better the output.
Here's the core insight: most prompts only include the T (Task). "Write a blog post about productivity." That's a task with zero context, no success criteria, no domain expertise, and no examples. The AI fills in all the blanks with generic defaults — and you get generic output.
STOKE fills those blanks deliberately.
## The Five Components, In Depth
### S — Situation
The Situation establishes context the AI can't infer from the task alone. Think of it as the briefing before the assignment.
What to include:
- Who you are (role, company, industry)
- Who the output is for (audience, their knowledge level)
- What's happening (launch, crisis, routine, exploration)
- Any relevant background (previous attempts, constraints, timeline)
**Without Situation:**

```
Write an email announcing a price increase.
```

**With Situation:**

```
Situation: I'm the founder of a B2B SaaS tool with 2,000 customers.
We're raising prices for the first time in 3 years. Most customers
are on annual plans. We've added significant features since their
last renewal. Customer sentiment is generally positive but we've
had some churn lately.
```
The difference: without context, the AI writes a generic corporate announcement. With the Situation, it knows to emphasize new value, be sensitive about churn, and frame the increase as reasonable given 3 years of stability.
### T — Task
The Task is what most people already include — but STOKE asks you to be more specific than "write X about Y."
A weak task: "Write a blog post about remote work."
A STOKE task:

```
Task: Write a 1,200-word blog post arguing that async communication
is more productive than synchronous meetings for distributed teams.
Structure it as: bold claim → 3 supporting arguments with evidence →
practical implementation steps → one key counterargument addressed.
```
The Task should specify:
- The exact deliverable (blog post, email, code, analysis)
- Length or scope constraints
- Structure or format requirements
- The specific angle or argument (not just the topic)
### O — Objective
The Objective answers: "What should this output accomplish?" It's not what you're asking the AI to do — that's the Task. It's what the result should achieve in the real world.
Task vs. Objective:
- Task: "Write a landing page for our API product"
- Objective: "Convince a senior developer to sign up for the free tier within 60 seconds of landing on the page"
This distinction matters because the same task with different objectives produces radically different outputs:
```
Objective A: Convince technical leads to evaluate the product
→ Focus on architecture, integrations, security, pricing transparency

Objective B: Get developers to try the free tier immediately
→ Focus on "curl this endpoint right now," instant gratification, no signup friction

Objective C: Position the product for enterprise procurement
→ Focus on compliance, SLAs, team management, SOC 2 certification
```
Same landing page. Three completely different approaches — all driven by the Objective.
### K — Knowledge
Knowledge is where you feed the AI domain expertise it wouldn't otherwise have. This is the most underused STOKE component and often the most impactful.
Types of knowledge to include:
- Industry terminology and jargon the audience expects
- Best practices or standards to follow
- Common mistakes to avoid
- Constraints the AI wouldn't know about (regulatory, technical, cultural)
- Internal guidelines (brand voice, style guide rules)
Example:

```
Knowledge:
- Our brand voice is direct and technical. No marketing fluff.
  We write "start a cluster" not "unleash the power of clusters."
- Our users are infrastructure engineers who know Kubernetes.
  Don't explain what a pod or a deployment is.
- Pricing page must show monthly and annual pricing. Annual
  discount is always 20%. Show annual price per-month, not total.
- Competitors to reference for positioning: Datadog (incumbent,
  expensive), Grafana Cloud (open-source, complex setup).
  Never mention competitors by name on the landing page —
  use "legacy monitoring tools" and "open-source alternatives."
```
Without this Knowledge component, the AI would write generic copy that explains Kubernetes basics to Kubernetes experts and mentions competitors by name — both wrong.
### E — Examples
Examples are the most powerful component for controlling output quality. Showing the AI what "good" looks like is faster and more reliable than describing it.
Three ways to use Examples:
1. Input/Output pairs — show the transformation you want:
```
Examples:
Input: "Our product helps teams communicate better"
Output: "Async-first messaging that cuts meeting time by 40%"

Input: "We use advanced AI technology"
Output: "GPT-4o-powered search across your entire codebase"
```
2. Style reference — show the tone and voice:
```
Examples of our writing style:
- "Ship it. Then measure. The data will tell you what to fix."
- "Three lines of YAML. That's the entire configuration."
- "We don't do 'enterprise demos.' Sign up, get an API key, start building."
```
3. Format template — show the exact structure:
```
Example of the format I want:

## Feature Name
One-sentence description of what it does.

**Before:** [What users do without this feature]
**After:** [What changes with this feature]

`code example showing usage`
```
The more specific and relevant your examples, the more closely the AI matches them. Two good examples beat a paragraph of instructions about what you want. For a deep dive into this technique, see our guide on few-shot prompting.
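The input/output-pair pattern above lends itself to automation once you have a bank of good examples. Here's a minimal sketch of assembling such a prompt; the function name and wording are illustrative, not part of any Promplify API:

```python
def build_few_shot_prompt(task, pairs, new_input):
    """Assemble a few-shot prompt: task, example pairs, then the new input."""
    lines = [task, "", "Examples:"]
    for inp, out in pairs:
        lines.append(f'Input: "{inp}"')
        lines.append(f'Output: "{out}"')
    # End with the new input and an open "Output:" for the model to complete.
    lines.append(f'Input: "{new_input}"')
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Rewrite vague product copy as a specific, concrete claim.",
    [("Our product helps teams communicate better",
      "Async-first messaging that cuts meeting time by 40%")],
    "We use advanced AI technology",
)
```

Ending on an open `Output:` line nudges the model to continue the established pattern rather than comment on it.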
## STOKE in Action: Full Example
Let's build a complete STOKE prompt for a real task.
The vague version most people would write:
```
Write documentation for our REST API authentication endpoint.
```
The STOKE version:
```
SITUATION:
I'm writing developer documentation for Promplify, an AI prompt
optimization API. Our users are developers integrating our API
into their apps. Most are mid-to-senior level and familiar with
REST APIs and Bearer token auth. The docs live alongside our
API reference page.

TASK:
Write documentation for the authentication flow. Cover:
1. How to get an API key (from the Developer dashboard)
2. How to include it in requests (Bearer token in Authorization header)
3. What happens when auth fails (401 response with error body)
4. Rate limits tied to the API key (60 req/min, 1,000 req/day)
Keep it under 500 words. Use code examples, not paragraphs.

OBJECTIVE:
A developer should be able to authenticate their first API request
within 2 minutes of reading this page. Zero ambiguity about where
the key goes or what errors mean.

KNOWLEDGE:
- Our API base URL is https://promplify.ai/api/v1
- API keys are 32-character hex strings prefixed with "pk_"
- Rate limit headers: X-RateLimit-Limit, X-RateLimit-Remaining,
  X-RateLimit-Reset (Unix timestamp)
- 401 response body: { "detail": "Invalid or missing API key" }
- 429 response body: { "detail": "Rate limit exceeded",
  "retry_after": <seconds> }
- We support cURL, Python (requests), JavaScript (fetch), and Go

EXAMPLES:
Format each code example like this:
```

```bash
curl -X POST https://promplify.ai/api/v1/optimize \
  -H "Authorization: Bearer pk_your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "your prompt here"}'
```

Error response format:

```json
{
  "detail": "Invalid or missing API key"
}
```
The vague prompt gets you generic API auth docs. The STOKE prompt gets you docs that match your product, your audience, your format, and your technical details — first try.
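As a sanity check on the Knowledge details in the example, here's a minimal sketch of how a client might interpret the documented 401 and 429 responses and rate-limit headers. The helper name and return shape are illustrative assumptions, not part of the Promplify API:

```python
def interpret_response(status, body, headers):
    """Map an API response to a client action, following the error
    bodies and rate-limit headers described in the Knowledge section."""
    if status == 401:
        # Documented 401 body: {"detail": "Invalid or missing API key"}
        return ("fix_api_key", body.get("detail"))
    if status == 429:
        # Documented 429 body includes "retry_after" in seconds
        return ("retry_after", body.get("retry_after"))
    # Successful responses carry rate-limit headers
    remaining = int(headers.get("X-RateLimit-Remaining", 0))
    return ("ok", remaining)
```

For example, `interpret_response(429, {"detail": "Rate limit exceeded", "retry_after": 30}, {})` tells the caller to wait 30 seconds before retrying.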
## When to Use STOKE vs. Other Frameworks
STOKE isn't always the right tool. Here's when it shines and when something simpler works better.
### Use STOKE when:
- **Content creation** — blog posts, emails, documentation, marketing copy. STOKE's Knowledge and Examples components are most valuable when voice and domain expertise matter.
- **Complex, multi-requirement tasks** — anything where the AI needs to juggle context, constraints, and quality standards simultaneously.
- **Work you'll repeat** — build the STOKE template once, reuse it with different Tasks. The Situation, Knowledge, and Examples often stay the same across similar requests.
- **Outputs that go to other people** — client deliverables, public-facing content, team documentation. The stakes justify the extra prompt structure.
### Use something simpler when:
- **Quick questions** — "What does `git rebase --onto` do?" doesn't need five components.
- **Single-step tasks** — "Convert this JSON to YAML" is pure Task. No Situation, Knowledge, or Examples needed.
- **Brainstorming** — When you want divergent, creative output, too much structure constrains the AI. A loose prompt is sometimes better.
- **Code generation with clear specs** — "Write a Python function that takes a list of integers and returns the median" is specific enough as a standalone Task.
### STOKE vs. Chain of Thought
Chain of Thought (CoT) is about *reasoning process* — asking the AI to show its work step by step. STOKE is about *input completeness* — giving the AI everything it needs to produce the right output.
They're complementary, not competing. You can use STOKE to structure your prompt AND request Chain of Thought reasoning within the Task component:
```
Task: Analyze these three pricing models step by step. For each model,
reason through the impact on conversion rate, revenue per user, and
customer lifetime value before giving your recommendation.
```
For a side-by-side comparison of STOKE against CO-STAR, RACE, CRISPE, and other popular frameworks, see our [prompt engineering frameworks compared](/blog/prompt-engineering-frameworks-compared) guide.
### STOKE vs. Few-Shot Prompting
Few-Shot prompting is essentially the **E** (Examples) component of STOKE by itself. It works well for pattern-matching tasks — "here are 3 examples, now do the same for this input."
STOKE adds the other four components when examples alone aren't enough. If the task requires domain knowledge, audience awareness, or specific success criteria, STOKE's full framework outperforms examples-only prompting.
## Tips for Writing Better STOKE Prompts
**Start with the Objective.** Most people start with the Task, but knowing what you're trying to *achieve* shapes everything else. Write the O first, even if it appears third in the prompt.
**Knowledge is your unfair advantage.** Any AI can follow a Task. The Knowledge component is where you inject expertise the model doesn't have — your brand guidelines, your customers' specific pain points, your industry's unwritten rules. This is what separates AI output anyone could get from output that sounds like it came from *you*.
**Two examples beat ten rules.** Instead of writing "be concise, use active voice, avoid jargon, write in a conversational tone, use short paragraphs" — just show two paragraphs that exemplify all of that. The AI extracts the pattern faster than it parses a list of adjectives.
**Scale to complexity.** A simple task might only need S + T + O. A complex one needs all five. Don't force-fit five components onto a task that needs two — it adds noise without value.
**Save and reuse.** Your Situation and Knowledge components are often reusable across tasks. "I'm a B2B SaaS founder targeting senior engineers" and your brand voice guidelines don't change between a blog post and a landing page. Build a library of components you can mix and match. For more practical techniques, read our guide on [how to write better AI prompts](/blog/how-to-write-better-ai-prompts).
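The component-library idea can be sketched in a few lines. This is an illustrative pattern, not a Promplify feature; the names and snippets are made up:

```python
# Reusable STOKE components, written once and mixed per task.
LIBRARY = {
    "situation_saas": "I'm a B2B SaaS founder targeting senior engineers.",
    "knowledge_voice": "Brand voice: direct and technical. No marketing fluff.",
}

def stoke_prompt(situation, task, objective, knowledge=None, examples=None):
    """Join the five STOKE components, skipping any left empty."""
    parts = [
        ("SITUATION", situation),
        ("TASK", task),
        ("OBJECTIVE", objective),
        ("KNOWLEDGE", knowledge),
        ("EXAMPLES", examples),
    ]
    return "\n\n".join(f"{label}:\n{text}" for label, text in parts if text)

prompt = stoke_prompt(
    LIBRARY["situation_saas"],
    "Write a 600-word blog post on async standups.",
    "A reader should be able to run their first async standup tomorrow.",
    knowledge=LIBRARY["knowledge_voice"],
)
```

Swapping only the Task and Objective between requests keeps the reusable Situation and Knowledge consistent across everything you generate.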
## How Promplify Applies STOKE Automatically
Writing a full STOKE prompt takes practice. You have to think about audience, success criteria, domain knowledge, and examples before you start — and most people just want to type their request and get great output.
Promplify automates this. When you submit a prompt for optimization:
1. **Detects the task type** — creative, technical, analytical, persuasive, or informational
2. **Assesses complexity** — trivial tasks get lightweight prompts, complex tasks get full STOKE structure
3. **Generates each component** — Situation from task context, Objective from intent analysis, Knowledge from domain detection, Examples when patterns would help
4. **Adapts per model** — formats the STOKE prompt optimally for GPT-4o, Claude, Gemini, or DeepSeek (see our [ChatGPT vs Claude vs Gemini comparison](/blog/chatgpt-vs-claude-vs-gemini) to understand each model's strengths)
You can see the STOKE analysis breakdown in real time — each component gets a confidence score showing how completely your original prompt covers it. The optimizer fills in what's missing.
---
*Want STOKE applied to your prompts automatically — with the right components for every task? [Try Promplify free](https://promplify.ai/optimize). The optimizer detects what's missing from your prompt and adds Situation, Task, Objective, Knowledge, and Examples where they'll have the most impact.*