# RACE Framework for Prompting: The Complete Guide (With Templates)
Most prompt engineering advice boils down to "give the AI more context." That is correct and also completely unhelpful when you are staring at an empty text box trying to figure out what context to include.
The RACE framework solves this with four components: Role, Action, Context, and Expectation. It is one of the simplest structured prompting methods, it consistently produces better output than freeform prompts, and it takes about 30 seconds to apply.
This guide covers what RACE is, how each component works with real examples, when to use it over other prompt engineering frameworks, and includes copy-paste templates you can use immediately.
## What Is the RACE Framework?
RACE is a four-component framework for structuring prompts to AI models like ChatGPT, Claude, and Gemini. Each letter represents one piece of information the AI needs to produce useful output:
| Letter | Component | What It Does |
|---|---|---|
| R | Role | Defines who the AI should act as |
| A | Action | Specifies exactly what to do |
| C | Context | Provides background, constraints, and audience info |
| E | Expectation | Describes the desired output format, length, and quality |
The framework works because it forces you to answer four questions before submitting a prompt: Who is doing this? What exactly needs to be done? What background information matters? What does a good result look like?
Most prompts only include the Action -- "write an email about our product launch." RACE fills in the three other dimensions that the AI would otherwise guess at. Less guessing means better output.
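If you build prompts programmatically, the four components map directly onto a small helper. A minimal sketch -- the function name and example strings are ours, not part of the framework:

```python
def build_race_prompt(role: str, action: str, context: str, expectation: str) -> str:
    """Assemble a RACE prompt with explicitly labeled sections."""
    return "\n".join([
        f"Role: {role}",
        f"Action: {action}",
        f"Context: {context}",
        f"Expectation: {expectation}",
    ])

prompt = build_race_prompt(
    role="You are a senior frontend engineer specializing in React performance.",
    action="Review this component and suggest three optimizations.",
    context="The component renders a 10,000-row list and re-renders on every keystroke.",
    expectation="A numbered list, highest impact first, with a code snippet per item.",
)
print(prompt)
```

Labeling each section explicitly costs a few tokens but makes the structure unambiguous for the model.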
A note on disambiguation: RACE in prompt engineering is not the same as the RACE model in digital marketing (Reach, Act, Convert, Engage), created by Dave Chaffey. The prompting framework shares the acronym but serves an entirely different purpose -- structuring inputs to AI language models rather than mapping customer journey stages. If you arrived here looking for prompt engineering techniques, you are in the right place.
## How Each Component Works
### R -- Role
The Role tells the AI what expertise and perspective to bring. It is more than a job title -- a good Role definition shapes vocabulary, depth, assumptions, and the kind of judgment the AI applies.
What makes a strong Role:
- Specific expertise level (senior, junior, 10 years of experience)
- Domain context (B2B SaaS, healthcare, fintech)
- Relevant specialization (performance optimization, not just "developer")
- Perspective that matches the task (an editor reviews differently than a writer creates)
Weak Role:
You are a helpful assistant.
This adds nothing. The AI defaults to "helpful assistant" behavior anyway. You have wasted tokens repeating the baseline.
Strong Role:
You are a senior frontend engineer who specializes in React performance
optimization. You have 8 years of experience and have worked on
applications serving 10M+ daily active users.
This changes the output meaningfully. The AI will now prioritize performance over readability, reference React-specific patterns like useMemo and virtualization, and assume the reader understands JavaScript fundamentals.
The Role component is especially effective with technical tasks, where the level of expertise determines whether the AI explains basics (wasting your time) or goes straight to advanced patterns (what you actually need).
### A -- Action
The Action is what you want done. This is the component most people already include -- but RACE asks you to be precise about it. Verb choice matters. Scope matters. Ambiguity here cascades through the entire output.
Vague Action:
Help me with my landing page.
"Help" is the least useful verb in prompt engineering. Help how? Write it? Review it? Rewrite the headline? Restructure the layout? Optimize the CTA? The AI will pick one interpretation and you will probably get something you did not ask for.
Specific Action:
Rewrite the hero section headline and subheadline for our landing page.
The current headline is "Welcome to DataSync Pro." Write 3 alternatives
that communicate the core value proposition (eliminating manual ETL)
in under 10 words each.
Good Actions include:
- A precise verb (rewrite, analyze, compare, draft, review, summarize -- not "help with")
- The specific deliverable (3 headline alternatives, not "some ideas")
- Scope boundaries (hero section only, not the entire page)
- Quantifiable parameters when possible (under 10 words, 3 alternatives)
### C -- Context
Context is the information the AI cannot infer from the Role and Action alone. It is the difference between a generic output and one that fits your actual situation.
There are four types of context worth including:
1. Audience context -- who will consume the output:
The audience is non-technical marketing managers who have never
used an API before. They need to understand the benefits without
understanding the implementation.
2. Constraint context -- what limits apply:
We cannot mention competitor names directly due to legal guidelines.
The content must comply with GDPR requirements. Budget for this
campaign is $5,000.
3. Background context -- what has already happened:
We launched this product 6 months ago. Initial adoption was strong
(2,000 users) but growth has plateaued. Our last email campaign
had a 12% open rate, which is below industry average.
4. Tone context -- how the output should feel:
Our brand voice is direct and technical. We write "connect your
database in 30 seconds" not "unlock the power of seamless data
integration." No exclamation marks. No buzzwords.
You do not need all four types every time. Match the context to the task. A code review needs constraint context (which standards to follow) more than tone context. A marketing email needs audience and tone context more than technical constraints.
The most common mistake with Context is either providing none (the AI fills in blanks with generic assumptions) or providing too much (the AI gets confused about what matters). Aim for the minimum context that eliminates ambiguity.
### E -- Expectation
The Expectation defines what the finished output looks like. Without it, the AI decides on format, length, structure, and quality level -- and its defaults rarely match what you had in mind.
Strong Expectations include:
- Format: Bullet points, numbered list, markdown table, paragraph prose, code block
- Length: Word count, number of items, number of sections
- Structure: "Start with the recommendation, then supporting evidence" or "Use H2 headers for each section"
- Quality signals: "Include specific metrics, not vague claims" or "Each point must have a concrete example"
- What to avoid: "Do not include a summary paragraph" or "Skip the introduction, start directly with the analysis"
Without Expectation:
Role: You are a data analyst.
Action: Analyze this sales data.
Context: Q1 2026 sales for our SaaS product, 3 pricing tiers.
This produces a wall of text with whatever structure the AI feels like using.
With Expectation:
Expectation: Present findings as a markdown table with columns:
Metric | Starter Tier | Pro Tier | Team Tier | Trend.
Follow the table with 3 bullet points: the single most important
insight, the biggest risk, and one recommended action. Keep the
entire response under 200 words.
Now you get a structured, scannable output that you can drop directly into a Slack message or a meeting doc.
## RACE Prompt Examples by Use Case
### Marketing and Content Creation
Role: You are a senior content strategist at a B2B SaaS company
that sells developer tools.
Action: Write a LinkedIn post announcing our new API rate limiting
feature. The post should drive traffic to our blog post about the
feature.
Context: Our audience is engineering managers and senior developers.
The feature allows customers to set custom rate limits per API key
instead of using a global limit. This was the #1 feature request
for 6 months. We use a professional but direct brand voice -- no
corporate jargon, no exclamation marks.
Expectation: 150-200 words. Hook in the first line (no "Excited to
announce"). Include one specific technical detail. End with a CTA
linking to the blog post. Format for LinkedIn (short paragraphs,
line breaks between ideas).
Why this works: The Role establishes B2B SaaS expertise. The Action specifies the exact platform and goal. Context provides audience, feature details, and voice guidelines. Expectation controls length, structure, and what to avoid.
### Software Development
Role: You are a senior backend engineer experienced with Python
FastAPI applications and PostgreSQL optimization.
Action: Review this database query and suggest optimizations.
The query joins 3 tables (users, orders, order_items) and runs
on every page load of our dashboard. Current execution time is
2.3 seconds.
Context: The users table has 500K rows, orders has 2M rows,
order_items has 8M rows. We are on PostgreSQL 16 with 8GB RAM.
There are indexes on primary keys only. The dashboard page gets
roughly 200 requests per minute during peak hours.
Expectation: Provide the optimized query with comments explaining
each change. List the indexes to add. Estimate the expected
performance improvement for each optimization. Format as a
numbered list from highest to lowest impact.
### Data Analysis
Role: You are a business intelligence analyst specializing in
SaaS metrics and cohort analysis.
Action: Analyze the following monthly revenue data and identify
patterns in churn by pricing tier.
Context: We have 3 tiers: Starter ($29/mo, 60% of customers),
Pro ($99/mo, 30%), Enterprise ($299/mo, 10%). Churn has increased
from 4% to 7% over the last quarter. The CEO wants to know if
this is a pricing problem or an onboarding problem. We recently
changed the onboarding flow for Starter tier only.
Expectation: Structure your analysis as: (1) Summary finding in
one sentence, (2) Tier-by-tier breakdown as a table, (3) Root
cause hypothesis with supporting evidence, (4) Two recommended
actions ranked by expected impact. Under 400 words total.
### Education and Lesson Planning
Role: You are an experienced high school physics teacher who uses
active learning techniques and real-world applications to engage
students.
Action: Create a 50-minute lesson plan for teaching Newton's Third
Law to 10th graders.
Context: This is the third lesson in our Forces unit. Students
already understand Newton's First and Second Laws. The class has
28 students with mixed ability levels. I have access to a physics
lab with standard equipment but no digital sensors. Two students
have IEPs requiring visual aids and simplified instructions.
Expectation: Format as a timeline (minute ranges). Include: warm-up
activity (5 min), main instruction (15 min), hands-on activity
(20 min), wrap-up assessment (10 min). For each segment, list the
objective, materials needed, and teacher instructions. Include one
differentiation note for the IEP students per segment.
### Business Strategy
Role: You are a management consultant with expertise in SaaS
go-to-market strategy for early-stage startups.
Action: Evaluate these three pricing strategies for our AI writing
tool and recommend which to launch with.
Context: We are pre-revenue with 500 beta users. Options: (A)
freemium with 10 uses/day free, (B) 14-day free trial then $19/mo,
(C) usage-based at $0.05 per generation. Our target customer is
a solo content creator making $50-150K/year. Competitors charge
$20-40/month for similar tools. We have 8 months of runway.
Expectation: For each option, provide: projected conversion rate
(with reasoning), estimated month-6 MRR, biggest risk, and one
mitigation. Then give your recommendation with a one-paragraph
justification. Present options in a comparison table before the
recommendation.
## RACE vs Other Prompt Frameworks
RACE is not the only structured prompting method, and it is not always the best choice. Here is how it compares to three other popular frameworks. For a comprehensive comparison of all major frameworks, see our prompt engineering frameworks compared guide.
| Dimension | RACE | CO-STAR | RISEN | STOKE |
|---|---|---|---|---|
| Components | 4 (Role, Action, Context, Expectation) | 6 (Context, Objective, Style, Tone, Audience, Response) | 5 (Role, Instructions, Steps, End Goal, Narrowing) | 5 (Situation, Task, Objective, Knowledge, Examples) |
| Setup time | 30 seconds | 1-2 minutes | 2-3 minutes | 2-3 minutes |
| Best for | Quick structured prompts, daily tasks | Content and marketing (tone/audience control) | Multi-step processes, technical workflows | Domain-expert tasks, analytical work |
| Tone control | Indirect (via Context) | Explicit (Style + Tone fields) | None | Indirect (via Examples) |
| Process support | None | None | Explicit (Steps component) | None |
| Few-shot support | None | None | None | Explicit (Examples component) |
| Learning curve | Very low | Low | Medium | Medium |
| Output consistency | Good for simple tasks | High for content | High for processes | High for domain tasks |
When to pick RACE:
- You need a structured prompt in under a minute
- The task is straightforward with a single clear deliverable
- You are teaching prompting to non-technical teammates
- You want a starting point and will upgrade if the output is not specific enough
When to pick CO-STAR instead:
- Voice, tone, and audience targeting are critical (marketing, copywriting)
- You need granular control over how the output sounds
When to pick RISEN instead:
- The task has a clear sequence of steps
- You need the AI to follow a specific process, not just produce an output
When to pick STOKE instead:
- You need to inject domain expertise the AI does not have
- Few-shot examples would significantly improve output quality
- The task requires specialized knowledge (compliance, industry-specific analysis)
The honest answer: RACE handles roughly 70% of daily prompting tasks. For the rest, you will want a heavier framework -- or you will want to borrow specific components from CO-STAR, RISEN, or STOKE and add them to your RACE prompt.
## Advanced RACE Patterns
Once you are comfortable with basic RACE prompts, these patterns push the framework further.
### Combining RACE with Chain-of-Thought Reasoning
Chain-of-thought prompting asks the AI to show its reasoning step by step. You can embed this inside RACE's Action or Expectation component:
Role: You are a senior product manager evaluating feature requests.
Action: Evaluate the following 5 feature requests and rank them by
expected impact on user retention. Think through each feature step
by step: first estimate the percentage of users affected, then
assess implementation complexity, then calculate an impact-to-effort
ratio before ranking.
Context: We are a project management SaaS with 5,000 monthly active
users. Our 90-day retention rate is 62%, which is below the 75%
benchmark for our category. The engineering team has capacity for
one medium feature per sprint (2 weeks).
Expectation: Show your reasoning for each feature in 2-3 sentences
before assigning the rank. Present the final ranking as a table
with columns: Rank | Feature | Users Affected | Effort | Impact
Score.
The chain-of-thought instruction lives in the Action. The Expectation reinforces it by requiring visible reasoning. This combination produces more reliable rankings because the AI cannot skip to conclusions without showing why.
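In code, the chain-of-thought instruction is just an extra clause appended to the Action string. A minimal sketch under that assumption (the helper name and wording are illustrative, not a fixed API):

```python
# Reasoning clause appended to the Action component (wording is illustrative).
COT_SUFFIX = (
    " Think through each item step by step: estimate the users affected, "
    "assess implementation complexity, then compute an impact-to-effort "
    "ratio before ranking."
)

def with_chain_of_thought(action: str) -> str:
    """Append a step-by-step reasoning instruction to the Action component."""
    return action.rstrip(".") + "." + COT_SUFFIX

action = with_chain_of_thought(
    "Evaluate these 5 feature requests and rank them by expected impact on retention"
)
print(action)
```

The same pattern works for any reasoning scaffold: keep the task statement first, then append the "how to think about it" instruction.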
### RACE with Few-Shot Examples
Few-shot prompting means showing the AI examples of what you want. Add examples after your RACE components:
Role: You are a technical writer who converts complex API
documentation into developer-friendly quickstart guides.
Action: Write a quickstart section for our WebSocket API endpoint.
Context: Our users are mid-level developers who know REST APIs
but may not have used WebSockets before. The documentation
should work for both Python and JavaScript developers.
Expectation: Under 300 words. Include one code example per language.
Follow the format shown in these examples:
---
Example (good):
## Connect to the stream
Open a WebSocket connection to receive real-time updates.
```python
import websockets

async with websockets.connect("wss://api.example.com/v1/stream") as ws:
    async for message in ws:
        print(message)
```
One endpoint. One connection. Events arrive as JSON payloads.
Example (bad):
WebSocket API Connection Guide
In this section, we will walk you through the comprehensive process of establishing a WebSocket connection to our powerful real-time streaming API endpoint...
The examples do more work than any instruction could. They show the AI the exact density, tone, and format you expect.
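Appending examples is mechanical once the RACE components are assembled. A rough sketch, assuming you store examples as (label, text) pairs -- the helper and separator are our choices, not a prescribed format:

```python
def add_few_shot(prompt: str, examples: list[tuple[str, str]]) -> str:
    """Append labeled examples after the RACE components, separated by ---."""
    blocks = [prompt, "---"]
    for label, text in examples:
        blocks.append(f"Example ({label}):\n{text}")
    return "\n\n".join(blocks)

race_prompt = (
    "Role: You are a technical writer.\n"
    "Action: Write a quickstart section for our WebSocket API endpoint.\n"
    "Context: Mid-level developers who know REST but not WebSockets.\n"
    "Expectation: Under 300 words. Follow the format shown in these examples:"
)
shot = add_few_shot(race_prompt, [
    ("good", "## Connect to the stream\nOne endpoint. One connection."),
    ("bad", "In this section, we will walk you through the comprehensive process..."),
])
print(shot)
```

Showing a bad example alongside a good one gives the model a contrast to steer away from, which is often more effective than a single positive example.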
### Iterative RACE: Refining Across Turns
RACE does not have to be a one-shot technique. Use it iteratively:
**Turn 1 -- Initial RACE prompt:**
Generate the first output using a full RACE prompt.
**Turn 2 -- Refine the Expectation:**
"The structure is good but the tone is too formal. Rewrite with shorter sentences and remove all passive voice."
**Turn 3 -- Adjust the Context:**
"Add this constraint: the audience has already tried two competing products and is skeptical about switching. Adjust the messaging to address switching costs."
Each turn narrows the output. You keep the Role and Action stable while iterating on Context and Expectation. This is faster than rewriting the entire prompt and gives you fine-grained control.
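If you drive the model through an API, this iteration is just a growing message list where the first turn carries the full RACE prompt and later turns carry only the refinements. A sketch of that shape (in a real conversation, assistant replies would sit between the user turns):

```python
# Turn 1 carries the full RACE prompt; later turns refine only
# Context or Expectation while Role and Action stay stable.
messages = [
    {"role": "user", "content": (
        "Role: You are an email copywriter for a B2B SaaS company.\n"
        "Action: Write an announcement email for our collaboration feature.\n"
        "Context: Customers are small design teams of 5-15 people.\n"
        "Expectation: Under 200 words, one CTA."
    )},
]

def refine(history: list[dict], feedback: str) -> list[dict]:
    """Append a follow-up turn that adjusts Context or Expectation only."""
    history.append({"role": "user", "content": feedback})
    return history

refine(messages, "The structure is good but the tone is too formal. "
                 "Rewrite with shorter sentences and remove all passive voice.")
print(len(messages))
```

Because the Role and Action never change, the model's task stays anchored while each refinement narrows the output.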
### RACE Tuning Tips for Different AI Models
Different models respond to RACE components with varying sensitivity. Based on testing across major LLMs (see our [ChatGPT vs Claude prompting comparison](/blog/chatgpt-vs-claude-vs-gemini) for more detail):
**GPT-4o** follows Role definitions closely -- sometimes too closely. If you assign "senior executive," it may produce overly formal language. Be specific about the communication style in your Context, not just the expertise level in the Role.
**Claude** excels with nuanced Context. It integrates background information naturally and produces outputs that feel like they understood the full situation. Give Claude richer Context and it will reward you.
**Gemini** benefits most from detailed Expectations. It can be verbose by default, so explicit length limits and format specifications in the Expectation component keep outputs focused.
**DeepSeek** handles technical Roles and Actions well but may need more explicit Expectation formatting for non-code tasks. When requesting structured output (tables, lists), be very specific about the format.
## Common RACE Mistakes (And How to Fix Them)
### Mistake 1: Role Too Generic
**Problem:**
Role: You are an AI assistant.
This is the prompt engineering equivalent of writing "Dear Sir or Madam" on a cover letter. It tells the AI nothing it does not already know.
**Fix:** Make the Role specific to the task. Include expertise level, domain, and relevant experience:
Role: You are a conversion rate optimization specialist who has run A/B tests for 50+ SaaS landing pages.
### Mistake 2: Action Buried in Context
**Problem:**
Role: You are a marketing expert.
Action: Our company launched a new feature last week and we want to tell our customers about it and also we need to update the website and maybe write a blog post and an email would be good too.
Context: The feature is real-time collaboration.
Expectation: Make it good.
Three problems: the Action contains multiple tasks (email, blog post, website update) without prioritizing, the Context is too thin, and the Expectation is meaningless. "Make it good" tells the AI nothing about what "good" means.
**Fix:** One Action per prompt. Move background information to Context. Define "good" in the Expectation:
Action: Write an announcement email for our new real-time collaboration feature.
Context: The feature launched last week. It allows multiple users to edit the same document simultaneously with cursor tracking. Our customers are small design teams (5-15 people). They have been requesting this feature for 8 months.
Expectation: Subject line under 50 characters. Email body under 200 words. Lead with the benefit (work together without version conflicts), not the feature name. Include one specific detail about how it works. End with a CTA to try it.
### Mistake 3: Missing Expectations
**Problem:**
Role: You are a data scientist. Action: Analyze this customer churn data. Context: SaaS company, 10K customers, 3 pricing tiers.
Without Expectations, the AI produces whatever format it defaults to -- usually a long, unstructured analysis that buries the key findings in paragraphs of hedging language.
**Fix:** Always define the output shape:
Expectation: Executive summary (3 sentences max), then a markdown table of churn rate by tier by month for Q4, then 2 actionable recommendations with estimated revenue impact. Total under 300 words.
### Mistake 4: Context Overload
**Problem:** Dumping your entire company wiki into the Context field. When the AI gets 2,000 words of context for a task that needs 200, it struggles to identify what matters.
**Fix:** Apply the relevance test. For each piece of context, ask: "Would the output be different without this?" If not, remove it. Context should be the minimum information needed to eliminate ambiguity -- not everything you know about the topic.
A good rule of thumb: your Context should be roughly the same length as your expected output, or shorter. If you are writing a 200-word email, you probably need 100-200 words of context, not 1,000.
## RACE Prompt Templates (Copy-Paste Ready)
Use these templates as starting points. Replace the bracketed placeholders with your specifics.
### Template 1: Blog Post Brief
Role: You are a content strategist specializing in [industry] with expertise in SEO-driven blog content.
Action: Write a [word count]-word blog post about [topic]. The primary keyword is [keyword]. The post should [specific angle or argument].
Context: Target audience is [audience description and knowledge level]. The post will be published on [platform/site]. Our brand voice is [2-3 adjectives]. Competitors have covered this topic by [what competitors focus on] -- we need to differentiate by [your unique angle].
Expectation: Structure with H2 and H3 headers. Include an introduction (no more than 100 words), [number] main sections, and a conclusion with a specific CTA. Use short paragraphs (3 sentences max). Include [number] practical examples. The meta description should be under 155 characters.
### Template 2: Code Review Request
Role: You are a senior [language/framework] developer with expertise in [specific area: performance, security, architecture].
Action: Review the following code and identify issues related to [focus area]. The code is [brief description of what it does].
Context: This is part of a [type of application] that handles [scale: requests per second, data volume]. We follow [coding standard or style guide]. The code will be reviewed by [junior/senior] developers. Known constraints: [runtime, memory, etc.].
Expectation: List issues in priority order (critical, high, medium). For each issue: describe the problem in one sentence, show the problematic code snippet, provide the fixed version, and explain why the fix matters. If no critical issues exist, state that explicitly before listing improvements.
### Template 3: Email Draft
Role: You are an email copywriter for a [B2B/B2C] [industry] company. You write emails that get opened and clicked.
Action: Write a [type: announcement/nurture/re-engagement/follow-up] email about [topic/offer].
Context: Sending to [audience description and list size]. The recipient's relationship with us: [new lead/existing customer/churned user]. Previous email in this sequence was about [topic] and had [X]% open rate. Our brand voice is [description]. [Any compliance requirements: CAN-SPAM, GDPR, etc.]
Expectation: Subject line (under 50 characters, no spam trigger words) + preview text (under 90 characters) + body (under [word count] words). One primary CTA. Use short paragraphs. [Specific format: plain text / HTML-friendly]. Start with [hook type: question/statistic/pain point], not "Hi [Name], I hope..."
### Template 4: Data Analysis Task
Role: You are a [business/data/financial] analyst with experience in [domain: SaaS metrics, e-commerce, marketing analytics].
Action: Analyze the following [data type] and [specific analysis: identify trends, find anomalies, compare segments, forecast].
Context: The data covers [time period] for [business description]. Key business context: [what is happening -- growth, decline, new initiative, market change]. The analysis is for [audience: CEO, board, engineering team] who will use it to [decision they need to make]. [Any known data quality issues or caveats.]
Expectation: Start with the single most important finding (one sentence). Then present [number] supporting insights. Use a [table/chart description/bullet list] for the data. End with [number] recommended actions, each with expected impact and confidence level. Total under [word count] words.
### Template 5: Meeting Summary
Role: You are an executive assistant who writes concise, actionable meeting notes that busy leaders actually read.
Action: Summarize the following meeting transcript into structured notes.
Context: This was a [meeting type: sprint planning, board meeting, client call, 1:1] lasting [duration] with [number] participants. The key stakeholders who will read this are [names/roles]. Follow-up actions need to be tracked in [tool: Jira, Asana, email].
Expectation: Format as: (1) One-sentence meeting summary, (2) Key decisions made (bullet list), (3) Action items as a table with columns: Action | Owner | Deadline | Priority, (4) Open questions or parking lot items. Omit small talk and tangential discussions. Keep under [word count] words.
## FAQ
### What does RACE stand for in prompt engineering?
RACE stands for Role, Action, Context, and Expectation. These are four components for structuring prompts to AI models. Role defines who the AI should act as. Action specifies the task. Context provides background information and constraints. Expectation describes the desired output format and quality. The framework was designed to be simple enough to memorize and fast enough to apply to every prompt.
### How do I use the RACE framework with ChatGPT?
Open ChatGPT (or any AI model) and structure your prompt with four labeled sections. Start with Role: define the expertise the AI should bring. Then Action: state exactly what you need done with a specific verb. Add Context: include audience, background, constraints, and tone relevant to the task. End with Expectation: specify the format, length, structure, and quality criteria for the output. Label each section explicitly so the AI can parse the structure. For more on structuring prompts effectively, see our guide on [how to write better AI prompts](/blog/how-to-write-better-ai-prompts).
### What is the difference between RACE and CO-STAR?
RACE has 4 components focused on task clarity: Role, Action, Context, and Expectation. CO-STAR has 6 components -- Context, Objective, Style, Tone, Audience, and Response -- adding explicit control over voice and audience targeting. Use RACE when you need speed and the task is straightforward. Use CO-STAR when how the output sounds matters as much as what it says, particularly for marketing copy, brand content, and audience-specific communication. See our [full frameworks comparison](/blog/prompt-engineering-frameworks-compared) for a detailed breakdown.
### Is RACE good for beginners?
Yes. RACE is the most commonly recommended starter framework for prompt engineering because it has only four components and maps naturally to how people already think about assigning tasks. Most people instinctively include an Action ("write an email") but forget Role, Context, and Expectation. RACE makes the missing pieces obvious. Once you are comfortable with RACE, you can graduate to more detailed frameworks like [STOKE](/blog/stoke-framework-explained) or CO-STAR for complex tasks.
### Can I combine RACE with other frameworks?
Yes. RACE pairs well with [chain-of-thought prompting](/blog/chain-of-thought-prompting) for complex reasoning tasks -- add "think through this step by step" to the Action component. It works with [few-shot prompting](/blog/few-shot-prompting) by appending examples after the Expectation. You can also borrow specific components from other frameworks: add STOKE's Knowledge section when domain expertise matters, or add RISEN's Steps component when the task has a clear sequence. Frameworks are tools, not religions -- use whatever combination gives the AI the information it needs.
### Does RACE work with all AI models?
RACE works with GPT-4o, Claude, Gemini, DeepSeek, Llama, and any instruction-following language model. The principle behind RACE -- providing structured, complete information -- is model-agnostic. That said, each model responds to components slightly differently. Claude integrates Context particularly well. GPT-4o follows Role definitions closely. Gemini benefits from detailed Expectations. The framework itself does not need to change, but you may want to emphasize different components depending on the model.
---
## Ready to Try RACE?
Promplify applies the RACE framework automatically. Paste your prompt, select RACE, and get a structured version in seconds -- with each component optimized for the model you are targeting.
No manual labeling. No guessing which context to include. The optimizer analyzes your prompt, identifies what is missing, and fills in Role, Action, Context, and Expectation where they will have the most impact.
[Try RACE in Promplify](/optimize)