QA Process
Mar 13, 2026

How to Write a Test Plan From a PRD (With Template)

A repeatable five-step method for translating any PRD into a structured, executable test plan, including a free Google Sheets template and where AI can take the grunt work off your plate.

8 min read

You've got a fresh PRD in your hands. The feature is scoped, the designs are ready, and the sprint is about to kick off. Now someone needs to turn that document into a test plan, and that someone is you.

If you've ever stared at a PRD and wondered, "Where do I even start testing this?", you're not alone. Writing test plans from product requirements is one of those tasks that sounds straightforward but eats an hour before you've even opened Jira.

This guide walks you through a repeatable process for translating any PRD into a structured, executable test plan. We'll cover the method, share a template you can steal, and show you where AI can take the grunt work off your plate.

Why your test plan should start with the PRD

A test plan that isn't grounded in the product requirements is just a list of things you hope work. The PRD tells you what the feature is supposed to do, who it's for, and what "done" looks like. Your test plan's job is to verify all of that.

Starting from the PRD also means your test coverage maps directly to stakeholder expectations. When a PM asks "did we test the edge case where a user has no payment method?", you can point to a specific charter rather than shrugging.

Here's the core principle: every acceptance criterion in the PRD should have at least one corresponding test case. If it doesn't, either the PRD is missing something or your test plan is.
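That criterion-to-test-case mapping is easy to check mechanically. Here is a minimal sketch of a traceability check; the `AC-*` and `TC-*` identifiers are hypothetical, and in practice this data usually comes from a test management tool or a spreadsheet column:

```python
# Acceptance criteria from the PRD (hypothetical IDs).
criteria = {"AC-1", "AC-2", "AC-3"}

# Each test case lists the criteria it verifies.
test_cases = {
    "TC-101": {"AC-1"},
    "TC-102": {"AC-1", "AC-3"},
}

# Any criterion not referenced by a test case is a coverage gap.
covered = set().union(*test_cases.values())
uncovered = criteria - covered
print("Uncovered criteria:", sorted(uncovered))  # AC-2 has no test case
```

If `uncovered` is non-empty, either the test plan needs another case or the PRD criterion was cut and should be removed.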

Step 1: Read the PRD for testable claims

Before you write a single test case, read the PRD with a tester's eye. You're not just reading to understand the feature; you're reading to find every claim that can be verified.

Look for:

  • Acceptance criteria (the obvious ones)
  • User flows described in the narrative sections
  • Edge cases mentioned in passing ("users with existing accounts should see...")
  • Assumptions stated or implied ("assumes the user is signed in")
  • Non-functional requirements like performance or accessibility

Highlight or copy these into a working document. Each one is a seed for a test charter.

A practical tip: PRDs often describe the happy path in detail and bury the edge cases in a bullet list at the bottom. Don't skip that bullet list. Those edge cases are where bugs live.
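A first pass over a text export of the PRD can even be scripted. This is a rough sketch, not a substitute for careful reading: the marker list and the `extract_claims` helper are illustrative, and a naive sentence split will miss claims buried in tables or bullets.

```python
import re

# Modal verbs and markers that often signal a testable claim in a PRD.
CLAIM_MARKERS = re.compile(
    r"\b(must|should|shall|will|assumes?|expects?)\b", re.IGNORECASE
)

def extract_claims(prd_text: str) -> list[str]:
    """Return sentences that look like verifiable claims."""
    # Naive sentence split; good enough for a first pass.
    sentences = re.split(r"(?<=[.!?])\s+", prd_text)
    return [s.strip() for s in sentences if CLAIM_MARKERS.search(s)]

prd = (
    "New users can sign up with an email address. "
    "The flow assumes the user is not already signed in. "
    "Users with existing accounts should see a sign-in prompt instead."
)
for claim in extract_claims(prd):
    print("-", claim)
```

The output here would flag the "assumes" and "should" sentences, which is exactly the kind of in-passing claim that's easy to skim past.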

Step 2: Group claims into test charters

A test charter is a focused area of testing with a clear mission. Think of it as a theme: "Payment flow for new users" or "Error handling when the API is down."

Take your list of testable claims and group them by feature area or user flow. Each group becomes a charter. A mid-size feature typically produces five to eight charters; a complex one might need 12–15.

For each charter, write:

  • Charter name: A short, descriptive title
  • Objective: What you're verifying (one sentence)
  • Steps: Numbered actions the tester should take
  • Expected outcomes: What should happen at each step if the feature works correctly

Charter: User sign-up with email

Objective: Verify that a new user can create an account using their email address and land on the onboarding screen.

Steps:

  1. Navigate to the sign-up page
  2. Enter a valid email address and submit
  3. Check the inbox for the verification email
  4. Click the verification link

Expected outcomes:

  • A verification email arrives shortly after submitting
  • The verification link redirects to the onboarding screen with the user's email pre-filled

Notice how the expected outcomes are specific. "It works" isn't an expected outcome. "The onboarding screen displays with the user's email pre-filled" is.
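If you keep charters in a script or tool rather than a document, the structure above maps to a small record type. This `Charter` class is an illustrative sketch, not a real library:

```python
from dataclasses import dataclass, field

@dataclass
class Charter:
    """One focused area of testing: a name, a mission, and verifiable steps."""
    name: str
    objective: str
    steps: list[str]
    expected_outcomes: list[str]
    platforms: list[str] = field(default_factory=list)  # filled in at Step 4

signup = Charter(
    name="User sign-up with email",
    objective=(
        "Verify that a new user can create an account using their "
        "email address and land on the onboarding screen."
    ),
    steps=[
        "Navigate to the sign-up page",
        "Enter a valid email address and submit",
        "Check the inbox for the verification email",
        "Click the verification link",
    ],
    expected_outcomes=[
        "Onboarding screen displays with the user's email pre-filled",
    ],
)
```

Keeping expected outcomes as their own field, rather than folding them into the steps, makes it obvious when a charter has steps but no verifiable outcome.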

Step 3: Add coverage for edge cases and negative paths

Your charters so far probably cover the happy path, the flow where everything goes right. Now add the flows where things go wrong.

For each charter, ask:

  • What happens if the user enters invalid data?
  • What if the network drops mid-action?
  • What if the user is on a slow connection or a mobile device?
  • What if they navigate away and come back?
  • What if they've already completed this flow before?

These negative-path cases are often where the highest-severity bugs hide. Platform differences matter too: a feature that works perfectly in Chrome on a MacBook might fall apart in Safari on an iPhone, so cover the platforms your users actually use.

Step 4: Map charters to platforms

If your team tests across multiple browsers or devices, each charter might need to be executed more than once. Rather than duplicating the entire plan, note which platforms each charter needs to be run on. This prevents the "we only tested on Chrome" problem that haunts release retrospectives.
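One lightweight way to track this is a charter-to-platform map that expands into a run list, where each pair is one execution of a charter. The charter names and platform labels below are hypothetical:

```python
# Which platforms each charter needs to be executed on.
coverage = {
    "User sign-up with email": ["Chrome/macOS", "Safari/iOS"],
    "Error handling when the API is down": ["Chrome/macOS"],
}

# Expand into one (charter, platform) pair per required execution.
run_list = [
    (charter, platform)
    for charter, platforms in coverage.items()
    for platform in platforms
]
for charter, platform in run_list:
    print(f"{charter} -> {platform}")
```

Two charters across these platforms yield three executions, and any charter whose platform list is empty stands out immediately as untested.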

Step 5: Review the plan with your team

Before anyone starts testing, walk through the plan with the PM who wrote the PRD and at least one developer who built the feature. This review catches two things:

  1. Missing coverage: The PM might point out a flow you missed. The developer might flag a tricky implementation detail that deserves its own charter.
  2. Stale requirements: PRDs evolve during development. A flow that was in scope at spec time might have been cut, or a new one might have been added.

This review typically takes 15–20 minutes and can save hours of wasted testing effort.

The test plan template

We put together a free Google Sheets template with the full structure: feature metadata at the top, then a block for each charter with its objective, numbered steps, expected outcomes, platform, and priority. Make a copy and fill in one block per charter.

Free Test Plan Template — Google Sheets

Charter-based structure with steps, expected outcomes, and platform tracking

If writing out five to eight charters by hand (or 12–15 for a complex feature) sounds like a lot, that's because it is. Which brings us to where the process tends to break down.

Where this process breaks down (and how to fix it)

The method above works, but it has two friction points that compound over time.

The time problem. Writing a thorough test plan from a PRD takes 30–60 minutes for a mid-size feature. For a team shipping weekly, that's a significant chunk of your QA capacity spent on writing about testing rather than actually testing.

The context problem. If your team uses Figma for designs, the PRD alone doesn't capture everything. Button labels, navigation flows, modal states, error message copy: all of that lives in the design file, and your test plan should reference it.

This is where AI-assisted test generation starts to make sense. Preflight lets you paste a PRD (and optionally connect a Figma file), then generate a full set of test charters, with numbered steps and expected outcomes, in under two minutes. The AI reads both the requirements and the actual UI, so the generated charters reference real button labels and screen layouts rather than generic placeholders.

The bottom line

A good test plan is a translation exercise: take what the PRD says should happen and turn it into specific, verifiable steps. Start from the requirements, group into charters, cover the edge cases, map to platforms, and review with your team.

Whether you write your test plans by hand or use AI to generate the first draft, the structure matters more than the method. A well-organized plan with clear expected outcomes will catch more bugs than a brilliant but disorganized one every time.

Clear Your Next Release for Takeoff

Don't launch on a wing and a prayer. Replace manual docs with an organized workflow that catches bugs before your customers do.