How to Write a QA Test Plan That Gets Used

A QA test plan is a document that defines the scope, approach, resources, and schedule for a testing effort on a specific product or release. Most QA teams have written at least one test plan that collected dust — a 40-page document nobody read after the kickoff meeting. This post explains what actually belongs in a test plan and how to write one that stays useful to real teams under real deadline pressure.

What Is a QA Test Plan?

A test plan is not a collection of test cases, and it is not a test strategy document. A test strategy is a high-level organizational standard — it defines how the company approaches testing across all products. A test plan is release-specific. It answers: what are we testing for this release, with these people, on this schedule, and under these constraints?

The best test plans I have worked with at Nike were concise enough to be read in ten minutes, specific enough to eliminate ambiguity about who owns what, and honest enough to name the risks that might cause the release to slip.

The 7 Key Sections of a QA Test Plan

1. Scope

Define clearly what is in scope and — just as importantly — what is explicitly out of scope. For a Nike Run Club release covering a new GPS workout feature, the scope might include all iOS and Android workout flow functionality, GPS data accuracy, and the companion watch app. Out of scope might include the social feed, settings, and in-app purchases unless they were modified. Scope boundaries prevent the creeping "can you also test X while you're at it?" that expands timelines without expanding resources.

2. Test Approach

Describe the testing methodologies you will use: functional testing, regression, exploratory, performance, accessibility, etc. Specify which platforms, OS versions, and device types are covered. If you're running automated regression via a CI pipeline, say so here and name the framework. This section tells engineers and product managers how QA will operate, not just what it will cover.
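If your approach section names an automation framework, it helps to also name how the CI pipeline selects which tests run at each stage. A minimal sketch, assuming a Python project using pytest (a hypothetical choice — substitute whatever framework your team actually uses; the feature function is a toy stand-in):

```python
import pytest

def start_workout(gps_available: bool) -> str:
    """Toy stand-in for the feature under test (hypothetical)."""
    return "recording" if gps_available else "error"

@pytest.mark.smoke
def test_workout_starts_with_gps():
    # Fast sanity check: runs on every CI build
    assert start_workout(gps_available=True) == "recording"

@pytest.mark.regression
def test_workout_fails_without_gps():
    # Broader case: runs in the nightly full regression pass
    assert start_workout(gps_available=False) == "error"

# In CI, each stage selects tests by marker, e.g.:
#   pytest -m smoke        # per-commit smoke suite
#   pytest -m regression   # nightly full regression
```

Stating the selection mechanism in the plan ("smoke runs per commit, full regression runs nightly") tells engineers exactly what coverage each pipeline stage gives them.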

3. Resources

List the people responsible for testing and their specific areas. A test plan with no ownership is a test plan with no accountability. If integration testing between the app and the backend API is owned by a specific QA engineer, name them. If a feature requires a physical device that only one person has access to, flag it here before it becomes a blocker.

4. Schedule

Map the testing phases to the release timeline. When does smoke testing start? When does full regression begin? When is the final sign-off deadline? Working on multi-platform releases at Nike taught me that the schedule section is often the first thing stakeholders read — they want to know when QA will be done, not how. Give them that information clearly, with dates tied to real sprint events.

5. Entry and Exit Criteria

Entry criteria define what must be true before testing begins (e.g., "build passes smoke test," "all P1 features are code-complete"). Exit criteria define what must be true before testing is considered complete (e.g., "all P1 and P2 test cases executed," "no open Critical or High severity defects," "the requirements traceability matrix (RTM) shows 95%+ coverage"). Without these, "testing is done" becomes a subjective call that gets made differently every release.
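Well-written exit criteria are mechanical: given the current release numbers, anyone should reach the same yes/no answer. A minimal sketch in Python using the example thresholds above (the field names are illustrative, not a standard):

```python
from dataclasses import dataclass

@dataclass
class ReleaseStatus:
    p1_p2_cases_total: int
    p1_p2_cases_executed: int
    open_critical_or_high_defects: int
    rtm_coverage_pct: float  # requirements traceability matrix coverage

def exit_criteria_met(s: ReleaseStatus) -> bool:
    """True only when every exit criterion from the test plan holds."""
    return (
        s.p1_p2_cases_executed == s.p1_p2_cases_total
        and s.open_critical_or_high_defects == 0
        and s.rtm_coverage_pct >= 95.0
    )

# One open High-severity defect blocks sign-off, even at full execution:
status = ReleaseStatus(120, 120, 1, 97.5)
print(exit_criteria_met(status))  # False
```

If a criterion can't be reduced to a check like this, it probably isn't an exit criterion — it's an opinion, and it will be argued about at sign-off time.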

6. Deliverables

List the artifacts QA will produce: the RTM, a test execution report, a defect summary, and a final release recommendation. This section sets stakeholder expectations. If your organization expects a written sign-off memo before the release build goes to the app store, put it here so it doesn't get forgotten in the sprint crunch.

7. Risks and Mitigations

This is the section most QA engineers write too vaguely or skip entirely. Be specific. "Risk: API endpoint for new workout type is not stable in QA environment. Mitigation: QA will test against staging environment; backend team has committed to a freeze by [date]." Naming risks explicitly protects QA from being blamed for delays caused by factors outside their control, and it gives leadership the information they need to make resource decisions.

Test Plan vs. Test Case: Know the Difference

A test plan does not contain test cases. Test cases live in your test management tool — Jira with Xray, TestRail, Zephyr, or a spreadsheet. The test plan references the test suite or test cycle but does not enumerate individual scenarios. Conflating the two is one of the most common mistakes I see in teams that are new to formal QA documentation. The test plan says "we will execute the workout GPS regression suite." The test cases in that suite say exactly what steps to follow and what the expected result is.
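The difference is easy to see side by side. The plan carries one line ("execute the workout GPS regression suite"); each case in that suite carries the steps and the expected result. A hypothetical case, sketched as a plain data record (the ID and field names are illustrative):

```python
test_case = {
    "id": "NRC-GPS-042",                    # hypothetical case ID
    "suite": "workout GPS regression",      # what the test plan references
    "steps": [
        "Open the app and tap Start Run",
        "Walk 400 m along a known route",
        "Tap End Run and open the workout summary",
    ],
    "expected_result": "Recorded distance is within 5% of 400 m",
}

# The test plan never lists this detail; it lives in the test management tool.
print(f"{test_case['id']}: {len(test_case['steps'])} steps")
```

Keeping this level of detail out of the plan is what lets the plan stay short enough to be read, while the suite stays detailed enough to be executed.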

Common Pitfalls to Avoid

  • Writing for completeness instead of clarity. A 5-page test plan that people actually read beats a 40-page document that no one opens after approval. Cut everything that doesn't inform a decision.
  • No named owner for each section. If a section has no owner, it has no accountability. Assign names, not team names.
  • Treating the test plan as a one-time artifact. Update it when scope changes. A test plan that reflects the plan from three sprints ago is worse than no test plan at all — it creates a false sense of coverage.
  • Skipping the out-of-scope section. The most valuable thing you can write in a test plan is sometimes a clear statement of what you are not doing and why.

A test plan that gets used is one that was written for the people who have to execute it — not for the archive. Keep it specific, keep it honest, and update it when reality changes.

The discipline of writing a clear, concise, and accurate test plan is a career differentiator for QA engineers. It signals that you understand the business context of your work, not just the technical mechanics of running test cases. Start with these seven sections, trim aggressively, and make sure every sentence earns its place.