Functional testing verifies that a new or changed feature behaves according to its specified requirements. Regression testing verifies that existing functionality still works correctly after code changes have been made. Both are essential — and understanding when to run each is one of the most practical skills in QA engineering.
What Is Functional Testing?
Functional testing answers the question: does this feature work the way it was designed to work? It focuses on new behavior — a newly built screen, a newly integrated API endpoint, a newly added user flow. Functional test cases are derived directly from requirements, user stories, or acceptance criteria. They cover happy paths (the intended successful flow), negative paths (invalid inputs, error states), and boundary conditions (edge cases at the limits of valid input).
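As a concrete sketch, suppose a requirement states that usernames must be 3 to 20 characters long (the validator and the limits here are illustrative assumptions, not from a real spec). Functional test cases derived from that requirement would cover all three categories:

```python
# Hypothetical requirement: usernames must be 3-20 characters (assumed spec).
def is_valid_username(name: str) -> bool:
    """Stand-in for the feature under test."""
    return 3 <= len(name) <= 20

# Happy path: a typical valid input succeeds.
assert is_valid_username("runner42")

# Negative paths: invalid inputs are rejected.
assert not is_valid_username("")          # empty input
assert not is_valid_username("ab")        # below the minimum
assert not is_valid_username("x" * 21)    # above the maximum

# Boundary conditions: the exact limits of valid input.
assert is_valid_username("abc")           # exactly 3 characters
assert is_valid_username("x" * 20)        # exactly 20 characters

print("functional checks passed")
```

Note how each assertion traces back to a clause in the requirement; that traceability is what makes a functional test case reviewable.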
In practice, functional testing happens as soon as a feature is ready for QA — typically when a build is promoted to the QA environment in a sprint. On iOS and Android, this often means testing the same feature on multiple OS versions and device form factors, since behavior can differ significantly. A feature that passes functional testing on iOS 17 may behave differently on iOS 15, particularly around permissions, gestures, or background processing.
What Is Regression Testing?
Regression testing answers the question: did the new code changes break anything that was already working? Every code change — even a small bug fix — carries the risk of unintended side effects. Regression testing is the safety net that catches those effects before they reach production.
A regression suite is typically a curated set of test cases that covers the most critical existing functionality of the product. It is not meant to be exhaustive of all possible test scenarios — that would take too long to run on every build. It is meant to be representative: hitting the core user journeys and high-risk areas where a regression is most likely to have impact.
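One way to keep a suite representative rather than exhaustive is to tag every case with a priority and curate by threshold. A minimal sketch, with illustrative case names and priorities that are assumptions rather than a real inventory:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    priority: int   # 1 = critical user journey, 3 = low-risk edge case
    area: str

# Full test inventory (names and areas are illustrative).
inventory = [
    TestCase("login_with_valid_credentials", 1, "auth"),
    TestCase("start_and_save_a_run", 1, "workouts"),
    TestCase("sync_history_after_reconnect", 2, "sync"),
    TestCase("profile_bio_emoji_rendering", 3, "profile"),
]

def regression_suite(cases, max_priority=2):
    """Curate the regression suite: keep only the cases covering
    critical journeys and high-risk areas, not every scenario."""
    return [c for c in cases if c.priority <= max_priority]

suite = regression_suite(inventory)
print([c.name for c in suite])
```

The low-priority edge case stays in the inventory for occasional deep passes but is excluded from the per-build suite.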
Key Differences at a Glance
- Purpose: Functional testing validates new behavior; regression testing guards existing behavior
- When it runs: Functional testing runs when a new feature is ready; regression runs after any code change that could affect existing functionality
- Test case source: Functional tests come from requirements; regression tests come from a maintained suite of previously passing scenarios
- Scope: Functional testing is narrow (this feature); regression is broad (the whole product)
- Automation suitability: Both benefit from automation, but regression is the higher-priority candidate because it runs most frequently
Where They Overlap
Once a new feature has been functionally tested and released, its test cases typically get promoted into the regression suite. This is how a regression suite grows over time — it is the accumulated record of "things that work and must keep working." This overlap is worth understanding because it means the quality of your functional test cases directly determines the quality of your future regression coverage.
Poorly written functional test cases — vague steps, unclear expected results, missing negative paths — become poorly written regression tests. The investment in writing precise functional test cases pays dividends every sprint thereafter.
When to Use Each in an Agile Sprint
In a typical two-week sprint on a mobile application, the cadence looks roughly like this:
- Sprint start: QA reviews requirements and writes functional test cases for stories in the sprint backlog
- Mid-sprint: As features become code-complete, functional testing begins in the QA environment
- End of sprint: A regression pass is run against the release candidate build — either manually against critical paths or via an automated suite triggered by the CI pipeline
- Release gate: Both functional test results and regression results are reviewed before the build is approved for release
Regression does not wait until the end of a sprint to begin. On a product with active CI/CD, a subset of regression tests — typically the smoke test suite — runs on every build. A deeper regression pass runs on release candidates. This layered approach catches regressions as close to the code change as possible.
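The layered approach can be expressed as a selection rule keyed on build type. A minimal sketch, assuming each case is tagged with the shallowest layer it belongs to (the tags and names are illustrative):

```python
# Each case is tagged with the shallowest suite it belongs to (assumed tags).
SUITES = {
    "app_launches_and_shows_home": "smoke",
    "login_with_valid_credentials": "smoke",
    "start_and_save_a_run": "regression",
    "sync_history_after_reconnect": "regression",
}

def select_for_build(build_type: str):
    """Smoke tests run on every build; release candidates get the full
    regression pass (smoke cases included)."""
    if build_type == "release_candidate":
        return list(SUITES)   # deep pass: everything
    return [name for name, suite in SUITES.items() if suite == "smoke"]

print(select_for_build("ci"))                  # smoke subset only
print(select_for_build("release_candidate"))   # full regression pass
```

In practice this selection is usually expressed as test markers or tags filtered by the CI pipeline, but the logic is the same: the closer a build is to release, the deeper the pass.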
Automation and Regression Testing
Regression testing is the strongest candidate for test automation in most QA organizations, and for a clear reason: regression suites need to run frequently, consistently, and against many builds. Running a 200-case regression suite manually on every build is not scalable. Automating even 60-70% of that suite — focusing on the highest-priority, most stable test cases — frees QA engineers to spend their manual testing time on exploratory work, edge cases, and new features.
Regression automation is not about replacing QA engineers — it is about redirecting their time from repetitive verification toward the kind of creative, risk-based thinking that automation cannot replicate.
At Nike, the Nike Run Club app ran across iOS, Android, watchOS, and Wear OS simultaneously, and a fully manual regression pass on every platform would have taken weeks. Strategic automation of core workout flows, authentication, and data sync scenarios made it possible to run a meaningful regression pass in hours, not days. The remaining manual regression effort focused on platform-specific behaviors, hardware sensor interactions, and UX scenarios where a human eye was irreplaceable.
Know which type of testing you are doing at any given moment, and design your test cases accordingly. The distinction is not academic — it shapes how you write tests, how you prioritize your time, and how you communicate coverage to your team.