Team
Feb 27, 2026

How to Do QA Without a Dedicated QA Team

A lightweight QA framework for startup teams of 3–15 people: what to test, who tests it, how to track it, and the signals that tell you it's time to hire your first QA engineer.

6 min read

Here's a reality most startup advice doesn't prepare you for: your first QA hire is probably your sixth or seventh hire. Until then, QA is everyone's job and nobody's specialty.

That doesn't mean it's optional. Every team that ships software has a QA process. It's just that some processes are "we click around for 10 minutes before deploying" and others are actually structured enough to catch bugs before users do.

If your team doesn't have a dedicated QA engineer (yet), this guide is for you. We'll walk through how to build a lightweight QA process that works when developers, PMs, and designers are the ones doing the testing.

Why "we'll just write more automated tests" isn't enough

Let's get this out of the way: automated tests are essential, but they don't replace manual testing. They're different jobs.

Automated tests verify that the code does what the developer intended. Manual testing verifies that the product does what the user expects. Those aren't the same thing.

Your unit tests won't catch that the signup button is invisible on mobile Safari. Your integration tests won't flag that the onboarding flow makes no sense to someone who isn't a developer. Your end-to-end tests won't notice that the error message says "null" instead of something helpful.

Manual testing is how you see your product through your users' eyes. Even five minutes of deliberate, structured manual testing catches things that 500 automated tests miss.

The lightweight QA framework

Here's a framework that works for teams of 3 to 15 people without a QA specialist. It has three parts: what to test, who tests it, and how to track it.

Part 1: Decide what to test (and what to skip)

You can't test everything. You don't have the headcount. So prioritize ruthlessly:

  • Always test: New features before they ship, anything touching payments or authentication, flows that have broken before
  • Test when you can: UI changes, performance-sensitive pages, integrations with third-party services
  • Skip for now: Internal tools, admin panels, features behind a feature flag that aren't yet visible to users

Part 2: Assign testing to the right people

The worst version of "everyone does QA" is when nobody actually does it because everyone assumes someone else will. Here's how to make it concrete:

The developer who didn't build it tests it. The person who wrote the code is the worst person to test it, because they'll follow the same mental model they coded against. Pair up: developer A tests developer B's feature, and vice versa.

The PM tests the user flow. PMs are closest to the user's mental model. Have them walk through the feature as if they're a customer seeing it for the first time.

The designer tests the design fidelity. Designers should compare the implementation against the mockups. Spacing, colors, copy, responsive behavior: these details matter and developers often miss them.
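The "developer A tests developer B" pairing generalizes to any team size as a simple rotation: each developer's feature is tested by the next person in the list, so nobody ever tests their own code. A minimal sketch, assuming you just keep a list of developer names:

```python
def cross_test_assignments(developers: list[str]) -> dict[str, str]:
    """Map each developer to the teammate who tests their work.

    Round-robin: developer i's feature is tested by developer i+1
    (wrapping around), so nobody ever tests their own code.
    """
    n = len(developers)
    if n < 2:
        raise ValueError("need at least two developers to cross-test")
    return {developers[i]: developers[(i + 1) % n] for i in range(n)}
```

Rotating the list order each sprint keeps any one pairing from going stale.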

Part 3: Track your testing (even informally)

The minimum viable tracking system has three elements:

  1. A list of what needs to be tested (your test charters or test cases)
  2. A status for each item (passed, failed, not yet tested)
  3. A place for bug details (what happened, steps to reproduce, screenshot)

What you don't want is testing that happens entirely in someone's head. If a developer says "yeah, I tested it," but there's no record of what they tested or what they found, that's not QA. That's just hope.
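The three tracking elements fit in a spreadsheet row, but as a sketch of the minimum data model (names and structure here are assumptions, not a prescribed tool):

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    NOT_TESTED = "not yet tested"
    PASSED = "passed"
    FAILED = "failed"

@dataclass
class TestItem:
    charter: str                        # 1. what needs to be tested
    status: Status = Status.NOT_TESTED  # 2. passed / failed / not yet tested
    bug_notes: str = ""                 # 3. what happened, repro steps, screenshot link

def release_summary(items: list[TestItem]) -> dict[str, int]:
    """Answer 'what did we test, what passed, what failed?' at a glance."""
    summary = {s.value: 0 for s in Status}
    for item in items:
        summary[item.status.value] += 1
    return summary
```

Anything that captures these three fields per item, whether a doc, a spreadsheet, or a tool, clears the bar.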

Tools that make this easier

You don't need an enterprise QA platform to do this well.

For test planning: A shared document or template. If you want to save time, Preflight can generate structured test plans from your PRD and Figma designs using AI, which is useful when nobody on the team has the time (or inclination) to write test cases from scratch.

For bug tracking: Whatever your team already uses. Linear, Jira, GitHub Issues. Pick one and use it consistently. The key is that every bug has steps to reproduce, expected vs. actual behavior, and a screenshot.
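One way to enforce those required fields is a tiny template helper that refuses incomplete reports. A hypothetical sketch, not tied to any particular tracker; the field names are assumptions:

```python
# Required fields per the bug-reporting guidance above; names are illustrative.
REQUIRED_FIELDS = ("steps_to_reproduce", "expected", "actual", "screenshot_url")

def format_bug_ticket(title: str, **fields: str) -> str:
    """Render a bug report, rejecting any that omit a required field."""
    missing = [f for f in REQUIRED_FIELDS if not fields.get(f)]
    if missing:
        raise ValueError(f"bug report missing: {', '.join(missing)}")
    return (
        f"## {title}\n"
        f"**Steps to reproduce:** {fields['steps_to_reproduce']}\n"
        f"**Expected:** {fields['expected']}\n"
        f"**Actual:** {fields['actual']}\n"
        f"**Screenshot:** {fields['screenshot_url']}\n"
    )
```

Most trackers offer the same guardrail natively via issue templates with required fields; the point is that incomplete reports never enter the queue.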

For communication: Resist the urge to report bugs in Slack. Slack is where bug reports go to die. File tickets in your project management tool so they enter the normal workflow.

Signs you need a dedicated QA person

  • Features ship with recurring bugs in areas that were "tested." The ad hoc testers are missing things consistently
  • Developers spend more time triaging bug reports than fixing them, because reports lack detail or reproduction steps
  • Nobody wants to do QA. Testing gets pushed to the last minute and done hastily
  • Your release cadence is slowing down because the team doesn't feel confident shipping without more thorough testing
  • Customer-reported bugs are increasing despite the team feeling like they're testing more

The uncomfortable truth

Not having a QA team isn't a failure. Plenty of successful products have shipped for years with developers and PMs doing the testing. But not having a QA process is a choice, and it's one your users pay for.

The bar isn't perfection. It's "can someone on the team tell me what we tested, what passed, and what failed?" If you can answer that question for every release, you're in better shape than most startups, with or without a QA hire on the org chart.

Clear Your Next Release for Takeoff

Don't launch on a wing and a prayer. Replace manual docs with an organized workflow that catches bugs before your customers do.