You've signed the statement of work. Your penetration test is scheduled to start in two weeks. Now what? The window between booking an engagement and the first day of testing is the most underrated phase of the entire process, and getting it wrong can waste thousands of dollars and weeks of calendar time.

We've run hundreds of penetration tests for engineering teams of all sizes. The engagements that deliver the best results always have one thing in common: the team on the other side was prepared. Here's exactly what that preparation looks like.


Why preparation matters more than you think

Penetration testing is expensive. Depending on scope and complexity, you're spending anywhere from $5,000 to $30,000 or more for a qualified team to spend one to three weeks attacking your application. Every hour a tester spends fighting access issues, waiting for credentials, or trying to understand your environment is an hour they're not spending finding real vulnerabilities.

We've seen engagements where 20% of the testing window was lost to access issues alone. That's not an exaggeration. A five-day engagement that loses its first day to account provisioning problems is effectively a four-day engagement at a five-day price. The testers still find things, but coverage suffers. Edge cases don't get explored. Deeper attack chains don't get followed.

The ROI of spending 2-3 hours preparing before an engagement starts is enormous. It's the difference between a report that tells you "we found some XSS" and a report that maps your entire attack surface, chains vulnerabilities together, and gives your engineering team a prioritized remediation roadmap. The testers are only as effective as the access and context you give them.

Poor preparation also delays results. If testers hit blockers mid-engagement, they need to pause, communicate the issue, and wait for a response. If your team isn't responsive, those delays compound. We've seen final reports delayed by weeks because issues that should have been resolved before day one kept surfacing throughout the engagement.


Define your scope before the engagement starts

"Test our web app" is not a scope. It's a direction, and it leaves too much open to interpretation. A well-defined scope is the foundation of a useful penetration test, and it needs to be specific enough that both your team and the testers agree on exactly what's being assessed.

Start by identifying what's in scope. This should include specific URLs, subdomains, API endpoints, and application features. If your application has distinct modules (billing, admin panel, user dashboard, public API), list each one explicitly. Call out the critical user flows you want tested: authentication, payment processing, file uploads, data export, invitation workflows.

Equally important is what's out of scope. Third-party integrations you don't control (Stripe's checkout page, Auth0's login widget), production databases with real customer data, infrastructure managed by your cloud provider. Being explicit about boundaries protects everyone and prevents testers from accidentally touching systems they shouldn't.
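A scope definition like this can even be captured in a small machine-readable form that both sides sign off on. Here's a minimal sketch in Python — the hostnames and the in/out split are illustrative assumptions, not a real scope:

```python
from urllib.parse import urlparse

# Hypothetical scope definition -- every hostname here is illustrative.
SCOPE = {
    "in_scope": [
        "app.example.com",      # main web application
        "api.example.com",      # public API
        "admin.example.com",    # admin panel (explicitly included)
    ],
    "out_of_scope": [
        "checkout.stripe.com",  # third-party payment page
        "login.auth0.com",      # third-party identity widget
    ],
}

def is_in_scope(url: str) -> bool:
    """Return True only if the URL's host is explicitly listed in scope."""
    host = urlparse(url).hostname or ""
    if host in SCOPE["out_of_scope"]:
        return False
    return host in SCOPE["in_scope"]
```

Note the default: anything not explicitly listed is out of scope. That "deny by default" posture is what prevents testers from accidentally touching systems that were never discussed.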

Prioritize admin functionality. Admin panels are consistently where the most critical vulnerabilities live, because they're built quickly, tested less, and carry the highest privilege levels. If your admin interface has user management, role assignment, data export, or system configuration features, make sure those are explicitly in scope and that testers have the access they need to test them.

If you have automated vulnerability scanning already in place, share those results with your testers. It helps them skip the obvious findings and focus on the deeper, manual-only vulnerabilities that scanners miss.


Set up the right test environment

One of the first decisions you'll make is where testing should happen. Both staging and production environments have tradeoffs, and the right choice depends on your specific situation.

| Factor             | Staging environment        | Production environment    |
|--------------------|----------------------------|---------------------------|
| Data safety        | Safe                       | Risk of data corruption   |
| Realistic results  | May differ from production | Most accurate             |
| Performance impact | None on production users   | Possible slowdowns        |
| Recommended for    | Most engagements           | Specific assessments only |

The best practice for most teams is a staging environment with production-like data that has been anonymized. This gives testers a realistic target without any risk to real customer data or production stability. The key word is "production-like." A staging environment that's three versions behind production, missing entire features, or running against an empty database will produce results that don't reflect your actual risk.
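Anonymizing production-like data doesn't have to be elaborate. The useful property is that pseudonyms are deterministic, so relationships between records survive the transformation. A minimal sketch, assuming a simple user record shape (the field names are illustrative):

```python
import hashlib

def _pseudonym(value: str, field: str) -> str:
    # Deterministic pseudonym: the same input always maps to the same
    # fake value, so references between records stay consistent.
    digest = hashlib.sha256(f"{field}:{value}".encode()).hexdigest()[:10]
    return f"{field}_{digest}"

def anonymize_user(record: dict) -> dict:
    """Replace PII fields with pseudonyms; leave structural fields intact."""
    out = dict(record)
    out["email"] = _pseudonym(record["email"], "user") + "@staging.example.com"
    out["name"] = _pseudonym(record["name"], "name")
    # Non-PII fields (role, plan, timestamps) are kept as-is so the
    # dataset stays production-like for testers.
    return out
```

Because the mapping is deterministic, running the same export through the script twice yields identical staging data — which also makes it easy to re-seed the environment mid-engagement if something gets wiped.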

Verify your staging environment is actually working before the engagement starts. Deploy the latest code, seed it with representative test data, and confirm that all features function correctly. Testers finding that the staging environment is broken on day one is more common than you'd think, and it's an entirely preventable waste of time.


Provision test accounts and access

This is the single most common source of delays in penetration testing engagements. Testers need accounts, and they need them before the engagement starts, not on the morning of day one.

At minimum, provide accounts for every distinct role in your application. That typically means:

- An admin account with full privileges
- A standard user account with typical permissions
- A viewer or read-only account, if your application has one

Why multiple roles? Because some of the most critical vulnerabilities in web applications are authorization flaws, where a standard user can access admin functionality, or a viewer can modify data they should only be able to read. Testers need accounts at different privilege levels to systematically test these boundaries.
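This kind of boundary testing is systematic: for each role and each sensitive endpoint, there's an expected answer, and any deviation is a potential authorization flaw. A minimal sketch of that idea — the role/endpoint matrix is illustrative, and the request function is injectable:

```python
# Expected access matrix -- roles and paths here are illustrative only.
EXPECTED_ACCESS = {
    ("admin", "/admin/users"): True,
    ("user", "/admin/users"): False,
    ("viewer", "/admin/users"): False,
    ("user", "/dashboard"): True,
    ("viewer", "/dashboard"): True,
}

def find_authz_gaps(request_as):
    """request_as(role, path) -> bool, True if the request succeeded.

    Returns (role, path) pairs where observed access differs from the
    expected matrix -- each one is a potential authorization flaw.
    """
    gaps = []
    for (role, path), allowed in EXPECTED_ACCESS.items():
        if request_as(role, path) != allowed:
            gaps.append((role, path))
    return gaps
```

Testers do a far more thorough version of this by hand, but the matrix shape is exactly why they need one working account per role before day one.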

If your application uses SSO or MFA, set up a bypass or dedicated authentication flow for test accounts. Requiring testers to authenticate through your corporate Okta instance with hardware tokens will create friction on every single request. Most identity providers support creating test users that can authenticate with a simple password. Set this up in advance.
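As one concrete illustration, here's a sketch of the request body for creating a password-only test user, in the shape used by Okta's create-user API (`POST /api/v1/users`). Treat the field values as placeholders and verify the exact shape against your identity provider's documentation:

```python
def build_test_user_payload(email, password):
    """Build an Okta-style create-user body (password auth, no MFA).

    This follows the general shape of Okta's POST /api/v1/users API;
    it is a sketch, not a drop-in call -- check your IdP's docs.
    """
    return {
        "profile": {
            "firstName": "Pentest",
            "lastName": "Tester",
            "email": email,
            "login": email,
        },
        "credentials": {
            "password": {"value": password},
        },
    }
```

Whatever provider you use, the goal is the same: test accounts that authenticate with a simple credential, created and verified before the engagement starts.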

If your application is behind a VPN, provide VPN credentials and connection instructions before the engagement starts. Include the specific endpoints, ports, and any firewall rules that need to be allowlisted for the testing team's IP addresses.


Prepare your documentation

Testers are skilled at discovering how applications work through exploration and analysis. But giving them context up front means they spend their time finding vulnerabilities instead of reverse-engineering your architecture.

Here's what's genuinely useful to share:

- An architecture diagram showing major components and data flows
- API documentation (Swagger/OpenAPI specs are ideal)
- Previous penetration test reports, so testers can verify old fixes and avoid duplicating work
- A list of known issues you've already accepted or are in the middle of fixing

What's nice-to-have but not essential: detailed database schemas, internal team structure documents, and deployment runbooks. These can help for white-box engagements but aren't necessary for standard application-level testing.


Coordinate with your team

A penetration test generates unusual traffic patterns, error logs, and potentially alarm triggers. If your team doesn't know it's happening, you'll waste time on false incident responses and internal confusion.

Who needs to know:

- Your engineering lead and the developers who own the systems in scope
- Your DevOps or infrastructure team, so unusual traffic isn't mistaken for an outage
- Your SOC or monitoring team, so alerts from the testers' IP addresses aren't escalated as real incidents

Set up a dedicated communication channel for the engagement. A Slack or Teams channel with your engineering lead, the testing team, and anyone who might need to resolve access issues works well. Real-time communication is critical when testers find something urgent or hit a blocker that's burning testing hours.

Designate a single point of contact on your side who can respond within a few hours during the testing window. This person doesn't need to be available 24/7, but they should be responsive during business hours. When a tester asks "is this behavior intentional?" or "can you reset this test account?", a fast response keeps the engagement on track.


The pre-pentest checklist

Before your engagement starts, walk through every item on this list. If anything is missing, resolve it before day one. The time you invest here directly translates into better testing coverage and more actionable results.

Scope and logistics
- Scope document signed and shared with testing team
- Point of contact designated and available during testing window
- Internal team briefed (engineering, DevOps, SOC)
- Monitoring team notified of testing dates and source IPs
- Dedicated Slack/Teams channel created

Environment and access
- Staging environment deployed and verified working
- Test data seeded with realistic, anonymized data
- Test accounts created: admin + standard user + viewer
- API keys and tokens generated
- VPN credentials provided (if applicable)
- SSO/MFA bypass configured for test accounts

Documentation
- Architecture diagram shared
- API documentation shared (Swagger/OpenAPI preferred)
- Previous pentest reports shared
- Known issues list provided

Print this out, pin it to a board, or paste it into a project management ticket. Assign owners to each item and set a deadline of at least two business days before the engagement starts. That buffer gives you time to resolve anything that falls through the cracks without eating into the testing window.


What happens if you're not prepared

We're not speaking hypothetically. These are patterns we've seen repeatedly across engagements:

The Monday morning scramble. The engagement is scheduled to start Monday. On Monday morning, the testers log in and discover their accounts don't have the right permissions. The admin account is actually a standard user. The API key was generated for the wrong environment. The person who set everything up is on PTO. By the time access is sorted out, it's Wednesday. That's 40% of a five-day testing window, gone.

The silent critical finding. Testers discover a critical vulnerability on day two, something that exposes customer data or allows account takeover. They flag it immediately through the communication channel, but no one on the client side responds for 36 hours. The testers can't determine the full impact because they need information about the backend architecture, and they can't move forward on related attack paths because they don't know if the behavior they're seeing is intentional. The final report has a critical finding with incomplete impact analysis.

The environment crash. The staging environment goes down mid-test. No one on the client team notices because they're not monitoring it. The testers lose half a day waiting for it to come back up. When it does, the test data has been wiped and they need fresh accounts. By the time everything is restored, another half day is gone. A five-day engagement just became a four-day engagement.

Every one of these scenarios is preventable with the checklist above. The common thread is always the same: someone assumed it would be fine, and it wasn't.

For a detailed walkthrough of what testers actually do once the engagement begins, read our guide on what happens during a penetration test. Understanding the testing process makes preparation even more intuitive.

If you're assessing your overall security posture before scheduling a pentest, our guide on startup security fundamentals before Series A covers the baseline controls that should be in place first. And if you're still deciding whether you need a pentest or a vulnerability scan, we break down the differences between vulnerability scanning and penetration testing to help you decide.

For teams preparing specifically for a SOC 2 audit, your penetration test preparation will overlap significantly with the compliance readiness work. Getting both right starts with the same foundation: knowing exactly what you're protecting and making sure the people assessing it have what they need.

Planning Your First Pentest?

We'll help you scope it right and prepare your team. Our engagements start with a free scoping call so nothing gets missed.


Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.