Performance Analysis Workflow¶
When a load test is complete, someone will ask "How did we do?" The quality of your answer depends entirely on the quality of your preparation. This guide walks you through the complete performance analysis workflow, from setting goals before testing to interpreting results and identifying bottlenecks.
Define Goals Before Testing
The most important step in performance analysis happens before you run your first test. Without specific, concrete goals, you cannot give a specific, concrete answer to the "How did we do?" question.
Step 1: Define Performance Goals Early¶
You need to know which metrics matter and what their targets are, preferably long before you start testing.
Determine What to Measure¶
For websites, you will typically be interested in:
- Page duration: How long it takes to display a page in the browser
- Load level: Amount of concurrent activity (simultaneous users, hits/sec, transactions/min, page-views/hour)
Set Concrete Goals¶
Assign specific, measurable targets to your metrics. For example:
- "All pages must load within 4 seconds under a load of 1000 users"
- "The homepage must respond within 2 seconds at 500 concurrent users"
- "API endpoints must complete within 1 second at 10,000 requests/min"
Avoid Vague Goals
Goals like "the site should be fast" or "handle lots of users" are not actionable. You need concrete numbers: specific metrics, specific thresholds, specific load levels.
Identify Exceptions¶
Look for pages that have different requirements:
- Slower acceptable: Final checkout page, complex reports, site-wide search
- Must be faster: "Add to cart" button, login page, API health checks
Knowing what to measure and what "good enough" looks like lets you design tests that actually answer the right questions.
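Concrete goals are easiest to keep honest when they are recorded as data rather than prose. Here is a minimal sketch of that idea in Python; the class and field names are illustrative, not part of Load Tester's API:

```python
from dataclasses import dataclass

@dataclass
class PerformanceGoal:
    """One measurable target: a page, a threshold, and the load it applies at."""
    page: str
    threshold_s: float      # limit for the chosen analysis method, in seconds
    at_users: int           # load level at which the goal applies
    method: str = "average" # e.g. "average", "p95", "max"

# Hypothetical goals mirroring the examples above
goals = [
    PerformanceGoal("all pages", 4.0, 1000),
    PerformanceGoal("homepage", 2.0, 500),
    PerformanceGoal("add-to-cart", 0.5, 500),  # an exception: must be faster
]
```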
Step 2: Configure Analysis Methods¶
Load Tester provides multiple analysis methods to evaluate your performance goals. Choose the methods that match your requirements.
Available Analysis Methods¶
- Average Page Duration: Mean response time across all requests
- Average Wait Time: Time waiting for first byte (TTFB)
- Maximum Duration: Worst-case response time
- 95th Percentile: 95% of requests complete within this time
- 90th Percentile: 90% of requests complete within this time
Why Multiple Methods Matter
Goal failures are reported at different load levels depending on the analysis method used. A page might pass on average duration but fail on 95th percentile, revealing that some users experience poor performance even when the average looks good.
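To see how this happens, compare the average with the 95th percentile over the same set of response times. A self-contained sketch using only Python's standard library (the durations are invented):

```python
import statistics

# Hypothetical page durations in seconds: mostly fast, a few slow outliers
durations = [0.4, 0.5, 0.5, 0.6, 0.6, 0.7, 0.7, 0.8, 3.9, 4.2]

average = statistics.fmean(durations)
# quantiles(n=20) returns 19 cut points; index 18 is the 95th percentile
p95 = statistics.quantiles(durations, n=20)[18]

print(f"average: {average:.2f}s")  # ~1.29s -> passes a 2-second goal
print(f"p95:     {p95:.2f}s")      # ~4.0s  -> fails the same goal
```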
Time-Based vs User-Level Analysis¶
Load Tester offers two complementary ways to view results:
Time-Based Analysis (traditional approach):
- Charts show metrics plotted against time
- Useful for observing trends during test execution
- You see what happened, and when, during the test
User-Level Analysis (recommended approach):
- Charts show metrics plotted against concurrent user count
- Automatically summarizes data at each load level
- You see what happened at each user level
- Easier to answer "What's the performance at 250 users?"
Use Both Views
Time-based analysis helps you understand test behavior over time. User-level analysis helps you understand performance characteristics at specific load levels. Both provide valuable insights.
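Conceptually, user-level analysis is a group-by over the raw samples: each measurement is bucketed by the concurrent user count in effect when it was taken, then summarized per bucket. A rough sketch of the idea (the sample tuples are invented, not Load Tester's data model):

```python
from collections import defaultdict
from statistics import fmean

# (concurrent users at sample time, page duration in seconds) - made-up data
samples = [
    (50, 0.4), (50, 0.5), (100, 0.6), (100, 0.7),
    (150, 0.9), (150, 1.1), (200, 1.8), (200, 2.4),
]

by_level = defaultdict(list)
for users, duration in samples:
    by_level[users].append(duration)

for users in sorted(by_level):
    print(f"{users:>4} users: avg {fmean(by_level[users]):.2f}s")
```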
Step 3: Run Your Load Test¶
While the test runs, monitor key metrics to catch obvious problems early. See Monitoring During a Load Test for details on real-time monitoring.
The Embedded Analytics Dashboard provides interactive charts and real-time visibility during test execution.
Step 4: Interpret Results¶
After the test completes, analyze the results systematically.
Start with Test Summary¶
The Test Summary section shows the highest-level metrics:
- Overall page duration trends
- Total throughput (hits/sec, bandwidth)
- Error rates
- Server resource utilization
Examine User-Level Metrics¶
User-level analysis automatically plots charts with user level on the X axis instead of time. This makes it easy to see performance at specific load levels.
Example: If your target was 250 users, look at the data point for 250 users on the user-level chart to see the exact average page duration.
User-level tables provide precise numbers for each metric at each load level, eliminating the need for manual calculations.
Drill Down to Individual Pages¶
The Web Pages section lets you analyze performance for specific pages:
- Navigate to the page of interest in the report
- View user-level charts for that specific page
- Check performance goal analysis for the page at each user level
- Compare different pages to identify bottlenecks
Finding Slow Pages
If the overall average at 250 users is 0.6 seconds, but one page averages 1.8 seconds, you've identified a bottleneck. Focus optimization efforts on that page.
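That comparison is easy to automate: flag any page whose average at the target load stands well above the overall average. A sketch with invented numbers:

```python
# Hypothetical per-page average durations (seconds) at 250 users
page_averages = {"home": 0.5, "search": 0.6, "checkout": 1.8, "product": 0.5}

overall = sum(page_averages.values()) / len(page_averages)
threshold = 2 * overall  # flag pages slower than 2x the overall average

for page, avg in sorted(page_averages.items(), key=lambda kv: -kv[1]):
    flag = "  <-- investigate" if avg > threshold else ""
    print(f"{page:<10} {avg:.1f}s{flag}")
```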
Step 5: Use Visual Comparison for Errors¶
When pages fail or return unexpected content, visual comparison helps you quickly understand what went wrong.
How Visual Comparison Works¶
Load Tester compares what the virtual user received versus what was expected:
- Left side: Expected content (from recording)
- Right side: Actual content (from replay or load test)
- Highlighted differences: Elements that don't match
When to Use Visual Comparison¶
- Replay failures: 404 errors, authentication failures, missing content
- Load test errors: Individual user failures during load testing
- Performance debugging: Understanding why a page behaves differently under load
Quick Visual Troubleshooting¶
Right-click on any error in the results and select Visual Compare to see the problem from the virtual user's perspective.
Example Scenario
A user should see a greeting page with links, but instead gets returned to the login page with "Invalid username or password" error. Visual comparison immediately reveals the authentication failure, showing you exactly what the virtual user experienced.
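Outside the GUI, the same idea can be approximated with a plain text diff of the expected and actual responses. A minimal stand-in using Python's difflib (the page content is invented):

```python
import difflib

expected = "Welcome back, Alice!\n<a href='/account'>My account</a>"
actual = "Invalid username or password\n<form action='/login'>...</form>"

# Show what the virtual user received versus what the recording expected
for line in difflib.unified_diff(
    expected.splitlines(), actual.splitlines(),
    fromfile="expected (recording)", tofile="actual (replay)", lineterm="",
):
    print(line)
```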
Step 6: Estimate User Capacity¶
One of the most important goals of load testing is to find out how many users your application can handle simultaneously. Load Tester provides automatic capacity estimation.
Understanding the Capacity Chart¶
The User Capacity section shows a bar chart with:
- Y axis: Number of performance goals analyzed
- X axis: User level (load applied during test)
- Green bars: Goals that passed at this user level
- Red bars: Goals that failed at this user level
- Blue bars: Goals not evaluated (insufficient data)
Interpreting Capacity Estimates¶
All items pass (all green):
- Capacity is "at least" the maximum user level tested
- Server likely has additional capacity
- Next step: Test with more users to find the actual limit
Don't Extrapolate
Even if your server shows only 50% CPU usage at 100 users, don't assume you can handle 200 users. Software limitations (connection pools, session limits, locks) are non-linear and unpredictable until you hit them. Test beyond your expected load.
Some items fail (mixed green/red):
- Capacity is between the highest all-pass level and the lowest any-fail level
- Example: All pass at 150, some fail at 200 → capacity is between 150 and 200 users
- Next step: Run another test with finer granularity (e.g., ramp 25 users at a time instead of 50)
All items fail (all red):
- Capacity is "less than" the minimum user level tested
- Next step: Start with lower user counts and ramp more slowly
Incomplete data (blue bars):
- Some pages didn't complete at certain user levels
- Usually means test duration was too short for the test case length
- Next step: Increase test duration to allow more time at each user level
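The bound in the mixed case is mechanical: the highest level where every goal passed and the lowest level where any goal failed. A sketch of that logic over hypothetical per-level results (it assumes the clean, monotone pattern; the pass-fail-pass case below breaks it):

```python
# Hypothetical results per user level: True = all goals passed at that level
results = {50: True, 100: True, 150: True, 200: False, 250: False}

passed = [users for users, ok in sorted(results.items()) if ok]
failed = [users for users, ok in sorted(results.items()) if not ok]

if not failed:
    print(f"Capacity is at least {max(passed)} users; test with more users")
elif not passed:
    print(f"Capacity is less than {min(failed)} users; ramp more slowly")
else:
    print(f"Capacity is between {max(passed)} and {min(failed)} users")
# -> Capacity is between 150 and 200 users
```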
Complex Patterns¶
Sometimes results show pass-fail-pass patterns (passed at 100, failed at 150, passed again at 200). That's suspicious. It usually means test variability, external factors, or intermittent issues that surface under specific conditions.
Take a close look at which specific goals failed and examine what was happening when they failed.
Step 7: Analyze Performance Goals¶
The Performance Goals section summarizes which pages failed their goals at each user level.
Performance Goal Results¶
For each page with performance goals configured, Load Tester shows:
- Goal definition: The threshold and analysis method
- Pass/Fail status: At each user level analyzed
- Actual values: Measured performance vs. goal
This section quickly guides you to the pages that need attention.
Example Analysis
At 200 users, 3 pages failed their goals:
- Checkout page: 5.2s average (goal: 4.0s) → Failed on average duration
- Search results: 2.8s average but 3.6s at the 95th percentile (goal: 3.0s) → Passed on average, failed on 95th percentile
- Product detail: 1.9s average (goal: 2.0s) → Passed on average, failed on max duration
Focus optimization efforts on these three pages, prioritizing based on business impact.
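Goal evaluation itself is a straightforward comparison of measured values against thresholds, per page and per analysis method. A sketch using hypothetical measurements at a single user level:

```python
# (page, method, measured seconds, goal seconds) at 200 users - invented values
measurements = [
    ("checkout",       "average", 5.2, 4.0),
    ("search results", "p95",     3.6, 3.0),
    ("product detail", "max",     4.8, 2.0),
    ("homepage",       "average", 0.9, 2.0),
]

for page, method, measured, goal in measurements:
    status = "PASS" if measured <= goal else "FAIL"
    print(f"{status}  {page:<15} {method:<8} {measured:.1f}s (goal {goal:.1f}s)")
```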
Analysis Methods in Action¶
Different analysis methods reveal different problems:
- Average fails: General performance issue affecting most users
- Max or 95th percentile fails: Outliers or worst-case scenarios
- Wait time fails: Server processing delays (not network/content transfer)
Use the analysis method that matches your user experience requirements.
Step 8: Identify Bottlenecks¶
Once you know which pages failed their goals, determine why they failed. See Identifying Bottlenecks for detailed guidance on bottleneck analysis.
Common bottleneck indicators:
- High server CPU: Application or web server overloaded
- High database response time: Database queries need optimization
- High network wait time: Network latency or bandwidth limits
- Increasing response times: Degradation under load indicates scaling issues
Correlate server monitoring data (CPU, memory, disk I/O, network) with page performance to pinpoint root causes.
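That correlation can be checked numerically: pair each monitoring interval's server metric with the page duration over the same interval and compute a correlation coefficient. A sketch using the standard library (statistics.correlation needs Python 3.10+; the series are invented):

```python
from statistics import correlation

# Per-interval samples collected during the ramp - hypothetical monitoring data
cpu_percent    = [20, 30, 45, 60, 75, 90, 97]
page_duration  = [0.5, 0.5, 0.6, 0.8, 1.2, 2.5, 4.0]
db_response_ms = [12, 11, 13, 12, 14, 13, 12]

print(f"CPU vs duration: {correlation(cpu_percent, page_duration):+.2f}")
print(f"DB  vs duration: {correlation(db_response_ms, page_duration):+.2f}")
# A strong positive CPU correlation points at an application/CPU bottleneck;
# a near-zero database correlation argues against the database as the cause.
```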
AI-Powered Analysis
The AI assistant can help interpret complex result patterns:
- "Analyze the performance degradation between 200 and 300 users"
- "Why did the checkout page fail at 250 users but pass at 200?"
- "Identify which server resources correlate with slow response times"
- "Compare performance across different user levels"
See AI for Analysis for more analysis prompts.
Step 9: Take Action¶
Based on your analysis, determine next steps:
If Capacity Meets Goals¶
- Document the results: Capture baseline performance metrics
- Test edge cases: Verify performance under different scenarios (peak traffic, failure conditions)
- Plan for growth: Test at 20-50% above expected maximum load
If Capacity Falls Short¶
- Prioritize fixes: Start with pages that have the biggest impact on user experience
- Optimize bottlenecks: Address the specific resource constraints identified
- Re-test: Verify that optimizations achieved the desired improvement
- Iterate: Repeat analysis → optimize → test cycle until goals are met
If Results Are Unclear¶
- Improve test design: Adjust ramp rates, durations, or user levels for clearer data
- Check test validity: Ensure virtual users behave realistically (think times, data variation)
- Examine variability: Run multiple tests to establish repeatability
Analysis Workflow Summary¶
Follow this systematic approach for every load test:
1. Before testing: Define concrete performance goals
2. Configure: Choose analysis methods (average, percentile, max)
3. Run test: Monitor in real-time for obvious issues
4. Interpret results: Start with summary, drill down to pages
5. Visual comparison: Diagnose errors and unexpected behavior
6. Estimate capacity: Determine maximum supported user count
7. Analyze goals: Identify which pages failed and why
8. Find bottlenecks: Correlate performance with server resources
9. Take action: Optimize, re-test, and verify improvements
Follow this workflow consistently and you'll extract real value from every load test: not just a number you can report upward, but an understanding you can act on.
Related Topics¶
- Understanding Metrics - Detailed explanation of all performance metrics
- Embedded Analytics Dashboard - Interactive real-time analysis interface
- Legacy Reports - Traditional static reports for archival purposes
- Identifying Bottlenecks - Deep-dive on root cause analysis
- Load Testing Concepts - Virtual users, think time, load profiles