AI-Generated Performance Reports¶
The Report tab in the Analytics Dashboard turns your load test data into a professional performance analysis document. The AI reads your results, generates charts, writes narrative analysis, and produces something you can hand to a stakeholder who has never opened Load Tester.
This is not a template with blanks filled in. The AI interprets your data, identifies capacity limits, flags degradation patterns, and writes specific findings. Instead of "average response time was 10.11 seconds at 53 users," a good report says "the system delivered acceptable performance only up to approximately 3 concurrent users." (Yes, 3 users. That is not a typo.) The analysis depends on what actually happened in your test.
The output is an HTML preview inside the dashboard, exportable to a formatted DOCX file with a cover page, embedded charts, and styled tables.
Prerequisites¶
Before you can generate a report, three things need to be in place:
- A load test result must be open in Load Tester. The AI needs data to analyze, so the test must be finished (or at least partially completed).
- An AI provider must be configured. If you have not set one up yet, see AI Provider Setup.
- The Analytics Dashboard must be open. It opens automatically when you view a load test result. If you closed it, double-click the load test result in the Project Explorer to reopen it.
Generating a Report¶
Step 1: Open the Report Tab¶
In the Analytics Dashboard, click the Report tab (the rightmost tab in the tab bar). You'll see the Report Configuration panel across the top of the view.
Step 2: Configure Report Options¶
The configuration panel has five fields that control what goes into your report:
| Field | What It Does |
|---|---|
| Author | Your name. Appears on the report cover page. |
| Organization | Your company name. Appears on the cover page. |
| RT Threshold (MS) | Response time threshold in milliseconds. Pages exceeding this value are flagged as slow. Default is 5000 (5 seconds). |
| Include stress appendix | Adds a section analyzing behavior at peak load levels. |
| Include server correlation | Adds server metric correlation analysis. Only useful if you collected server monitoring data during the test. |
Setting the Response Time Threshold
The default 5000ms threshold works for many web applications, but adjust it based on your performance goals. If your SLA requires sub-2-second page loads, set the threshold to 2000. The AI uses this value to decide which pages to flag as problematic.
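To make the flagging rule concrete, here is a minimal sketch (the page names and timings are invented, and this is not Load Tester's internal logic):

```python
# Hypothetical illustration of threshold-based flagging; the data is invented.
RT_THRESHOLD_MS = 2000  # e.g. an SLA requiring sub-2-second page loads

# page name -> average response time in milliseconds
page_averages = {"/login": 840, "/search": 2350, "/checkout": 5120}

slow_pages = {page: ms for page, ms in page_averages.items() if ms > RT_THRESHOLD_MS}
print(slow_pages)  # {'/search': 2350, '/checkout': 5120}
```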
Step 3: Click Generate Report¶
Click the Generate Report button. A progress bar tracks four phases:
- Collecting data from the load test result
- Generating charts (response time trends, throughput, server metrics, error distributions)
- AI analysis (the AI reads the collected data, then writes the narrative sections)
- Assembling the final report document
Generation typically takes 30 to 60 seconds, depending on test size and how fast your AI model responds. Larger tests with more pages and longer durations produce more data for the AI to analyze, so they take longer.
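Conceptually, the pipeline runs the four phases in sequence and reports progress after each one. The sketch below is an invented illustration of that flow; none of these function names come from Load Tester's API:

```python
# Invented sketch of the four-phase report pipeline, not Load Tester's code.
def collect_data(result):      return {"pages": result.get("pages", [])}
def generate_charts(data):     return ["rt_trend.png", "throughput.png"]
def run_ai_analysis(data):     return "narrative sections written by the AI"
def assemble_document(parts):  return "<html>...</html>"

def generate_report(result, on_progress=print):
    on_progress("Collecting data")
    data = collect_data(result)
    on_progress("Generating charts")
    charts = generate_charts(data)
    on_progress("AI analysis")
    narrative = run_ai_analysis(data)
    on_progress("Assembling the final report")
    return assemble_document({"charts": charts, "narrative": narrative})

report_html = generate_report({"pages": []})
```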
Step 4: Review the Report¶
The finished report appears in the preview area below the configuration panel. Scroll through to review the analysis. The AI identifies specific findings: capacity limits, degradation inflection points, bottlenecks, error patterns, and their relationships to each other.
What the Report Contains¶
Executive Summary¶
The opening section covers the essentials: what was tested, how long the test ran, how many virtual users, how many total requests. It states the headline finding (the capacity limit, whether the system met its goals, or what failed) and lists the critical metrics at various load levels.
This section is written for people who will not read the rest of the report. The key finding is stated directly, not buried in qualifications.
Response Time Analysis¶
Charts showing response time trends over the full test duration, broken down by page. The AI identifies the slowest pages, analyzes percentile distributions (not just averages, which are liars), and explains where performance started to degrade and at what user count.
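A quick illustration of why percentiles matter, using invented samples:

```python
# Invented data: 90 fast requests and 10 very slow ones. The mean looks
# tolerable while the percentiles expose the slow tail.
from statistics import mean, quantiles

samples_ms = [300] * 90 + [9000] * 10

percentiles = quantiles(samples_ms, n=100)  # 99 cut points: p1..p99
p50, p90, p99 = percentiles[49], percentiles[89], percentiles[98]
print(f"mean={mean(samples_ms):.0f}ms p50={p50:.0f}ms p90={p90:.0f}ms p99={p99:.0f}ms")
# mean is 1170ms and p50 is 300ms, while p90 and p99 land near 9000ms
```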
Server Performance Correlation¶
This section only appears if you checked Include server correlation and your test includes server monitoring data.
The AI correlates CPU, memory, disk, and network utilization with the load levels and response times from your test. The goal is to identify which server resource hit its limit first. An application server whose CPU maxes out at 50 users while the database idles tells a very different story than a database at 95% CPU while the app server coasts.
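A sketch of what that correlation looks like numerically, with invented per-load-step samples (real numbers would come from your test's monitoring data):

```python
# Invented per-load-step samples; a strong correlation for one resource and
# a near-zero one for another suggests which tier saturates first.
from statistics import correlation  # Pearson's r, Python 3.10+

app_cpu_pct = [15, 30, 48, 70, 92, 99]  # application server CPU climbs
db_cpu_pct  = [9, 7, 8, 9, 7, 8]        # database CPU barely moves
avg_rt_ms   = [400, 450, 600, 1100, 3800, 9500]

print("app CPU vs response time:", round(correlation(app_cpu_pct, avg_rt_ms), 2))
print("db CPU vs response time: ", round(correlation(db_cpu_pct, avg_rt_ms), 2))
# Strong positive r for the app tier, near zero for the database.
```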
Transaction Analysis¶
A per-page breakdown of performance under load. The AI identifies which transactions degraded the most, which ones stayed stable, and what patterns explain the differences.
Error Analysis¶
Error distribution by HTTP status code, an error timeline showing when errors started and at what load level, and pattern analysis. The AI looks for whether errors are concentrated on specific pages or spread across all transactions, and whether they correlate with a specific load level or resource limit.
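As a sketch of the kind of grouping involved (invented error records; real ones come from the test log):

```python
# Invented (seconds_into_test, page, http_status) error records.
from collections import Counter

errors = [
    (310, "/checkout", 500), (312, "/checkout", 500),
    (315, "/search", 503), (340, "/checkout", 500),
    (341, "/checkout", 504), (350, "/checkout", 500),
]

by_status = Counter(status for _, _, status in errors)
by_page = Counter(page for _, page, _ in errors)

print("by status:", dict(by_status))  # {500: 4, 503: 1, 504: 1}
print("by page:  ", dict(by_page))    # concentrated on /checkout
print("first error at", min(t for t, _, _ in errors), "seconds into the test")
```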
Recommendations¶
Specific recommendations based on the findings, prioritized by impact. These are tied to the actual data from your test, not generic performance advice.
Stress Appendix¶
This section only appears if you checked Include stress appendix.
A detailed analysis of system behavior at peak load: how response times, error rates, and throughput changed under maximum stress, and what the failure mode looked like. Gradual degradation and sudden collapse are different diseases with different treatments.
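One crude way to tell the two failure modes apart is the step-to-step growth in response time. Here is a sketch with invented data and an invented cutoff:

```python
# Invented heuristic: a large jump between consecutive load steps suggests a
# cliff; steady growth suggests gradual degradation. The 3x cutoff is made up.
def failure_mode(rt_ms_by_step):
    ratios = [later / earlier for earlier, later in zip(rt_ms_by_step, rt_ms_by_step[1:])]
    return "sudden collapse" if max(ratios) > 3 else "gradual degradation"

print(failure_mode([400, 600, 900, 1400, 2100]))  # gradual degradation
print(failure_mode([400, 420, 450, 480, 6000]))   # sudden collapse
```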
Exporting to DOCX¶
After the report generates, click Export to DOCX and choose a save location.
The exported document includes:
- A professional cover page with the author name and organization you configured
- All narrative sections with headings, paragraph formatting, and inline emphasis
- Embedded charts as images
- Formatted tables for transaction and error data
- Styling: Calibri font, section headings with red rules, and professional table formatting (a programmatic sketch of a similarly structured document follows this list)
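For a sense of how a document with this structure could be assembled programmatically, here is a sketch using the python-docx library. This is not Load Tester's exporter; the content, file names, and styling choices are invented:

```python
# Sketch only: builds a small DOCX with a cover page, a heading, a chart
# image, and a table. Assumes a chart file named rt_trend.png exists.
from docx import Document
from docx.shared import Pt

doc = Document()
doc.styles["Normal"].font.name = "Calibri"
doc.styles["Normal"].font.size = Pt(11)

doc.add_heading("Performance Analysis Report", level=0)  # cover title
doc.add_paragraph("Author: Jane Doe")
doc.add_paragraph("Organization: Example Corp")
doc.add_page_break()

doc.add_heading("Executive Summary", level=1)
doc.add_paragraph("The system sustained acceptable response times up to ...")
doc.add_picture("rt_trend.png")  # an embedded chart

table = doc.add_table(rows=2, cols=2)
table.rows[0].cells[0].text = "Page"
table.rows[0].cells[1].text = "Avg RT (ms)"
table.rows[1].cells[0].text = "/checkout"
table.rows[1].cells[1].text = "5120"

doc.save("performance-report.docx")
```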
Sharing Reports
The DOCX export is designed for stakeholders who don't have access to Load Tester. The document stands on its own as a complete performance analysis, with all the charts, data, and narrative included. No Load Tester installation required to read it.
Understanding the AI Analysis¶
The AI does not just restate numbers from the charts. It interprets them. A few examples of what that looks like in practice:
- Inflection points: The AI identifies the specific user count where performance shifted from acceptable to degraded, and explains what changed. (A much simplified version of this detection is sketched after this list.)
- Cross-metric correlation: Response time spikes correlated with server CPU saturation, or error rate increases that coincide with connection pool exhaustion.
- Distinguishing failure modes: Response times rising linearly with users tells a different story than everything running fine at 50 users and total collapse at 55. The AI explains which pattern your test shows and what it means.
- Honest findings: If the system can only handle 3 concurrent users, the report says so plainly.
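As an illustration of the first point, here is a heavily simplified version of inflection detection. The data and threshold are invented; the real analysis is written by the AI model:

```python
# Invented per-step results: (concurrent users, p90 response time in ms).
THRESHOLD_MS = 5000

steps = [(1, 900), (2, 1800), (3, 4200), (4, 7600), (5, 10100)]

inflection = next((users for users, p90 in steps if p90 > THRESHOLD_MS), None)
print(f"p90 first exceeded the threshold at {inflection} concurrent users")  # 4
```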
The quality of the analysis depends on the AI model you've configured. Claude Sonnet 4.6 and GPT-5.4 both produce strong, detailed reports. Haiku and GPT-4.1-mini generate faster but with less depth.
Model Selection Matters
If your reports feel thin or overly generic, try a more capable model. The difference between Haiku and Sonnet (or mini and GPT-5.4) is significant for analytical writing. See AI Provider Setup for model options.
Troubleshooting¶
No report generated and no progress bar activity: The AI provider is either not configured or the connection is failing. Check Preferences -> Web Performance -> Accounts -> AI Assistant and verify your API key and model selection. Use Test Connection to confirm the provider is reachable.
Report generation stalls partway through: The AI model may be slow or rate-limited by your provider. Give it time to complete. If it stays stuck for more than two minutes, check your internet connection and provider status.
Report is missing charts: Chart generation failed silently during the chart generation phase. Check the diagnostic log at ~/WebPerformance7/.metadata/.plugins/com.webperformanceinc.util/diagnostic.log for error details.
Analysis is thin or generic: The model you're using may not have enough capability for detailed analytical writing. Switch from Haiku or mini to Sonnet or GPT-5.4 and regenerate.
DOCX export fails: Check that you have write permissions on the target directory. If you're saving to a network drive or cloud-synced folder, try saving locally first.
Report log files: Generated reports and their intermediate data are saved to ~/WebPerformance7/reports/ for debugging purposes.
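When digging into a failed generation, a short script can pull error lines out of the diagnostic log. This sketch assumes one log entry per line, which may not match the actual log format:

```python
# Print lines mentioning errors from the diagnostic log. Assumes one entry
# per line; adjust the filter to match the real log format.
from pathlib import Path

log = Path.home() / "WebPerformance7/.metadata/.plugins/com.webperformanceinc.util/diagnostic.log"
for line in log.read_text(errors="replace").splitlines():
    if "error" in line.lower():
        print(line)
```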