
Embedded Analytics Dashboard

Load Tester v7.0 introduces the Embedded Analytics Dashboard. Instead of scrolling through static HTML reports, you get real-time metrics, AI-powered insights, and drill-down capabilities that help you find bottlenecks fast.

Performance problems are rarely obvious from high-level charts alone. You need to drill down: click on a spike to see which pages caused it, filter by user level to see when degradation started, ask the AI to correlate server metrics with response times. The dashboard makes this kind of investigation natural and fast.

Dashboard vs Legacy Reports

Dashboard: Interactive analysis, AI insights, real-time drill-down → Use for investigation and diagnosis

Legacy Reports: Static HTML snapshots → Use for archival, offline sharing, executive PDFs

Expect to spend most of your analysis time in the dashboard. See Legacy Reports for the cases where static reports are the better choice.


Opening the Dashboard

Three ways to access the dashboard:

During a Live Load Test

While your test is running, click the Dashboard button in the Load Test Results View (top-right corner). The dashboard updates in real time as metrics stream in.

Live Monitoring Workflow

  1. Start your load test
  2. Watch the 6 summary charts in the Results View for overall trends
  3. Click Dashboard to drill into specific metrics or ask the AI questions
  4. Continue monitoring; the dashboard stays synchronized with the live test

After a Load Test Completes

From the Navigator view:

  1. Double-click any load test result in the Navigator
  2. The dashboard opens automatically, showing the complete test data

From the Results View:

  1. Select the load test result
  2. Click the Dashboard button (top-right)

As a Persistent View

Make the dashboard always visible (docked in Eclipse):

  1. Window → Show View → Embedded Analytics Dashboard
  2. The dashboard docks as a view you can keep open alongside other views
  3. The view updates automatically when you select a different test result

Dashboard Interface Overview

The dashboard is organized into tabs for different analysis perspectives. Each tab provides interactive charts, filtering, and AI-powered insights.

Main Tabs

Summary Tab (Default):

  • Overall performance health matrix
  • 6 key metrics at a glance
  • AI-generated summary of test results
  • Quick links to problem areas

Metrics Tab:

  • Response time, throughput, bandwidth, errors
  • Interactive zoom and time range selection
  • Multiple analysis views (time-based, user-level)

Pages Tab:

  • Individual page/transaction performance
  • Drill-down from overview to specific pages
  • Side-by-side page comparison

Servers Tab:

  • CPU, memory, disk, network metrics
  • Correlation with response times
  • Bottleneck identification

Errors Tab:

  • Error timeline and distribution
  • Filter by error type, page, user level
  • AI-powered error analysis

User Levels Tab:

  • Performance at each concurrent user level
  • Capacity estimation and goal analysis
  • Degradation patterns across load levels

Exploring the Summary Tab

The Summary tab answers the first question everyone has: "How did the test go?"

Performance Health Matrix

A color-coded grid shows the health status of all metrics across the test:

  • Green: Metrics within acceptable thresholds
  • Yellow: Warning, approaching thresholds or showing degradation
  • Red: Critical, exceeded thresholds or performance failure
  • Gray: No data or not applicable

Click any cell in the matrix to jump to detailed charts for that metric and time period.

Ask the AI for Test Summary

Open the AI panel (right side of dashboard) and try:

  • "Summarize this test result. Did we meet our performance goals?"
  • "What's the overall health of this load test?"
  • "Compare this test to the baseline from last week"
  • "Highlight any red flags or concerning trends"

AI-Generated Test Summary

The dashboard automatically generates a natural language summary of your test:

Example summary:

"Load test completed successfully with 500 concurrent users over 30 minutes. Average response time: 1.2s (goal: 2.0s ✓). Peak throughput: 850 hits/sec. Zero errors detected. Server CPU peaked at 67%, memory at 45%. No bottlenecks identified. System has additional capacity."

Or when problems exist:

"Load test completed with performance degradation. Response times exceeded 2.0s goal at 400+ users (actual: 3.8s at 500 users). Error rate spiked to 8% at 450 users (connection timeouts). Server metrics show database CPU bottleneck (95% at 500 users). Recommend: optimize database queries, increase connection pool."

Quick Navigation

The Problem Areas section provides direct links to issues:

  • Pages that failed performance goals
  • Time periods with elevated errors
  • User levels where degradation occurred
  • Server resources that hit thresholds

Click any link to jump to detailed analysis in the appropriate tab.


Working with Interactive Metrics

The Metrics tab provides deep analysis of response times, throughput, and resource utilization.

Interactive Chart Features

Zoom and Pan:

  • Click and drag on a chart to zoom to a specific time range
  • Double-click to reset zoom
  • Scroll wheel to zoom in/out at mouse position

Time Range Selector:

  • Use the slider at the bottom to select a time window
  • Click preset buttons: "Last 5 min", "Peak load", "Entire test"

Metric Toggles:

  • Click legend items to show/hide metrics
  • Shift+Click to show only one metric
  • Ctrl+Click (Cmd+Click on Mac) to compare specific metrics

Time-Based vs User-Level Views

Toggle between two analysis perspectives:

Time-Based View (default):

  • X-axis: Time during test execution
  • Shows: How metrics changed over time
  • Use for: Understanding test progression, identifying when problems occurred

User-Level View:

  • X-axis: Concurrent user count (load level)
  • Shows: Performance at each load level
  • Use for: Capacity estimation, understanding performance vs load relationship

Switching Views

Click the "Time-Based" / "User-Level" toggle at the top of the Metrics tab.

Time-based shows: "Response times spiked at 10:45 AM"

User-level shows: "Response times exceeded 2s threshold at 350 users"

Both perspectives reveal different insights, so use both!
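
Under the hood, the two views are just different groupings of the same samples. A minimal sketch of the reshaping, using made-up sample records (the dashboard does this for you; the tuple layout here is purely illustrative):

```python
from collections import defaultdict

# Made-up samples: (elapsed_seconds, concurrent_users, response_time_s)
samples = [(10, 50, 0.8), (20, 50, 0.9), (30, 100, 1.1),
           (40, 100, 1.3), (50, 150, 1.6), (60, 150, 1.8)]

# Time-based view: response time in test order
for elapsed, _, rt in samples:
    print(f"t={elapsed}s: {rt}s")

# User-level view: group the same samples by concurrent user count
by_level = defaultdict(list)
for _, users, rt in samples:
    by_level[users].append(rt)
for users in sorted(by_level):
    rts = by_level[users]
    print(f"{users} users: avg {sum(rts) / len(rts):.2f}s")
```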

Drill-Down to Individual Pages

From the overall response time chart, click any data point to see which pages contributed to that metric:

  1. Click a spike in the response time chart
  2. The dashboard shows the Page Breakdown panel
  3. See response time breakdown by page for that time period
  4. Click any page to navigate to Pages tab with that page selected

AI-Powered Metric Interpretation

Ask the AI to analyze trends and patterns:

  • "Why did response times spike at 350 users?"
  • "Explain the degradation pattern between 200 and 400 users"
  • "What's causing the throughput plateau at 600 hits/sec?"
  • "Compare response times across different user levels"
  • "Show me the correlation between CPU and response time"

Analyzing Individual Pages

The Pages tab lets you drill down from overall metrics to specific page performance.

Page Performance Overview

The table view shows all pages with sortable columns:

  • Page Name: Transaction or page identifier
  • Average Response Time: Mean across all executions
  • 95th Percentile: 95% of requests completed within this time (see the sketch below)
  • Max Response Time: Slowest request observed
  • Throughput: Requests per second
  • Error Rate: Percentage of failed requests
  • Goal Status: ✓ Pass or ✗ Fail based on performance goals

Sort by any column to find:

  • Slowest pages (sort by 95th percentile descending)
  • Highest error rates (sort by error rate descending)
  • Failed goals (click "Show Failed Only" filter)
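
If you ever need to sanity-check a percentile column against exported raw timings, the nearest-rank computation is a few lines. A sketch with made-up samples (the dashboard's exact interpolation method may differ):

```python
def percentile(samples, pct):
    """Nearest-rank percentile: the value at or below which pct% of samples fall."""
    ordered = sorted(samples)
    rank = max(1, -(-len(ordered) * pct // 100))  # ceil(n * pct / 100)
    return ordered[rank - 1]

response_times = [0.8, 0.9, 1.1, 1.2, 1.3, 1.5, 2.1, 2.4, 3.0, 6.8]
print(percentile(response_times, 95))             # 6.8 — one outlier dominates p95
print(sum(response_times) / len(response_times))  # mean 2.11 hides that outlier
```

This is also why the table exposes the 95th percentile alongside the average: a handful of slow outliers barely moves the mean but dominates the tail.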

Page Detail Charts

Click any page to see detailed charts for that specific page:

  • Response time distribution: Histogram showing request spread
  • Response time trend: How this page performed over time
  • Component breakdown: DNS, connect, wait, receive times
  • Size metrics: Request/response sizes

Side-by-Side Page Comparison

Compare multiple pages to understand relative performance:

  1. Ctrl+Click (Cmd+Click on Mac) to select multiple pages
  2. The dashboard shows side-by-side charts
  3. Use to compare: Fast vs slow pages, critical vs non-critical paths

Ask AI to Identify Slow Pages

Let the AI find performance problems for you:

  • "Which pages are the slowest?"
  • "What pages failed their performance goals?"
  • "Compare the checkout page to the homepage performance"
  • "Explain why the search page is slower than other pages"
  • "Show me pages that degraded most significantly under load"

Slow Pages Report - Response Times by User Level

One of the most helpful dashboard features, the slow pages report shows response times for all pages at each user level, helping you identify exactly where to aim your optimization efforts.

How it works:

The dashboard charts response times by user level for every page in your test, showing you which pages:

  • Stay fast under load: Response times remain low as user count increases (well-optimized)
  • Degrade gradually: Response times increase linearly with load (resource contention)
  • Spike suddenly: Response times are fine until a threshold, then jump (capacity limit hit)

Why this matters:

If all pages are slow, you're looking at a shared bottleneck: a CMS recalculating every page, bandwidth saturation, a shared database lock, connection pool exhaustion. Look at infrastructure (network, load balancer, database), not individual page code. Example: all pages show 5-second response times at 200+ users, which usually points to bandwidth or the database.

If specific pages are slow, you need page-level optimization: inefficient queries, complex business logic, external API calls. Look at the code for those pages, the database queries they trigger, and any third-party integrations. Example: checkout page at 8 seconds while everything else is under 2 seconds. The checkout queries need work.

If some pages degrade while others hold steady, the slow pages are doing something resource-intensive (search, reporting, complex calculations). The question is why those pages consume more CPU, memory, or database resources than others. Example: search page degrades from 1 second to 6 seconds as load increases, homepage stays at 0.5 seconds. Optimize search indexing.
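
This triage can be made concrete as a rough rule of thumb over the per-page curves. A sketch with hypothetical data (the thresholds are arbitrary and the dashboard does not expose this API; it is only meant to make the three shapes concrete):

```python
def classify_page(times):
    """Roughly classify a page's response-time curve across increasing user levels.
    times: average response time (s) at each tested level, lowest load first."""
    growth = times[-1] / times[0]                 # overall degradation factor
    steps = [b / a for a, b in zip(times, times[1:])]
    if growth < 1.5:
        return "stays fast under load"            # flat line: well-optimized
    if max(steps) > 2.0:
        return "spikes suddenly"                  # capacity limit at a threshold
    return "degrades gradually"                   # resource contention

# Hypothetical per-page averages at 100/200/300/400 users:
pages = {
    "homepage": [0.5, 0.5, 0.6, 0.6],
    "search":   [1.0, 1.5, 2.2, 6.0],
    "checkout": [1.0, 1.8, 2.9, 4.1],
}
for name, times in pages.items():
    print(f"{name}: {classify_page(times)}")
```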

Using the Slow Pages Chart

View the chart:

  1. Go to Pages tab in dashboard
  2. Click "Response Times by User Level" view
  3. The chart shows all pages, with user level on the X-axis and response time on the Y-axis

Each line represents one page, and you can immediately see:

  • Flat lines (good): Pages that maintain performance under load
  • Gradually rising lines: Pages that slow down proportionally with load
  • Steep spikes: Pages that hit a capacity limit and degrade sharply

Prioritization strategy:

  1. Start with the steepest slopes: Pages that degrade most significantly have the biggest impact (see the sketch after this list)
  2. Focus on critical paths: Slow checkout/login pages hurt business more than slow admin pages
  3. Look for outliers: One page taking 10s while others take 2s → clear optimization target
  4. Check thresholds: Pages that spike at specific user levels indicate capacity limits
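
As a rough way to operationalize step 1, rank pages by how much response time they add per unit of load. A sketch, reusing the hypothetical data from the classification example above:

```python
def degradation_slope(levels, times):
    """Seconds of response time added per extra 100 users (endpoint estimate)."""
    return (times[-1] - times[0]) / (levels[-1] - levels[0]) * 100

pages = {  # hypothetical: user levels and average response times (s)
    "search":   ([100, 200, 300, 400], [1.0, 1.5, 2.2, 6.0]),
    "checkout": ([100, 200, 300, 400], [1.0, 1.8, 2.9, 4.1]),
    "homepage": ([100, 200, 300, 400], [0.5, 0.5, 0.6, 0.6]),
}
ranked = sorted(pages, key=lambda p: degradation_slope(*pages[p]), reverse=True)
print(ranked)  # ['search', 'checkout', 'homepage'] — optimize search first
```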

Real-World Example: Healthcare Portal (Login Bottleneck)

Scenario: Healthcare system tested to 5600 concurrent users

Login and Search Patient workflow:

  • At 2400 users: Total workflow 2:20 (140 seconds)
    • Login step: ~5 seconds
    • Search patient: Reasonable
  • At 5600 users: Total workflow 3:40 (220 seconds)
    • Login step: ~90 seconds (1:30) ← 18x degradation!
    • Search patient: Still reasonable

Critical insight:

"A full 1:30 of that [degradation] was due to the login page alone. If the login step was not degraded (e.g. averaging 5 seconds), then users would be proceeding through that workflow 38% faster."

Hidden effect: "At 5600 users under the tested conditions, these users are only producing the equivalent load of around 3500 users. Thus, the measurement of the other pages look better because this one step is reducing the hits/sec on the rest of the system."
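
Both quoted figures fall out of back-of-the-envelope arithmetic on the numbers above (a reconstruction, not the vendor's exact method):

```python
total = 220           # full workflow at 5600 users (3:40), in seconds
login_degraded = 90   # login step at 5600 users (1:30)
login_healthy = 5     # login step before degradation

healthy_total = total - login_degraded + login_healthy    # 135 s
print(f"{1 - healthy_total / total:.1%} faster")          # 38.6% -> "38% faster"

# Users stuck on login iterate the workflow more slowly, so 5600 of them
# hit the rest of the system like a smaller healthy population would:
print(f"~{5600 * healthy_total / total:.0f} equivalent users")  # ~3436 -> "around 3500"
```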

Lesson: One slow page can mask problems in other pages by reducing overall throughput. Server resources appear underused while users wait.

Real-World Example: Student Registration Rush (Extreme Spike)

Scenario: University course registration opens to students at a specific time

Add or Drop Classes page performance:

  • Before registration opens: 0.07 seconds (70ms) - excellent
  • When registration opens (11 minutes into the test): Jumped to 37 seconds - 528x slower!
  • Next minute: Dropped back below 1 second

During the spike:

  • Minimum duration: 24 seconds (even the fastest users waited)
  • Maximum duration: 43 seconds (the slowest users)
  • Average: 37 seconds

Why this pattern occurred:

All students hit "Add Classes" simultaneously when registration opened → sudden load spike overwhelmed database → created temporary bottleneck → cleared once initial wave processed.

Engineering insight: "The longer durations of the previous page helped to draw out the timespan over which the add/remove form appeared (and therefore the span over which it was completed.)" One slow page can actually help prevent downstream bottlenecks by spreading load over time.

Solution: Implement a queue system for the registration page, pre-cache class availability data, and add database read replicas for course search queries.

Real-World Example: Payment Checkout (Progressive Degradation)

Scenario: E-commerce checkout and payment processing tested to 2000 users

Final checkout step (credit card verification):

  • At 1200 users: The majority of users wait >4 seconds
  • At 1600+ users: Many users wait >10 seconds
  • At peak load: Some requests exceed 20 seconds

Pattern observed: "The final checkout step (accepting and verifying credit card details) is one of the slowest steps."

Business impact:

  • Payment page is most critical conversion point
  • 10-20 second delays cause cart abandonment
  • Lost revenue from timeouts and frustrated customers

Root cause investigation:

The system encountered a "period of degraded performance ~4 minutes into the test, as the system begins committing orders from the cart and taking payment information. During this time, response time of the entire system is degraded, and users experience much longer load times and system throughput is cut in half."

Solution: Payment processing was synchronous and blocking. It was changed to asynchronous queue-based processing: accept the order immediately, process the payment in the background, and confirm via email. Result: the checkout step dropped below 2 seconds at all user levels.
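
The described fix is the classic synchronous-to-asynchronous refactor: the request thread only records the order and enqueues it; a background worker makes the slow gateway call. A minimal Python sketch of the shape of that change (all function names and the in-process queue are stand-ins for whatever stack and message broker the real system uses):

```python
import queue
import threading
import time

payment_queue = queue.Queue()

def save_order(cart, status):               # stand-in for real persistence
    print(f"saved {cart} with status={status}")
    return hash(cart) % 10_000              # fake order id

def charge_card(order_id):                  # stand-in for the slow gateway call
    time.sleep(2)                           # the part that used to block checkout
    print(f"payment captured for order {order_id}")

def handle_checkout(cart):
    """Fast path: persist as pending and enqueue; no gateway call here."""
    order_id = save_order(cart, status="pending")
    payment_queue.put(order_id)
    return {"order_id": order_id, "status": "accepted"}  # returns immediately

def payment_worker():
    """Background path: drain the queue, then confirm via email."""
    while True:
        order_id = payment_queue.get()
        charge_card(order_id)
        print(f"confirmation emailed for order {order_id}")
        payment_queue.task_done()

threading.Thread(target=payment_worker, daemon=True).start()
print(handle_checkout("cart-42"))
payment_queue.join()   # demo only: wait for the background work to finish
```

In production the in-process queue would be a durable broker, so an accepted order survives a crash between acceptance and payment capture.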

AI Analysis of Slow Pages

Ask the AI to interpret the slow pages chart:

  • "Which pages should I optimize first?"
  • "Why does the search page spike at 350 users but other pages don't?"
  • "Compare page performance across all user levels"
  • "Are all pages slow or just specific ones?"
  • "What's the common bottleneck if all pages degrade together?"
  • "Show me pages with the steepest degradation curves"

Cross-reference with server metrics:

After identifying slow pages, switch to Servers tab to see if server resources correlate:

  • Slow page spike at 300 users + database CPU spike at 300 users = database bottleneck
  • All pages slow but servers idle = network/bandwidth or client-side issue
  • Gradual page degradation + gradual CPU increase = linear scaling issue

This combination of which pages are slow (Pages tab) + what server resources are taxed (Servers tab) gives you the complete picture for optimization.


Correlating Server Metrics

The Servers tab is where you connect application performance to server resources. This is the key to finding bottlenecks.

Server Metrics Display

For each monitored server, see:

  • CPU utilization: Overall and per-core breakdowns
  • Memory usage: Used, free, cached, buffers
  • Disk I/O: Reads/writes per second, queue length
  • Network: Bytes sent/received, connections

Each metric shows:

  • Real-time chart (during live test)
  • Historical data (completed test)
  • Threshold lines (configurable in monitoring setup)

Correlation with Response Times

The real power here: the response time chart is overlaid on server metrics so you can see cause-and-effect relationships directly.

Example correlation:

  • Response times: Flat at 1.2s up to 300 users, then spike to 4.5s
  • Database CPU: Flat at 45% up to 300 users, then spike to 98%
  • Conclusion: Database CPU is the bottleneck starting at 300 users

Visual correlation indicators:

  • Synchronized zoom: Zoom on the response time chart → server charts zoom to the same time range
  • Vertical alignment: Charts stack vertically for easy visual correlation
  • Timestamp markers: Click any point to see all metrics at that exact moment

Bottleneck Detection

The AI automatically analyzes correlation and identifies bottlenecks:

Automatic Bottleneck Detection

The dashboard highlights resources that correlate with performance degradation:

Example output:

Database CPU Bottleneck Detected

  • Response times increased 275% (1.2s → 4.5s) at 300+ users
  • Database CPU utilization hit 98% at same load level
  • Correlation coefficient: 0.94 (strong positive correlation)
  • Recommendation: Optimize queries, add database read replicas, increase connection pool
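
The correlation coefficient in that output is ordinary Pearson correlation between the two aligned series. If you want to verify one against exported samples, it takes a few lines (made-up numbers below):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

db_cpu = [45, 46, 48, 72, 98]               # % at 100..500 users (made up)
response_time = [1.2, 1.2, 1.3, 2.8, 4.5]   # seconds at the same levels
print(f"r = {pearson(db_cpu, response_time):.2f}")  # close to 1.0: strong correlation
```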

AI Server Correlation Analysis

Ask the AI to identify resource bottlenecks:

  • "What server resources correlate with slow response times?"
  • "Identify the bottleneck causing degradation at 350 users"
  • "Is this a CPU, memory, disk, or network bottleneck?"
  • "Compare server metrics across different load levels"
  • "Why are servers not busy despite slow response times?" (indicates client-side or network issues)

Understanding Error Patterns

The Errors tab provides interactive error analysis with filtering and AI explanations.

Error Timeline

Visual error distribution across the test:

  • X-axis: Time (or user level in user-level view)
  • Y-axis: Error count per time bucket
  • Color-coded bars: Different colors for different error types

Click any bar to see:

  • Specific error messages
  • Affected pages/transactions
  • Virtual user IDs that encountered errors
  • Server conditions at that time

Error Type Breakdown

Pie chart and table show error distribution:

  • HTTP errors: 404, 500, 503, timeouts
  • Application errors: Business logic failures, validation errors
  • Infrastructure errors: Connection refused, DNS failures

Filter by error type to focus investigation:

  1. Click error type in legend
  2. Timeline updates to show only that error type
  3. Related server metrics are highlighted

Error Drill-Down

For each error instance, see:

  • Full error message: Complete stack trace or HTTP response
  • Transaction context: What the virtual user was doing
  • Server state: CPU, memory, connections at time of error
  • Visual comparison: Expected vs actual page content

AI Error Analysis

Ask the AI to explain error patterns and root causes:

  • "What's causing the 404 errors at 400 users?"
  • "Explain the spike in connection timeouts at 450 users"
  • "Why do errors only occur at high user levels?"
  • "Correlate errors with server resource exhaustion"
  • "Are these configuration errors or capacity errors?"
  • "Recommend fixes for the observed error patterns"

User-Level Analysis and Capacity

The User Levels tab focuses on performance vs load: how does your application behave at different concurrent user counts?

User-Level Metrics Table

For each user level tested (50, 100, 150, 200, etc.):

User Level | Avg Response | 95th %ile | Throughput   | Errors | Server CPU | Goal Status
-----------|--------------|-----------|--------------|--------|------------|------------
50         | 0.8s         | 1.1s      | 245 hits/sec | 0%     | 35%        | ✓ Pass
100        | 1.0s         | 1.4s      | 480 hits/sec | 0%     | 52%        | ✓ Pass
150        | 1.2s         | 1.7s      | 690 hits/sec | 0%     | 68%        | ✓ Pass
200        | 1.8s         | 2.9s      | 820 hits/sec | 0.2%   | 84%        | ✗ Fail
250        | 3.2s         | 5.8s      | 780 hits/sec | 5%     | 96%        | ✗ Fail

Click any row to see detailed charts for that user level.

Performance Goal Analysis

A bar chart shows goal results at each user level:

  • Green bars: Number of pages/transactions that passed goals
  • Red bars: Number that failed goals
  • Blue bars: Not evaluated (insufficient data)

Patterns to look for:

  • All green: System has additional capacity (test with more users)
  • All red: System already overloaded (start with fewer users)
  • Mixed: You've found the capacity boundary; drill down to understand why

Capacity Estimation

The dashboard automatically estimates maximum supported user count:

Capacity Estimate Example

Estimated Capacity: 150-200 users

  • 150 users: All performance goals passed
  • 200 users: 3 of 12 pages failed goals (checkout, search, profile)
  • 250 users: 9 of 12 pages failed goals

Recommendation: System reliably supports 150 concurrent users. Capacity between 150-200 users depends on acceptable failure rate. Optimize slow pages to increase capacity.
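
The range logic is simple to replicate against your own goal results: the floor is the highest level where every page passed, the ceiling is the first level where failures appear. A sketch using the numbers from the example above (the dashboard computes this for you):

```python
def estimate_capacity(goal_results):
    """goal_results maps user level -> (pages_passed, pages_total)."""
    reliable = None   # highest level with zero goal failures
    ceiling = None    # first level where any page fails its goal
    for level in sorted(goal_results):
        passed, total = goal_results[level]
        if passed == total:
            reliable = level
        elif ceiling is None:
            ceiling = level
    return reliable, ceiling

results = {150: (12, 12), 200: (9, 12), 250: (3, 12)}
low, high = estimate_capacity(results)
print(f"Estimated capacity: {low}-{high} users")   # Estimated capacity: 150-200 users
```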

AI Capacity and Degradation Analysis

Ask the AI to interpret user-level patterns:

  • "What's the maximum user capacity?"
  • "Explain the degradation pattern from 200 to 300 users"
  • "Why did throughput plateau at 800 hits/sec?"
  • "Compare performance between 150 users and 250 users"
  • "Which pages degrade most significantly under load?"
  • "At what user level does the bottleneck appear?"

Using AI-Powered Insights

The AI panel is available on every tab. Click the AI icon (right side) to open it.

Effective AI Prompts

Start with high-level questions, then drill down:

High-Level Analysis:

  • "Summarize this test result"
  • "Did we meet our performance goals?"
  • "What are the main problems?"

Specific Investigation:

  • "What's causing slow response times at 350 users?"
  • "Identify bottlenecks"
  • "Explain the error spike at 10:45 AM"

Comparative Analysis:

  • "Compare this test to baseline from 2026-02-01"
  • "How did adding 2 web servers affect performance?"
  • "What changed between 200 and 300 user levels?"

Root Cause Analysis:

  • "Why is the checkout page slow?"
  • "What server resource is the bottleneck?"
  • "Correlate database CPU with response times"

Recommendations:

  • "Recommend optimizations to reach 500 user capacity"
  • "What should we fix first?"
  • "Estimate cost of scaling to 1000 users"

AI Analysis Features

The AI can:

  • Correlate metrics: Identify relationships between server resources and performance
  • Detect patterns: Recognize degradation trends, error spikes, capacity thresholds
  • Compare tests: Show differences between multiple test runs
  • Explain anomalies: Interpret unusual behavior or unexpected results
  • Generate reports: Create executive summaries or technical deep-dives
  • Recommend actions: Suggest specific optimizations based on observed bottlenecks

The AI cannot:

  • Modify your system configuration
  • Run new tests automatically
  • Access external systems (databases, logs, APM tools)

See AI Assistant Limitations for complete details.

Multi-Turn Conversations

Have a conversation with the AI to iteratively explore results:

  1. You: "What's the bottleneck?"
  2. AI: "Database CPU hit 95% at 300 users, correlating with response time spike to 4.2s"
  3. You: "Show me which queries are slowest"
  4. AI: "Top 3 slow queries: search index (2.8s avg), user profile join (1.9s avg), report aggregation (1.5s avg)"
  5. You: "Recommend optimizations"
  6. AI: "Add database index on search.keywords, denormalize user profile data, pre-compute report aggregations"

The AI maintains context throughout the conversation, so you don't need to repeat yourself.


Dashboard Workflow: Step-by-Step

Follow this workflow for efficient performance analysis:

1. Start with Summary Tab

  • Check performance health matrix (green/yellow/red)
  • Read AI-generated summary
  • Identify problem areas (if any)

2. Investigate Problems (If Any)

If errors exist:

  1. Go to Errors tab
  2. Review error timeline and distribution
  3. Ask AI: "What's causing these errors?"
  4. Drill down to specific error instances

If response times are slow:

  1. Go to Metrics tab
  2. Switch to User-Level view
  3. Identify when degradation started
  4. Ask AI: "Why did response times increase at X users?"

If specific pages are slow:

  1. Go to Pages tab
  2. Sort by 95th percentile (descending)
  3. Click slowest pages
  4. Ask AI: "Why is [page] slow?"

3. Correlate with Server Resources

  1. Go to Servers tab
  2. Look for server metrics that spike when response times increase
  3. Ask AI: "Identify bottlenecks"
  4. Review correlation analysis

4. Estimate Capacity

  1. Go to User Levels tab
  2. Review performance goal bar chart
  3. Find highest all-green user level (reliable capacity)
  4. Find lowest all-red user level (exceeded capacity)
  5. Capacity is between these values

5. Generate Report or Take Action

If test passed:

  • Document baseline metrics
  • Export summary for stakeholders
  • Plan next test (edge cases, higher load)

If test revealed problems:

  • Prioritize fixes based on AI recommendations
  • Focus on bottlenecks with highest impact
  • Re-test after optimizations

Tips for Effective Dashboard Use

Performance Analysis Best Practices

Compare tests over time:

  • Keep test configurations consistent
  • Document changes between tests
  • Use AI to compare "before" vs "after" results

Use both time-based and user-level views:

  • Time-based: Understand test execution and transient issues
  • User-level: Understand capacity and load-dependent behavior

Don't ignore warnings (yellow cells):

  • Yellow indicates approaching thresholds
  • Investigate before they become critical (red)
  • Often easier to optimize warning-level issues

Ask the AI for second opinions:

  • Even when you think you understand the problem, ask the AI to confirm
  • The AI might spot correlations you missed
  • Use AI to validate hypotheses

Common Workflow Patterns

Capacity planning:

  1. User Levels tab → Find capacity estimate
  2. Servers tab → Identify limiting resource
  3. Ask AI: "Recommend scaling to reach [target] users"

Troubleshooting errors:

  1. Errors tab → Identify error types and timing
  2. Pages tab → Find pages with highest error rates
  3. Ask AI: "Explain these errors and recommend fixes"

Performance regression investigation:

  1. Summary tab → Open test comparison
  2. Metrics tab → Overlay baseline and current test
  3. Ask AI: "What changed between these tests?"

Bottleneck identification:

  1. Servers tab → Look for resource saturation
  2. Metrics tab → Correlate with response times
  3. Ask AI: "What's the bottleneck and how do I fix it?"