AI Assistant Overview¶
Load Tester includes an AI assistant that helps you configure test cases, monitor load tests in real time, analyze performance results, and generate professional reports. Think of it as an expert load testing engineer looking over your shoulder, one who actually knows your test case and can give specific answers instead of generic advice.
The AI works through three interfaces:
- Embedded panel: Built into Load Tester's UI, context-aware, available from any view.
- MCP server: Exposes 75+ tools to external AI clients such as Claude Desktop, Claude Code, Cursor, or any other MCP-compatible client. Starts automatically when Load Tester launches. See MCP Server.
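For external clients, connecting typically means adding an entry to the client's MCP configuration file. The snippet below is a hypothetical Claude Desktop example; the server name, command, and `PORT` are placeholders, not Load Tester's actual values - see MCP Server for the real connection details:

```json
{
  "mcpServers": {
    "load-tester": {
      "command": "npx",
      "args": ["mcp-remote", "http://localhost:PORT/mcp"]
    }
  }
}
```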
- Report generation: The Analytics Dashboard's Report tab produces real performance analysis documents, not template fill-ins, with executive summaries, charts, and DOCX export. See AI-Generated Reports.
What the AI Assistant Does¶
The AI assistant understands three distinct stages of load testing, each with different challenges and goals:
Stage 1: Configure & Debug Test Cases¶
What it helps with:
- Platform detection - Identifies your application stack (React, .NET, Salesforce, etc.)
- ASM analysis - Explains which dynamic values were detected and why
- Correlation troubleshooting - Debugs 401, 403, 404 errors from missing session tokens or CSRF fields
- Field configuration - Suggests which fields need datasets vs dynamic extraction
- Replay debugging - Diagnoses why replays fail and recommends fixes
Goal: Get your test case to replay successfully with a single virtual user before you throw load at it.
When to use it: After recording, when ASM has run, and especially when replay fails with errors you don't understand.
Example questions you can ask:
- "Why is my replay failing with 401 errors?"
- "Which fields should I configure for this OAuth application?"
- "Why isn't the session cookie being extracted?"
- "Help me debug this CSRF token correlation issue"
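To make "correlation" concrete: a dynamic value such as a CSRF token must be extracted from one response and injected into later requests. The sketch below is illustrative only - Load Tester configures extractors in the UI, and the HTML and regex here are made-up examples of the concept, not the product's implementation:

```python
import re

# A server-rendered page containing a per-session CSRF token (example markup)
html = '<input type="hidden" name="csrf_token" value="a1b2c3d4">'

# An extractor pulls the dynamic value out of the response...
match = re.search(r'name="csrf_token"\s+value="([^"]+)"', html)
token = match.group(1) if match else None

# ...so it can be replayed in subsequent requests. When this step is
# missing, the server typically rejects the replay with 401/403.
```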
Stage 2: Run Load Test & Monitor¶
What it helps with:
- Real-time monitoring - Watches metrics as your load test runs
- Error detection - Identifies when errors spike during ramp-up
- Performance alerts - Notifies you when response times degrade significantly
- Load-specific issues - Diagnoses connection refused, timeouts, rate limiting
- Stage transitions - Determines whether issues are config problems (→ Stage 1) or performance problems (→ Stage 3)
Goal: Ensure your load test runs cleanly and detect problems as they happen.
When to use it: During active load tests, when errors appear, or when performance degrades under load.
Example questions you can ask:
- "Why are errors increasing as users ramp up?"
- "Is this a configuration issue or a performance problem?"
- "What's causing these connection timeouts?"
- "Why did response times suddenly spike at 50 users?"
Key insight: Stage 2 bridges configuration and performance. Problems that surface under load could be either misconfiguration masquerading as a performance issue, or genuine performance degradation. The AI helps you tell the difference, which saves you from optimizing code when the real problem is a missing session cookie.
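The distinction can be sketched as a simple heuristic: failures that occur even for a single user are configuration problems, while failures that only emerge as load rises point to performance limits. This is an illustrative simplification, not Load Tester's actual triage logic:

```python
# Simplified triage sketch (hypothetical, not the product's algorithm):
# does the problem exist at 1 user, or only appear under load?
def triage(errors_at_1_user: bool, errors_under_load: bool, latency_trend: str) -> str:
    if errors_at_1_user:
        # Fails even with no load: correlation, auth, or field config issue
        return "stage-1: configuration (fails even for a single user)"
    if errors_under_load or latency_trend == "rising":
        # Clean single-user replay, degrades as users ramp: genuine bottleneck
        return "stage-3: performance (clean at 1 user, degrades under load)"
    return "healthy"
```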
Stage 3: Performance Analysis¶
What it helps with:
- Metrics trends - Analyzes response time patterns over the test duration
- Degradation analysis - Identifies when and why performance deteriorated
- User-level analysis - Compares performance at 10 vs 50 vs 100 users
- Page-level analysis - Finds which pages slow down under load
- Root cause identification - Connects symptoms to likely causes
- Capacity planning - Estimates maximum sustainable load
Goal: Understand WHY performance degraded and where bottlenecks exist.
When to use it: After load tests complete, when analyzing results to improve performance.
Example questions you can ask:
- "Which pages are bottlenecks in this load test?"
- "Why did performance degrade after 75 users?"
- "Compare response times at different user levels"
- "What's the root cause of this performance issue?"
- "How many users can my system handle?"
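The capacity-planning question boils down to finding the highest user level at which latency still meets your target. Here is a minimal sketch of that idea with made-up sample data; the AI's actual analysis is richer, but the principle is the same:

```python
# Made-up (users, p95 latency in ms) samples from a hypothetical ramp-up test
samples = [(10, 120), (25, 140), (50, 210), (75, 480), (100, 1900)]

def max_sustainable(samples: list[tuple[int, int]], sla_ms: int = 500) -> int:
    """Highest user level whose p95 latency still meets the SLA target."""
    ok = [users for users, p95 in samples if p95 <= sla_ms]
    return max(ok) if ok else 0

max_sustainable(samples)  # → 75 with this made-up data and a 500 ms target
```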
How the AI Works¶
The AI assistant uses MCP (Model Context Protocol) tools that run directly in the Load Tester desktop application. These tools give the AI deep access to your test case data, load test metrics, and configuration, all without uploading anything. The AI model itself runs through whichever provider you configure (Anthropic, AWS Bedrock, or OpenAI). See AI Setup for configuration.
What this means for you:
- Your test data stays local. MCP tools read data already in Load Tester's memory. Test cases and results are not uploaded anywhere.
- API calls go to your chosen provider. The AI model processes your questions through the configured API (Anthropic, Bedrock, or OpenAI). Prompts and tool results are sent to the provider for inference.
- 75+ specialized tools. The AI has deep access to test case structure, replay results, load test metrics, server monitoring data, and more. See MCP Server for the full tool list.
- Works where you work. The AI is embedded in the views you already use, and also accessible from external AI tools through the MCP server.
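The data flow described above can be sketched in a few lines. All function names here are hypothetical stand-ins, not Load Tester's API - the point is which steps happen locally and what actually crosses the network:

```python
# Conceptual sketch of the local-tools / remote-model split (hypothetical names)

def fetch_local_metrics(test_case_id: str) -> dict:
    # An MCP tool reads data already in the app's memory; nothing is uploaded.
    return {"p95_ms": 840, "error_rate": 0.02, "users": 50}

def build_prompt(question: str, tool_result: dict) -> str:
    # Only the question and the tool's summarized output leave the machine.
    return f"{question}\n\nTool data: {tool_result}"

def ask(question: str, test_case_id: str, call_provider) -> str:
    metrics = fetch_local_metrics(test_case_id)  # local read
    prompt = build_prompt(question, metrics)     # assembled locally
    return call_provider(prompt)                 # only this crosses the network

# Example with a stub provider standing in for Anthropic/Bedrock/OpenAI:
reply = ask("Why did p95 spike?", "tc-1", call_provider=lambda p: f"echo: {p[:20]}")
```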
What the AI Can See¶
The AI has access to:
- Test case structure - Pages, transactions, response codes, timings
- Configuration - Field datasources, extractors, validation rules
- Replay results - Errors, extracted values, response content
- Load test metrics - Response times, throughput, error rates, user levels
- Real-time data - Current state during active load tests
What it cannot see:
- ❌ Your source code or application internals
- ❌ Other users' test cases or results
- ❌ Data from previous sessions (each conversation starts fresh)
When to Use the AI vs Manual Documentation¶
Use the AI assistant when:
- ✅ You're stuck and don't know what's wrong
- ✅ You need help understanding specific errors in YOUR test case
- ✅ You want to compare options ("Should I use a dataset or custom value here?")
- ✅ You need real-time guidance during load testing
- ✅ You want analysis of YOUR specific results
Use the manual when:
- 📖 You're learning a new concept from scratch
- 📖 You need step-by-step tutorials
- 📖 You want reference information (settings, view descriptions)
- 📖 You're planning your approach before starting
- 📖 You need comprehensive background on a topic
The best approach: Learn fundamentals from the manual, then let the AI help when you're staring at a specific error in your specific test case.
AI Capabilities and Limitations¶
What the AI Does Well¶
Context-specific troubleshooting: Examines YOUR test case and YOUR errors to diagnose problems.
Pattern recognition: Identifies common issues like missing correlation, authentication failures, performance bottlenecks.
Comparative analysis: "Compare this page's performance at 10 users vs 100 users" - direct answers from your data.
Workflow guidance: Knows where you are in the process and what typically comes next.
Multi-stage awareness: Understands when config issues surface during load testing vs genuine performance problems.
What the AI Cannot Do¶
Cannot execute actions. The AI suggests fixes but cannot modify your test case, run replays, or start load tests. You control all actions.
Cannot access external systems. It does not connect to your application server, database, or infrastructure. It only sees what Load Tester captures.
Cannot predict application behavior. It can analyze patterns in captured data but cannot know how your application works internally.
Cannot make architectural decisions for you. It can explain trade-offs and implications, but you decide the approach.
Cannot guarantee solutions. AI suggestions are based on common patterns and may need adjustment for your specific application.
When the AI Is Uncertain¶
The AI will tell you when it's uncertain or doesn't have enough information:
- "I don't see enough context to diagnose this. Can you run a replay and share the errors?"
- "This could be either X or Y. Let's check [specific thing] to narrow it down."
- "I'm not familiar with this specific platform. The manual has a section on [platform] that may help."
Treat AI suggestions as expert guidance, not gospel. Verify that recommendations make sense for your application before acting on them.
Where AI Guidance Appears¶
The AI assistant is embedded throughout the manual and the application:
In the Manual¶
- Configuration section - Help with ASM, correlation, authentication, datasets
- Replaying section - Debugging failed replays, understanding errors
- Load Testing section - Monitoring, real-time alerts, load-specific issues
- Analysis section - Performance investigation, bottleneck identification
Look for the "Ask the AI" callouts in the manual, which give specific prompt suggestions for common questions in each section.
In the Application¶
- AI panel - Accessible from any view, context-aware
- Error views - Quick "Diagnose with AI" links
- Results analysis - "Explain this metric" and "Find bottlenecks" actions
- Load test monitoring - Real-time alerts and suggestions
Getting Started with the AI¶
Ready to try the AI assistant?
- Set up your AI provider - Configure Anthropic, AWS Bedrock, or OpenAI
- First interaction - Open the AI panel and ask your first question
- Connect external tools - Use Claude Desktop, Claude Code, or other MCP clients
- Generate a report - Create professional performance analysis documents
- AI for configuration - Use AI to debug test case setup
- AI for monitoring - Get real-time guidance during load tests
- AI for analysis - Investigate performance results
Or just start using it the next time something breaks. That is, honestly, how most people discover it.
Privacy and Data¶
The AI assistant's MCP tools run locally and read data already in Load Tester's memory. Test cases, credentials, and results are not uploaded to external servers.
When you ask a question, the prompts and tool results are sent to your configured provider (Anthropic, Bedrock, or OpenAI) for inference. If your organization cares about data retention, review your provider's policies. The short version: your test data stays on your machine, but your questions travel to the AI model.
- ✅ Test data stays in Load Tester's memory (MCP tools are local)
- ✅ API keys are encrypted with AES-256-GCM on your machine
- ✅ MCP server is localhost-only (not accessible from the network)
- ✅ Each conversation is private to your session
- ✅ Load Tester does not track your AI usage or collect conversation histories
Next Steps¶
- Set up your AI provider - Configure API keys and model selection
- Ask your first question - Get started with the embedded AI panel
- Connect external AI tools via MCP - Claude Desktop, Claude Code, and more
- Generate performance reports - Professional analysis documents with DOCX export
- AI for configuration - Debug test case setup
- AI for monitoring - Real-time load test guidance
- AI for analysis - Post-test performance investigation
- Limitations and safety - What AI can and cannot do