AI Assistant¶
Load Tester's AI assistant is the difference between staring at a wall of correlation errors and getting a one-line answer that tells you which page is missing which extractor. It works through three interfaces: an embedded panel inside the application, an MCP server for external tools like Claude Desktop and Claude Code, and an automated report generator that produces DOCX deliverables from completed load tests.
The AI is most useful at three specific points in the workflow: configuring a test case (correlation errors, authentication, ASM rule selection), monitoring a running load test (real-time degradation alerts, error spike interpretation), and analyzing results (bottleneck identification, percentile comparisons, narrative reports). It runs on whichever LLM provider you choose: Anthropic Claude, AWS Bedrock, or OpenAI. You bring the API key.
In This Section¶
- Overview: What the AI can do across the three workflow stages, and what it can't.
- Setup: Configure your AI provider and API key. One-time setup, takes a few minutes.
- Getting Started: Open the AI panel and ask your first question. Worked example using a real configuration problem.
- MCP Server: Connect external AI tools to Load Tester's 75+ MCP tools so you can drive configuration and analysis from Claude Desktop or Claude Code.
- AI-Generated Reports: Generate professional performance analysis documents with charts, narrative, and DOCX export.
- For Configuration: AI help with ASM, correlation, authentication, and replay debugging. This is the densest page of AI guidance, because configuration is where users get stuck most often.
- For Monitoring: Real-time AI guidance during load tests.
- For Analysis: Post-test performance investigation and bottleneck identification.
- Limitations: What the AI cannot do, where it tends to be wrong, and how to use it safely.
If you haven't used the AI before, start with Setup, then Getting Started. If you've already used it and want the deep workflow guidance, jump straight to For Configuration.
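As a rough sketch of what the MCP Server page covers: external clients such as Claude Desktop register MCP servers in a JSON configuration file (`claude_desktop_config.json`) under an `mcpServers` key. The server name, command, and arguments below are hypothetical placeholders, not Load Tester's actual values; the MCP Server page documents the real command and options.

```json
{
  "mcpServers": {
    "load-tester": {
      "command": "load-tester-mcp",
      "args": ["--port", "8931"]
    }
  }
}
```

Once registered, the client discovers Load Tester's 75+ MCP tools automatically and can invoke them during a conversation, which is what makes driving configuration and analysis from Claude Desktop or Claude Code possible.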